Sunday, March 11, 2012

Some computing scenarios would benefit from a second monitor, even a small one. A good example is an HTPC, where info like which music, radio station, or TV channel is playing is often shown on a little 2-line LCD device.

Photo frames offer so much more space - and much higher resolution than even the biggest LCD devices - to display information. If only one could write to them.

The Samsung SPF-87H Digital Photo Frame is such a device. It can be connected to a computer and switched into a so-called Mini-Monitor mode, which allows the computer to write to the frame. Samsung offers a program called 'Frame Manager' for Windows, but nothing for Linux. Some attempts at Linux support have been made already, like here, here, and a discussion here.

I am now offering a Python script which can lock the frame into Mini-Monitor mode and send pictures to it. The script is very simple and has essentially no error checking, but it is heavily commented. It provides only the basic functionality, e.g. pictures must be pre-sized to what the frame can handle (800x480 pixels, width x height). To use the script, copy the content of the post pyframe_basic into a file pyframe_basic and make it executable (chmod a+x pyframe_basic).

An advanced version - not shown yet - will use the Python Imaging Library (PIL) to process pictures of any size and type to fit the frame's requirements, and could prepare pictures with textual information.

The photo frame unfortunately does not allow auto-connection. Go through these steps for manual connection:

UPDATE 3: a program to send screenshots to the photo frame at video speeds, entirely from within Python
UPDATE 4: a program which sends screenshots upon receiving a trigger signal
UPDATE 5: a video, recorded from a photoframe, showing a video playing on the photoframe

Using the Python programs from this site, I made a demonstration video, recorded with a digital camera from a Samsung SPF-87H Digital Photo Frame. The quality of the clip shown here on the blog is awful in color and resolution, while on the photoframe itself both are excellent. But at least the clip shows that the video plays smoothly through all scenes.

The setup used a virtual frame buffer, so it can also be run on a headless client. In a terminal, give commands along these lines (display number and movie path are examples):

Xvfb :99 -screen 0 800x480x24 &
DISPLAY=:99 ./videoframe &
DISPLAY=:99 mplayer -fs movie.avi

This creates a virtual frame buffer X server as display :99 with a screen resolution matching the photoframe (800x480; change this to match your frame if needed, here and also in the script), starts the Python videoframe script (see below) on it, and uses mplayer to play a movie in full-screen mode. This was then recorded with a digital camera from the photoframe, and uploaded to this post.

The videoframe script records some frame and transfer rates. Here is an excerpt from the final scenes (each line is an average over 50 frames, i.e. 2-3 seconds):

Depending on the complexity of the picture to be jpeg-coded, the observed frame rate varies from 11 to 27 fps. In this setup the CPU is a six-year-old Intel Core2 T7200 2.0GHz, running at ~50% load (its cpu-mark is 1150; for reference, today's Intel Core i5-2500 has a cpu-mark of 6750). As noticed earlier, the bottleneck appears to be the frame itself. The speed of movements within the movie scenes does NOT affect the transfer rate, as always a single screenshot is taken and processed. However, fine structures (grass, hair, fur, ...) which make for big jpeg files do slow the frame rate down.
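The bookkeeping behind those averages can be sketched as follows (a sketch only: the actual videoframe script uses plain counter variables, not a class, and the names here are mine):

```python
import time

class RateMeter(object):
    """Accumulate frame and byte counts; report averages every
    `window` frames, as the videoframe log excerpt above does."""

    def __init__(self, window=50):
        self.window = window
        self.frames = 0
        self.mbyte = 0.0
        self.start = time.time()

    def tick(self, nbytes):
        """Call once per transferred picture; returns (fps, MB/s)
        every `window` frames, otherwise None."""
        self.frames += 1
        self.mbyte += nbytes / (1024.0 * 1024.0)
        if self.frames % self.window == 0:
            elapsed = time.time() - self.start
            return (self.frames / elapsed, self.mbyte / elapsed)
        return None
```

In the main loop one would call `meter.tick(len(pic))` right after each transfer and print the tuple whenever it is not None.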

The Python code for videoframe is shown below the line. See the code in other posts below for more detailed comments on parts of the script.

Update:
the command:

pmap.save(buffer, 'jpeg')

is the same as:

pmap.save(buffer, 'jpeg', quality = -1)

which sets the quality to its default setting of 75. Quality ranges from 0 (very poor) to 100 (very good). The save command itself is not faster at poorer settings, but the resulting picture is smaller, so the transfer speed over the USB bus increases, allowing higher frame rates! A quality setting of 60 is usually good enough, certainly for video.
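The quality/size trade-off is easy to verify. The snippet below uses PIL instead of the Qt `pmap.save()` call, purely because it runs without an X server; the effect on jpeg size is the same. Random noise stands in for a "complex" picture (fine structures like grass or fur), since it compresses worst:

```python
import io
import os
from PIL import Image

# a stand-in for a complex 800x480 screenshot: random noise
# (noise is the worst case for jpeg compression)
noise = os.urandom(800 * 480 * 3)
im = Image.frombytes("RGB", (800, 480), noise)

def jpeg_size(quality):
    """Encode the test picture as jpeg at the given quality
    and return the resulting size in bytes."""
    buf = io.BytesIO()
    im.save(buf, "JPEG", quality=quality)
    return len(buf.getvalue())
```

Comparing `jpeg_size(60)` with `jpeg_size(95)` shows how much USB transfer the lower setting saves per frame.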

# Program: videoframe
#
# This videoframe program plays videos on the 'Samsung SPF-87H Digital Photo Frame'
# by taking rapid snapshots from a video playing on a screen and transfers them as jpeg
# pictures to the photo frame
#
# It is an application of the sshot2frame program found on the same
# website as this program
# Read that post to understand details not commented here
# Copyright (C) ullix

# Enter into a loop to repeatedly take screenshots and send them to the frame
start = time.time()
frames = 0
mbyte = 0

while True:
    # take a screenshot and store into a pixmap
    # the screen was set to 800x480, so it already matches the photoframe
    # dimensions, and no further processing is necessary
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())

    # create a buffer object and store the pixmap in it as if it were a jpeg file
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)

    pmap.save(buffer, 'jpeg')

    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()

When using a photoframe as the display of a (headless) PC, one might want to update the display at regular intervals, e.g. once per minute to update a clock, but also upon other events, like pressing a key on a keyboard or remote control.

This can be achieved by making the screenshot program listen for UNIX signals. These signals must not be confused with the signals emitted by GUIs on events like clicking a button or checking a checkbox. Probably the best known UNIX signal is SIGINT, which is sent to a program when CTRL-C is pressed, and usually ends the program.

For user-defined purposes the signals SIGUSR1 and SIGUSR2 (numerical codes 10 and 12, respectively) are reserved. In the shell these signals can be sent by

kill -SIGUSR1 pid-of-program-to-receive-signal

The tsshot2frame program below listens for this signal and, upon receiving it, takes a screenshot and sends it to the frame.

The triggershot program below is just a demo showing how a program can create and send such a signal; in this case it does so when the key 't' is pressed. Obviously, other events can be used as well: key presses on a remote control, alarm signals from sensors, etc.
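Both sides of the mechanism fit in a few lines of Python. The sketch below is self-contained and not the actual tsshot2frame/triggershot code (the handler name and the `triggered` list are mine); it registers a SIGUSR1 handler and then signals its own process, standing in for a second program doing `kill -SIGUSR1 <pid>`:

```python
import os
import signal
import time

triggered = []

def on_sigusr1(signum, frame):
    # in tsshot2frame this is where a screenshot would be triggered;
    # here we only record that the signal arrived
    triggered.append(signum)

# the registration tsshot2frame needs: listen for SIGUSR1
signal.signal(signal.SIGUSR1, on_sigusr1)

# simulate `kill -SIGUSR1 <pid>` sent by another process (e.g. triggershot)
os.kill(os.getpid(), signal.SIGUSR1)

# a caught signal also cuts any time.sleep() short, which is exactly
# why the main loop of tsshot2frame reacts immediately
time.sleep(0.2)
```

After this runs, `triggered` holds the received signal number, confirming the handler fired.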
______________________________________________________________________________

#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: tsshot2frame
# based on sshot2frame, but allows to be triggered by a SIGUSR1 signal
#
# This triggered-screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the 'Samsung SPF-87H Digital Photo Frame'
#
# The screenshots are taken at regular intervals, but can also be triggered randomly
# by a SIGUSR1 signal, to which this program is listening.
#
# It is an extension of the sshot2frame program found here:
# http://pyframe.blogspot.com
# Read other posts to understand details not commented here
# Copyright (C) ullix

# take a screenshot and store into a pixmap
#pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())

# if you want a screenshot from only a subset of your desktop, you can define it like this
pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), x=0, y=600, width=1200, height=720)

# next code line is needed only when the screenshot does not yet have the proper dimensions for the frame
# note that distortion will result when aspect ratios of desktop and frame are different!
# if not needed then inactivate to save cpu cycles
pmap = pmap.scaled(800, 480)

# create a buffer object and store the pixmap in it as if it were a jpeg file
buffer = QtCore.QBuffer()
buffer.open(QtCore.QIODevice.WriteOnly)
pmap.save(buffer, 'jpeg')
buffer.close()

# now get the just saved "file" data into a string, which we will send to the frame
pic = buffer.data().__str__()

# Receiving a signal will interrupt the time.sleep() in the main while loop,
# which will result in a shot being taken immediately. Therefore a separate
# takeshot() is not needed here; it would result in two successive shots
# being taken
#takeshot()

# Must have a QApplication running to use the other pyqt4 functions
app = QtGui.QApplication(sys.argv)

# Take screenshots in regular intervals and send them to the frame;
# screenshots triggered by SIGNALS will come in addition
while True:
    print time.time(),
    takeshot()
    time.sleep(60)
    """ Remember that receiving a SIGNAL will interrupt time.sleep !
        From the python documentation:
        time.sleep(secs)
            Suspend execution for the given number of seconds. The argument may be
            a floating point number to indicate a more precise sleep time. The
            actual suspension time may be less than that requested because any
            caught signal will terminate the sleep() following execution of that
            signal's catching routine. Also, the suspension time may be longer
            than requested by an arbitrary amount because of the scheduling of
            other activity in the system.
    """

Following is the triggershot program:
_______________________________________________________________________________

Monday, March 5, 2012

# Program: sshot2frame
#
# This screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the 'Samsung SPF-87H Digital Photo Frame'
#
# This can be done at frame rates of 20+ fps so that it is even possible
# to watch video on the frame, when video is playing on the desktop!
# (tested with mythtv)
#
# It is an extension of the pyframe_basic program found here:
# http://pyframe.blogspot.com/2011/12/pyframebasic-program_15.html
# Read that post to understand details not commented here
# Copyright (C) ullix

# Must have a QApplication running to use the other pyqt4 functions
app = QtGui.QApplication(sys.argv)

# Enter into a loop to repeatedly take screenshots and send them to the frame
start = time.time()
frames = 0
while True:
    # take a screenshot and store into a pixmap
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())

    # if you want a screenshot from only a subset of your desktop, you can define it like this
    #pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), x=0, y=600, width=800, height=480)

    # next line is needed only when the screenshot does not yet have the proper dimensions for the frame
    # note that distortion will result when aspect ratios of desktop and frame are different!
    # if not needed then inactivate to save cpu cycles
    pmap = pmap.scaled(800, 480)

    # if desired, save the pixmap into a jpg file on disk. Not required here
    #pmap.save(filename, 'jpeg')

    # create a buffer object and store the pixmap in it as if it were a jpeg file
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')

    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()

    ######################
    # the code within ###### is needed only to create a PIL Image object to be shown
    # by image.show(), e.g. for debugging purposes when no frame is present

    #picfile = StringIO.StringIO(pic)   # StringIO creates a file in memory
    #im1 = Image.open(picfile)
    #im = im1.resize((800,480), Image.ANTIALIAS)
    #   # resizing not needed when screenshot already has the right size
    #   # note that distortion will result when aspect ratios of desktop
    #   # and frame are different!
    #picfile.close()
    ######################

Thursday, January 12, 2012

A picture can only be written to the photo frame when it is in Mini Monitor mode (and it is INITialized). However, when it is found in Mass Storage mode, it can be switched to Mini Monitor mode by a script. The code required is:

dev.ctrl_transfer(0x00|0x80, 0x06, 0xfe, 0xfe, 0xfe )

Settling on the USB bus takes <0.5sec, but give it some extra time.

A stripped-down version of a program which takes care of the switching and initialization, and can be fed with pictures of any size and (almost any) type, follows. Note the use of the Image module for image manipulation, and of StringIO to avoid writing and reading temp files to/from disk:

def frame_init(dev):
    """Init device so it stays in Mini Monitor mode"""
    # this is the minimum required to keep the frame in Mini Monitor mode!!!
    dev.ctrl_transfer(0xc0, 4)

def frame_switch(dev):
    """Switch device from Mass Storage to Mini Monitor"""
    dev.ctrl_transfer(0x00|0x80, 0x06, 0xfe, 0xfe, 0xfe)
    # settling of the bus and frame takes about 0.42 sec
    # give it some extra time, but then still make sure it has settled
    time.sleep(1)

Wednesday, January 11, 2012

I was wondering about the transfer speed of pictures to the frame, given that Python is an interpreted language. It turned out to be much faster than expected:

Two very different pictures were loaded by the script, prepared, stored in memory, and alternately transferred to the frame. I used one pair of pictures which were simple and small (<<16384 bytes), and another with rather complex and hence larger (ca. 100 kB after resizing to 800x480) pictures. All are attached to this post. The transfer of the pics resulted in a CPU load of only about 1-2%. Here are the measured data:

Since a minimum chunk of 16384 bytes per picture needs to be transferred irrespective of the picture size, the small pictures do not benefit much from their small size with respect to transfer speed. Generally, 20+ pictures/sec should be achievable.
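In practice this means the jpeg data is rounded up to the next 16384-byte boundary before transfer. A sketch of such a helper (the function name and the zero-byte padding are my assumptions; see the pyframe_basic post for the exact protocol details):

```python
def pad_to_chunk(pic, chunk=16384):
    """Round the picture data up to a whole number of usb chunks.
    Padding with zero bytes is an assumption made for illustration."""
    rest = len(pic) % chunk
    if rest:
        pic += b'\x00' * (chunk - rest)
    return pic
```

A 2 kB picture thus still causes a full 16384-byte transfer, which is why the small pictures gain so little.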

Since the USB bus can transfer at least 10x as much, and the CPU even more, I conclude that the transfer speed is limited by the frame.
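The arithmetic behind that conclusion is simple; the practical USB 2.0 throughput below is my assumption, the picture numbers are from this post:

```python
# back-of-the-envelope check of the bandwidth argument
pics_per_sec = 20                       # achievable rate (from this post)
pic_size = 100 * 1024                   # ~100 kB per complex 800x480 jpeg
frame_bandwidth = pics_per_sec * pic_size   # ~2 MB/s actually sent to the frame

usb2_bandwidth = 30 * 1024 * 1024       # ~30 MB/s practical USB 2.0 (assumption)
headroom = usb2_bandwidth / float(frame_bandwidth)
```

With more than a factor of 10 headroom on the bus, the limit must sit in the frame.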

Next I took a video clip and converted each frame into a JPG picture using ffmpeg, which I then tried to transfer to the frame as a fast sequence. Surprisingly, I could not transfer even a single picture out of several hundred, although each picture could be viewed correctly with all photo-viewing programs on my computer. I tried a variety of permutations of the ffmpeg parameters, but without success.

However, reading and rewriting each picture using the Python Image module (PIL) with this code resulted in pictures fit for transfer to the frame:
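The original listing is not reproduced here; a minimal PIL equivalent of the read-and-rewrite step looks like this (a generated test picture stands in for an ffmpeg-produced frame, and sizes/quality are illustrative):

```python
import io
from PIL import Image

# stand-in for one ffmpeg-produced video frame of arbitrary size
src = Image.new("RGB", (1024, 576), (80, 120, 160))

# read and rewrite with PIL: resize to the frame's screen and
# re-encode as a plain jpeg that the photoframe accepts
im = src.resize((800, 480))
buf = io.BytesIO()
im.save(buf, "JPEG", quality=75)
pic = buf.getvalue()   # this byte string goes to the frame
```

In the real pipeline `Image.open()` would read each ffmpeg output file instead of generating a test image.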

Transferring those pictures was possible at a flicker-free frame rate of some 26 pictures/sec (1.7 MB/sec) - the "picture" frame turned into a "video" frame! Reading the pictures from a fast SSD did not improve the speed, which is consistent with the frame itself being the bottleneck.