New work for Napster and p2p

Why only MP3?
You could distribute images and photos the same way, and the latter are
mostly made by people themselves.

Most people take photos, and some of them are really great.
Most people just put them on the Web to get rid of them, and they are
not against others downloading them. (Aren't you? ;)
Sometimes you want to find a specific image for a page (say, an apple),
but it is hard to find one.
With simple descriptions and a P2P program (a server in which you could
set up a public photo/image directory), it would be easy to find the one
you need. Or just to get a changing desktop image.

The main difference from MP3s: almost everyone is a creator of
photos/images, but almost no one is a creator of MP3s :)

Info to specify: size XxY px, title, a list of objects in it,
place (country/city/street),
camera type, film type, scanner. Quality could also be specified.
You could think of adding an MD5 checksum over the (image + info) data,
so it would be easy to find the author of the original image.
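The checksum idea above can be sketched in a few lines. This is only an illustration, not a spec: the function name and the way the metadata is serialized (sorted key=value pairs) are my own assumptions.

```python
import hashlib

def image_fingerprint(image_bytes: bytes, info: dict) -> str:
    """Hash the image data together with its metadata, so the original
    (image + description) pair can be identified later.
    Hypothetical helper -- names and serialization are made up."""
    h = hashlib.md5()
    h.update(image_bytes)
    # Serialize metadata deterministically (sorted keys) before hashing,
    # so the same info dict always yields the same digest.
    for key in sorted(info):
        h.update(f"{key}={info[key]};".encode("utf-8"))
    return h.hexdigest()

fp = image_fingerprint(b"\x89PNG...fake image data...",
                       {"title": "apple", "size": "640x480", "place": "Helsinki"})
print(fp)  # 32-character hex digest
```

Anyone who re-publishes the same image with the same description would produce the same digest, which is what makes it usable for tracing the original.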

I don't think the P2P model fits well for images right now. (Napster is
not really P2P, to start with.)

Why does P2P work for music?

Current music file formats carry a description of their contents within
the file itself. Many image formats don't allow this, and even when they
do, you can count on your fingers how many people fill that field with
useful information.

People (the peers) are willing to keep a reasonably large music
collection -- many keep gigabytes of MP3s on their hard disks. The best
music playback is achieved when the files are on fast media that is
mounted 24/7. Most people already have their music files available 100%
of their uptime, and most are online for at least some hours each day,
so sharing is just a step further. And they don't even need to index the
files: the ID3 tags are already there!
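To see how little work "the tags are already there" really is: an ID3v1 tag is just a fixed 128-byte block at the end of the MP3, so reading it needs no library at all. A minimal sketch (ID3v1 only; the newer ID3v2 format is more involved):

```python
def read_id3v1(path: str):
    """Read the 128-byte ID3v1 tag at the end of an MP3, if present.
    Layout: 'TAG', 30-byte title, 30-byte artist, 30-byte album,
    4-byte year, then comment/genre (ignored here)."""
    with open(path, "rb") as f:
        f.seek(-128, 2)          # 128 bytes back from the end of the file
        block = f.read(128)
    if block[:3] != b"TAG":
        return None              # no ID3v1 tag on this file
    def field(raw: bytes) -> str:
        # Fields are null-padded; take the part before the first NUL.
        return raw.split(b"\x00")[0].decode("latin-1").strip()
    return {
        "title":  field(block[3:33]),
        "artist": field(block[33:63]),
        "album":  field(block[63:93]),
        "year":   field(block[93:97]),
    }
```

A Napster-style indexer only has to walk the shared directory and call something like this on each file.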

How does this differ from images?
The usual "image experience" is not "playing back" a collection of image
files in a playlist (you don't really watch a slideshow of your JPEGs
every morning, do you?). So images are better placed on removable
media -- Zip disks, CD-Rs. At least for me, my image collections are
kept on 96.1 MB Zip disks (if you want to believe 100,000,000 B == 100
MB, go ahead...). The "image experience" is: 1) I need an image to
tinker with, 2) I grep the 'ls -lR' of my image disks (kept in my /home)
for what I want, 3) mount the disk, 4) fire up the Gimp, 5) copy
whatever is needed inside the Gimp, 6) umount the disk. The images
aren't "there" to be shared; much of the time they aren't even mounted.
Also, classifying images for searchability can be boring -- AltaVista
must know that: a media search for "tolkien dwarf" returned a page with
pictures of crocodiles for me :)

So providing a service where images are indexed and available online is
probably better achieved by concentrating the images on one or a few
servers, with 24/7 storage for them.

When we started sharing music we needed special programs to do it --
Netscape, Opera, Mozilla, et caterva don't know how to "display" music
files, and even if they did, you'd probably get angry, since these
browsers freeze up so much -- not reliable enough for playback. So along
comes Napster, with search primitives for bitrate, name, line speed, etc.

Also, image files usually aren't as large as music files, so plain HTTP
transfers are enough.

But browsers can show images pretty well. And getting them to handle
some other formats (the Gimp's XCF, TIFF, PCX...) doesn't look so tough
(as long as the browser in question is under an open source license that
encourages contributions from users). I'd say the solution here is an
image collection served over HTTP with an associated database (MySQL,
PostgreSQL) and server-side code to search for images and contribute to
the archive (e.g. PHP, CGI, Zope/wikis). Better reliability could be
achieved by mirroring the site.
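The database-plus-server-side-code idea boils down to a single keyword query. Here is a minimal sketch using SQLite (stand-in for the MySQL/PostgreSQL backend mentioned above); the table layout, column names, and sample URLs are invented for illustration.

```python
import sqlite3

# In-memory stand-in for the archive database: one table of images
# with free-text keywords, searched with a simple LIKE match.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (url TEXT, title TEXT, keywords TEXT)")
conn.executemany("INSERT INTO images VALUES (?, ?, ?)", [
    ("http://example.org/apple1.jpg", "Red apple",   "apple fruit red"),
    ("http://example.org/street.jpg", "Main street", "city street people"),
])
conn.commit()

def search(term: str):
    """Return (url, title) pairs whose keywords contain the term --
    the query a PHP/CGI search page would run."""
    cur = conn.execute(
        "SELECT url, title FROM images WHERE keywords LIKE ?",
        (f"%{term}%",))
    return cur.fetchall()

print(search("apple"))  # -> [('http://example.org/apple1.jpg', 'Red apple')]
```

A real archive would add contributor uploads and probably full-text indexing, but the search side really is this small.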

P2P networks are nice, and many things will get shared on them when the
P2P thing really happens -- music, images, Flash movies (All Your Base
II -- CATS's wrath?), videos, text docs. But for now, P2P just isn't the
"right here, right now" solution for image sharing.

One thing we could start thinking about is a wrapper file format for
images, with a header containing image dimensions, size, keywords, and
author info, and a data chunk containing the image file in its original
format (PNG, JPEG, etc.), so that images could be shared without the aid
of a database backend and still be easily opened. Modifying imaging
programs to "extract" the file is not tough, and a command-line tool to
extract the image file from the "wrapper archive" could be written in 30
minutes, given a wrapper format spec.
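As a proof of the "30 minutes" claim, here is one possible shape for such a wrapper: a magic number, a length-prefixed JSON header, then the untouched original image bytes. The magic string and field names are made up; this is a sketch of the idea, not a proposed standard.

```python
import json
import struct

MAGIC = b"IMGW"  # hypothetical 4-byte magic for the wrapper format

def wrap(image_bytes: bytes, header: dict) -> bytes:
    """Pack metadata + original image file into one blob:
    magic, 4-byte big-endian header length, JSON header, raw image."""
    hdr = json.dumps(header).encode("utf-8")
    return MAGIC + struct.pack(">I", len(hdr)) + hdr + image_bytes

def unwrap(blob: bytes):
    """Extract (header, original image bytes) from a wrapper blob."""
    assert blob[:4] == MAGIC, "not a wrapper file"
    (hdr_len,) = struct.unpack(">I", blob[4:8])
    header = json.loads(blob[8:8 + hdr_len].decode("utf-8"))
    return header, blob[8 + hdr_len:]

# Round trip: the image data comes back out byte-for-byte unchanged,
# so any viewer can open it once the header is stripped.
blob = wrap(b"\x89PNG fake image", {"title": "apple", "keywords": "fruit red"})
header, image = unwrap(blob)
```

Because the image chunk is stored verbatim, the command-line "extractor" is just `unwrap()` plus a file write.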

Take a look at rdjpgcom and wrjpgcom, which are part of the JPEG
library that almost everyone uses on Unix. I tag most of the files that
I index on my home page with this; it lets me throw in a copyright
notice and a description. My indexing script then pulls the description
out of the comment block for the index.
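The comment blocks those tools work with are ordinary JPEG COM marker segments, so an indexing script can read them directly instead of shelling out to rdjpgcom. A sketch of such a reader (marker parsing only; it stops at the start-of-scan marker, before the compressed image data):

```python
def read_jpeg_comments(data: bytes):
    """Scan JPEG marker segments and return the text of COM (0xFFFE)
    segments -- the same blocks rdjpgcom prints and wrjpgcom writes."""
    comments = []
    i = 2                            # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                    # lost sync; give up
        marker = data[i + 1]
        if marker == 0xDA:
            break                    # start of scan: compressed data follows
        # Segment length is big-endian and includes its own two bytes.
        length = (data[i + 2] << 8) | data[i + 3]
        if marker == 0xFE:           # COM segment
            comments.append(data[i + 4:i + 2 + length].decode("latin-1"))
        i += 2 + length
    return comments
```

This is how "my indexing script will pull the description out of the comment block" can work without any external tool.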

If you take a look at the index of Blue
Mountains (notice the indx extension? It invokes my Python script
that dynamically builds the index; it also resizes the images to your
current preferences), you will see the descriptions. If you
click on an image (not a thumbnail -- I haven't gotten it to push the
descriptions into the resized images yet), then view an image such as
myself in the Blue Mountains and look at the Page
Info (with Netscape; not sure you can do this with IE), you will see the
copyright and description of the file.

The technology for tagging files already exists out there, and has for a
long while; it's just that applications haven't been making use of it.

Of course, the real reason why there's no interest in p2p distribution
of images is that there's no real demand for graymarket copies of
copyrighted photographs, with one exception: porn. A p2p distribution
system for pornography would probably be in extremely high demand --
until people started sending kiddie porn over it, and the feds arrested
everyone even remotely involved. Facilitating copyright infringement
(Napster) is merely a civil offense. Facilitating the distribution of
child pornography, on the other hand, is a federal felony.

I believe Napster is the only one that restricts what the bits are
supposed to contain. Mojonation
even has an often-overlooked feature: the ability to publish and view
websites as a single piece of content (perfect for photo galleries).
Just point it at the top-level HTML file, or the directory that contains
it, when publishing, and it will spider it for relative links,
publishing it as a whole.

New Advogato Features

New HTML Parser: The long-awaited libxml2 based HTML parser
code is live. It needs further work but already handles most
markup better than the original parser.