
The Wall Street Journal is reporting on a tech start-up that proposes to offer the ultimate in assurance for content owners. Attributor Corporation is going to offer clients the ability to scan the web for their own intellectual property. The article touches on previous use of techniques like DRM and in-house staff searches, and the limited usefulness of both. It specifically cites the pending legal actions against companies like YouTube, and wonders what such companies' attitudes will be towards initiatives like this. From the article: "Attributor analyzes the content of clients, who could range from individuals to big media companies, using a technique known as 'digital fingerprinting,' which determines unique and identifying characteristics of content. It uses these digital fingerprints to search its index of the Web for the content. The company claims to be able to spot a customer's content based on the appearance of as little as a few sentences of text or a few seconds of audio or video. It will provide customers with alerts and a dashboard of identified uses of their content on the Web and the context in which it is used. The content owners can then try to negotiate revenue from whoever is using it or request that it be taken down. In some cases, they may decide the content is being used fairly or to acceptable promotional ends. Attributor plans to help automate the interaction between content owners and those using their content on the Web, though it declines to specify how."

You know, I've actually had a thought along those lines when trying to explain to individuals who aren't technologically savvy why digital rights laws are screwed up and why handling digital content on the web is a grey area. Consider the following.

Most web sites have a copyright statement on them somewhere (even this one!). Technically speaking, if I go to that web site, my browser copies the page along with all its media content and caches it. Since many of those sites do not have a terms of service posted

Reading something does not violate its copyright. If they distribute copies of robots.txt you might have a case of some sort.

How can you read it on the web, then, without having made a copy of it somewhere on your computer? You've pulled in a copy of it using your browser; there is now a copy of it in RAM and maybe also in the cache. So you've made at least two unauthorised copies.

Hahaha! You screwed up! I have your IP address now! I will send 127.0.0.1 to every company that uses the sniffer and tell them the person at that IP is an evil, evil person who exploits innocent people for their own profit and power!

You joke, of course, of course, but there are tools out there to detect when a bot is abusing your site and not following robots.txt. The usual technique is to hide a few links in your page, and also have these links blocked by robots.txt. When a user visits the link, they're banned from viewing the site. (Sometimes, a CAPTCHA-like utility for unblocking yourself is presented along with the 403 page, in the event that a particularly curious user manages to find the link and activate it manually.)
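The trap described above can be sketched in a few lines. This is a minimal illustration, not production middleware: the path name is made up, bans aren't persisted, and a real site would serve the CAPTCHA-unban page alongside the 403.

```python
# Honeypot trap: paths listed here are also Disallow'ed in robots.txt and
# only reachable via links hidden from human visitors, so any client that
# requests one is a robot ignoring robots.txt.
TRAP_PATHS = {"/secret-link-do-not-follow/"}
banned_ips = set()

def handle_request(client_ip: str, path: str) -> int:
    """Return the HTTP status code for a request (sketch)."""
    if client_ip in banned_ips:
        return 403  # already banned; show the unban CAPTCHA here
    if path in TRAP_PATHS:
        banned_ips.add(client_ip)  # fell for the honeypot: ban the IP
        return 403
    return 200
```

Once an IP touches the trap, every subsequent request from it is refused, which is exactly the behaviour the subnet-hopping countermeasure below tries to route around.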

True, but there's a way around that as well. Any robot service worth its weight in fiber has more than one IP, and can have multiple subnets. The best way is to dump robots.txt links to a separate subnet and have it check later in the day. If the IP gets banned, it can check by trying to access the main page and seeing if it starts getting errors. It can then mark "booby-trapped" sites on a list, and either route around the specific triggers or actually honor the robots.txt. You have to have more links than they have IPs.

Another company, Cyveillance, already does this for major corporations and the government. I've used htaccess rules to disallow all from their assigned netblocks after they racked up almost 20,000 hits to my personal site in one day. As you mentioned, they didn't follow robots.txt and attempted to index parts of my site that are password protected, as well as content names that did not exist (music and videos and such), all the while identifying their bot as a variant of IE. Here's how to block two subnets:
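The .htaccess excerpt didn't survive; it was presumably something along these lines (Apache 2.2 mod_access syntax; the netblocks are illustrative stand-ins, so look up the crawler's actual assigned ranges before using this):

```apache
# .htaccess -- refuse two example netblocks outright
Order Allow,Deny
Allow from all
Deny from 63.148.99.224/27
Deny from 65.118.41.192/27
```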

There's an easier way. You can hand mod_access netblocks and more [apache.org]. This method avoids eating cycles with mod_rewrite. If you can put it in your conf instead of .htaccess, you'll save even more time/processing. Just put it in for your doc root. From my httpd.conf:
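The httpd.conf excerpt was stripped; a minimal reconstruction of the kind of thing being described (a Deny list inside the document-root Directory block, Apache 2.2 mod_access syntax, with hypothetical netblocks):

```apache
<Directory "/var/www/html">
    # mod_access: refuse the crawler's netblocks at the doc root,
    # with no per-request mod_rewrite overhead.
    Order Allow,Deny
    Allow from all
    Deny from 63.148.99.0/24
    Deny from 65.118.41.0/24
</Directory>
```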

Anybody care to place a friendly wager that they're not going to honor robots.txt?

I had a similar thought. How much extra bandwidth is this going to suck from completely legitimate sites while it hunts for copyrighted material? Particularly sites which might have a lot of large media content.

If I put up a terms of service forbidding the crawling of my site, can I then sue them for bandwidth costs? Seems reasonable to me; why should I be presumed to be guilty?

You're absolutely right that "if you don't want it on the public Web, don't put it there in the first place" -- but there are still times when you have a legitimate reason that you don't want a page indexed, downloaded, or otherwise visited by a robot. Dynamically generated content is one example reason; sometimes certain pages can be a big drain on your website, and you'd prefer not to have every spider in the world hitting them up every few minutes.

Let's take a fun legitimate site like, oh... Wikipedia [wikipedia.org]:

# Folks get annoyed when VfD discussions end up the number 1 google hit for
# their name. See bugzilla bug #4776
# en:
Disallow: /wiki/Wikipedia:Articles_for_deletion/
Disallow: /wiki/Wikipedia%3AArticles_for_deletion/
Disallow: /wiki/Wikipedia:Votes_for_deletion/
Disallow: /wiki/Wikipedia%3AVotes_for_deletion/
Disallow: /wiki/Wikipedia:Pages_for_deletion/
Disallow: /wiki/Wikipedia%3APages_for_deletion/
Disallow: /wiki/Wikipedia:Miscellany_for_deletion/
Disallow: /wiki/Wikipedia%3AMiscellany_for_deletion/
Disallow: /wiki/Wikipedia:Miscellaneous_deletion/
Disallow: /wiki/Wikipedia%3AMiscellaneous_deletion/
Disallow: /wiki/Wikipedia:Copyright_problems
Disallow: /wiki/Wikipedia%3ACopyright_problems

(They also disallow certain specially generated pages like Special:Random, and any of the pages which actually let you edit the site).

Dynamically generated content is one example reason; sometimes certain pages can be a big drain on your website

And dynamic content is, of course, the answer. If I'm going to put up copyrighted content in the future, I'd use one of a dozen schemes that regenerate the download link on a per-session basis. Obviously they're not going to honour robots.txt, but why are your links readable by such a basic spider? You need to:

Disallow anonymous downloads. You need to be logged onto the site to download anything, torrent or otherwise

Use a CAPTCHA to prevent spiders from signing up for said accounts

Use the session id to generate unique download links on a per-session basis

Change the key on your BitTorrent tracker every 12-24 hours. This will require that a downloader get the latest torrent from the original website (which requires login), reducing the impact of a leaked torrent

Compress and possibly encrypt the content so that it's less obvious what it is

Anyone who follows the above steps (and most sites already do most or all of this) won't be found by the spider. Period.
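The per-session link step above can be sketched with an HMAC over the session id and file id. The names and URL shape here are illustrative; the point is only that a spider without a logged-in session cannot guess the token, and a leaked link dies with the session.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # server-side secret; rotate periodically

def download_link(session_id: str, file_id: str) -> str:
    """Build a download URL whose token is bound to one session."""
    msg = f"{session_id}:{file_id}".encode()
    token = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"/download/{file_id}?token={token}"

def token_valid(session_id: str, file_id: str, token: str) -> bool:
    """A link pasted elsewhere fails because the session id differs."""
    msg = f"{session_id}:{file_id}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```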

The only thing I can think of that this product would be useful for is to find people who have blatantly copied my website, but I'm sure you could find those people equally easily with Google.

If they're fingerprinting such a small amount of source material, then they'll generate *mostly* false positives. Of course, that won't stop them from sending takedowns and auto-suits based on just the supposed match. You just can't get a very unique fingerprint with so few input bits.

I hope everyone is prepared for the massive flood of notices this is going to generate...

Imagine a tool that could reliably return accurate search results for images and video. Does this exist yet? No. As one who searches the web daily for pics and video for my own sordid uses, let me assure you that it most certainly does not yet exist.

And what an horrific waste to have such a tool - if it works - for policing content for copyright violations. Bearing in mind also that such "violations" are no such thing in some

As always, and tell your family and friends, only buy music directly from the artist or secondhand. It's the only way to win.

or else make it yourself... but then again you've got to pay the nickel for the bl00dy sheet music or tabs... and they don't half try to rip you off there as well... it's that or write your own... and then try and stop them from ripping you off...

You're assuming anyone is going to manually verify any of the results. From my experience with people using monitoring software (especially non-techies who are simply consumers of the technology, but who provided the money for it), the vast majority of them are simply going to call their lawyers when they see the dashboard light up. I see vast letter writing campaigns come from this, with little actual infringing being prosecuted.

This is a scary product. Not so much because of the technology behind it, but because of how it is going to be implemented and (ab)used.

The Wall Street Journal is reporting on a tech start-up that proposes to offer the ultimate in assurance for content owners.

This almost had me going until the second half of the sentence. When has anyone ever offered any product as the "ultimate" anything that ultimately proved to actually ultimately be the ultimate whatever it was?

At least in the U.S. (where I'm from), copyright is an "opt out" form of copy protection. I'd rather it was "opt in".
Early physical and psychological development in humans is spurred by, and social behavior is learned through, imitation. We are, it appears, hard-wired [washington.edu] to imitate other humans. Art and self-expression are rooted in imitation of others and almost all art forms are taught by imitation (called "technique") and most art is derivative of earlier expression.

Its purpose aside, yes, it would be a fantastic thing to be able to scan the entire web and reliably identify the context and content of any specific media file type. Video, audio, image, etc. Particularly if it could identify purposely obfuscated content.

I'm in what is almost certainly a tiny minority of Slashdotters in that I actually create copyrightable material rather than only consume it. I'm again in the minority in that I think copyrights are a good thing and again in the minority in that I can separate out the purpose of copyrights and the evil actions of the legal arms of **AA companies.

Regardless, while scanning the internet for improperly used material sounds great on paper this will probably end up being as effective as finding water with a divining rod. The current tactic of locking down things at the hardware and OS levels will get more support from the media companies, not that they seem all that good at choosing tactics when the internet is involved.

and again in the minority in that I can separate out the purpose of copyrights and the evil actions of the legal arms of **AA companies.

Let's make one thing clear: the RIAA/MPAA lawsuits are not, in any way, shape, or form, an abuse, negative side of, misapplication or malicious use of Copyrights. They fulfill the role of Copyrights in the first place; they are the logical end result of a system that says citizens are allowed to distribute ideas (or expressions of ideas), then stop any further distribution of them.

The **AA lawsuits are ridiculous, yes. But the ridiculous part is not the litigation itself, it's the laws on which the lawsuits are brought under.

Just one little niggle, but citizens are most certainly not required to stop distributing an idea once the implementation of that idea is copyrighted. Otherwise there would be no more crappy songs about high school relationships on the radio after the first, as the idea of obsessively romantic love will have been copyrighted. The idea is to expressly prohibit the copying of a specific expression of an idea while still maintaining everyone's right to love each other like idiots, for example.

I'm a patent attorney and no stranger to IP. Having said that, any IP law is, or at least should be, a balance between, on the one hand, freedom to operate (both for IP users and for IP creators) and, on the other hand, a means of compensation for IP creators. For patents, that balance is not there for patents on software; but at least patents last for 20 years max. For copyright, that balance is not there either. And I'm curious to hear whether you think it is a good thing that whatever you create is still under copyright more than 40 years after you die.

Copyrights and patents are there to protect the ownership of, and distribution/licensing rights to, original works created or invented by people. They should belong solely to the creator(s) or inventor(s) of the works or ideas and be nontransferable and non-inheritable.

And I'm curious to hear whether you think it is a good thing that whatever you create is still under copyright more than 40 years after you die

No I do not think that life+40 years is a good thing. Any length of time is likely to be some arbitrary guess, but anything more than the life of the creator is too long in my estimation.

These repeated attempts by media companies to extend the time periods for both their copyright and sometimes mine make a lot of news here and are often held up as examples of the way copyrights have been bent against the public. When compared with the reality of file sharing they matter very little though. A look at

Are you indeed? Then you should know better than to use the term 'Intellectual Property'.

You of all people should know that no such thing exists - certainly not under the laws of any country I've ever had the leisure to study. A lawyer of all people should know better than to bandy inaccurate, misleading terms about. I believe the reason is that unwise talk such as that can come back to... what's the legal term again? Ah yes: bite you in the ass. 8^)

I'm in what is almost certainly a tiny minority of Slashdotters in that I actually create copyrightable material rather than only consume it. I'm again in the minority in that I think copyrights are a good thing and again in the minority in that I can separate out the purpose of copyrights and the evil actions of the legal arms of **AA companies.

Tiny minority? Everyone who posts to slashdot is creating copyrighted material. Everyone who sends an email or writes on a post-it note is creating copyrighted material.

"I'm in what is almost certainly a tiny minority of Slashdotters in that I actually create copyrightable material"

Well aren't we all high-and-mighty. Forget something though?

"All trademarks and copyrights on this page are owned by their respective owners. Comments are owned by the Poster."

(Virtually) EVERY expression of an idea is copyrightable, including every lame post made to /. You've fallen into the same trap as so many others (artists, politicians, even everyday people) of believing that it only

Hell, most slashdotters do create copyrightable material. That email you sent to your sysadmin? Copyrightable (oops, almost said girlfriend there). That comment you wrote on Slashdot? Copyrightable (well, nevermind. Most are dupes).

roughly equal to the entire volume of the publicly available internet... Think about it: to do what they say, they have to request ALL the data they can lay their hands on, and then chuck it... and for comparative purposes, they'll have to do it again.

Great, now all the torrent sites will require captcha verification too! ;P

Actually, can they even scan torrents without downloading the entire file? And what's to stop everyone from just blocking them from accessing their websites? Are they going to go in covertly, pretending to be actual users? I can see every legit website blocking their access as well; why pay for the bandwidth to supply that?

Sure, youtube can be more efficiently attacked...but youtube has been dancing in front of the cannons since its inception, we all knew it was going to get shot eventually.

Here's another thought: what if your copyright license expressly forbids this kind of downloading? Can you then sue whoever downloaded your home grown musical, fanfic or picture of your cat via that tool?

Then again, this entire counter-suing point is completely moot. Very few individuals have the money to slug it out in court with large media publishers, and not too many businesses can either.

"Unaltered media files" are the exception, not the rule. Changing even a bit of metadata (stripping exif from an image, changing an mp3 tag) would change the checksum, not to mention things like putting things into an archive, resizing images, (re)recompressing music.

But yeah, it might make sense for Google to become "aware" of unique content and variations of it.. but I doubt they'd ever use that openly for (aiding in) hunting down copyright infringement, simply for PR reasons.

Hm, what about computing a checksum of the actual media contents? For example, compute the checksum only for the sound data in an MP3 or the image data in image files, and ignore all other data/metadata. Usually media files are containers for smaller objects or data streams... Resampled or modified contents would not be detected, though.
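For MP3s specifically, a sketch of that idea: hash only the audio frames, skipping a leading ID3v2 tag and a trailing 128-byte ID3v1 tag, so retagging the file does not change the checksum. This is a simplification; real files can carry other metadata (APE tags, Xing headers) that this ignores, and any re-encoding still defeats it.

```python
import hashlib

def audio_checksum(mp3_bytes: bytes) -> str:
    """Checksum an MP3's audio frames only, not its tags (sketch)."""
    data = mp3_bytes
    if len(data) >= 10 and data[:3] == b"ID3":
        # ID3v2 tag size is a 4-byte syncsafe integer (7 bits per byte),
        # stored after the 3-byte magic, 2 version bytes and 1 flag byte.
        size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
        data = data[10 + size:]
    if len(data) >= 128 and data[-128:-125] == b"TAG":
        data = data[:-128]  # drop trailing ID3v1 tag
    return hashlib.sha256(data).hexdigest()
```

With this, the same audio wrapped in different tags hashes identically, while different audio does not.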

I wrote to google and yahoo a year or two ago suggesting they implement this but never heard back from either and have not seen it implemented. (Please, everyone WRITE!*) It would be the COOLEST THING EVER for a number of reasons. Say you downloaded a picture off the web. A year later, you stumble across it and decide you want to see if the site has any more similar pics. You could just md5 the image and search for that. (Of course this could be made very easy for non-technical users: Google could have a li

I'd mod you up if I could. Another benefit of this would be the network effect on hashing tools. Yeah, any linux/osx/unix user has them already, and they're easy to get for Windows as well. But if google started exposing this, tool makers would follow. This would really boost infrastructure and standards for things like p2p apps, desktop search, backup tools, Internet-hosted storage, etc. The ??AA would also want to use it, of course, and this might even be a reason google has refrained from making it

That's the first thing that came to mind when I saw the article.
It's been around for years. I've used it a few times and was amazed to find one of my random website texts in other people's work (it was properly cited, so I don't complain).

Why the fuck does everyone want to be paid for every little thing these days? Sure, wholesale piracy is one thing. I disagree with the idea that people should be trading movies and music online with no restrictions at all. If you want an album, buy it. If you want software that costs something, buy it or learn to use free/open software. If you want to see a movie, pay to watch it in the theater or rent the DVD when it comes out. But, where this all falls apart is when someone quotes someone else onlin

As long as it respects basic internet rules of conduct (including respecting robots.txt), then this is ethically neutral.

It all depends on how it's used. Many companies would prefer to avoid copyright-infringing material, and will take it down if its existence is pointed out to them. Many companies will simply be asking others to remove material which clearly and flagrantly breaches their copyright. This is perfectly reasonable behaviour.

Of course, "a few sentences of text or a few seconds of video" most likely are being used within legal fair use boundaries. So what's going to happen is that the corporate law firm will grab this program, then send out auto-takedown notices without a human being (to the extent anyone working in the legal department meets that criterion) ever looking to see if the use is even arguably a violation of copyright. Then you'll get the backlash where at least one such auto-generated letter makes its way to someone

This may be much less helpful than its promoters claim. First of all, what's their probability of a false alarm? Even if they false alarm fairly infrequently, the vast amount of content on the Web means they could easily have a flood of false alarms, in addition to whatever actual copies are found. The user of the system is then going to have to have human beings sift through that flood to identify A) what's really a copy, B) whether that copy is infringing or not, and C) if so, is it worth taking action against the infringer?

First of all, what's their probability of a false alarm? Even if they false alarm fairly infrequently, the vast amount of content on the Web means they could easily have a flood of false alarms, in addition to whatever actual copies are found. The user of the system is then going to have to have human beings sift through that flood to identify A) what's really a copy, B) whether that copy is infringing or not, and C) if so, is it worth taking action against the infringer?

Ok, it's supposed to be unlawful to access copyrighted information on the Internet without the copyright holder's permission, right? I mean, that's the gist of the *AA's arguments right -- we hold the rights, you can't access this material unless we say so. So if the tool has to access the information to determine the copyright, wouldn't it be violating that principle? Nitpicking I know, but an interesting thought. They'd have to get dispensation from the *AAs to do it, wouldn't they?

The problem: your services as a content mitigator have been rendered useless by the appearance of a medium which is so cheap as to appear free, so fast as to appear instant, and so easy as to appear effortless.

The cure, corrosive, caustic and highly dangerous responses flooded into the arteries of your survival - a general failing of the organs of service, and an increasingly gruesome appearance as you stamp on the consumer and turn on your distributors looking for signs of theft and duplicity.

I find my stuff copied and plagiarized all the time, and it's nearly impossible to enforce without a large budget for lawyers. From inventions to source code to writing.
More than I could ever possibly list here, but I have come to realize it's in the nature of things.

So now big corporate America is going to get even better at chasing stuff down and coming after everyone who even borrows a paragraph, using their intimidation tactics.

They're going to be COPYING stuff from websites into their index so they can perform paid searches on it. Why isn't that copyright infringement all by itself?

If somebody were to sue them, they would have to claim that theirs is a fair use. But, many large copyright holders (i.e. their potential customers) would vehemently disagree with such a position. That's an interesting position to be in.

Of course, some nice things about fair use are that a) the creator of the copyrighted content does not get to decide whether the use is or is not fair; and b) although the amount being used is one of the factors used to evaluate fair use, it is by no means the only factor, and in some situations using more than a limited amount is fair. No technology can make that evaluation, and copyright holders don't get to, either.

So now as a countermeasure someone will produce a tool to scramble the lowest order/frequency information in the file. For example, randomize the lowest order bit in an image, randomly exchanging black [#020202] and black [#020302]. For videos and music, randomize content at the lowest level that is below the threshold of perception. It will take horsepower to re-encode the files, but it only has to be done once. You only need to change one bit for a fingerprint to fail.

I don't claim to know how they have or might develop this system, but it seems to me that if they plan on dealing with a file being encoded by different people in different formats, with different quality levels, then your "low order bit" theory isn't going to do jack to stop them. It seems to me like a pretty trivial thing to add thresholds to these checks to allow slight to moderate variations in the fingerprint. Remember, they don't have to 100% identify content as unauthorized copyrighted material with t

It has come to our attention that your website, [sh*touttaluck.com], does not meet compliance in terms of a variety of copyright laws of the United States and other countries. Infractions indicated by our software include, but are not limited to:

Images created with an unregistered copy of Adobe Photoshop
Flash files created with an unregistered copy of Macromedia Studio MX 2004
PDFs created with an unregistered copy of Adobe Acro

I have a bunch of my books on the web, and every once in a while I do a search on some text from my own books to see who else is mirroring them. The books happen to be copylefted (dual-licensed GFDL/CC-BY-SA), but I'd like to know who's mirroring them, and check whether they're violating the license. A lot of people just seem to be hoarding the PDF files on their university servers, maybe because they're afraid my web site will disappear; that's flattering. One guy was selling them on CDs on eBay, violating my license (claimed they were PD, didn't propagate the license). Another guy translated them to html, with lots of errors, changed the license to a more restrictive one, and put his own ads up; he fixed the licensing violation when I complained, and in a way it was a good thing, because it motivated me to make my own html versions (which are now bringing me a significant amount of money from AdSense every month). One kind of annoying thing about mirroring is that the people who are mirroring never bother to update their mirrors, but in general I just figure there's no such thing as bad publicity :-)

From the other side, I once received an e-mail from a museum in the UK that was complaining that I was using a 17th century oil painting of Isaac Newton. I guess they own the original, and they may also have been the ones who did the scan that I found in a google image search, but under U.S. law (Bridgeman Art Library, Ltd. v. Corel Corp.), a realistic reproduction of a PD two-dimensional art work is not copyrightable. What really surprised me was that they came across it at all, because at that time I think my book was only in PDF format, and hadn't been indexed by google because the file size was too big.

The whole thing doesn't seem negative to me in general. It makes just as much sense as people doing a vanity search in Google before they apply for a job, or authors watching their amazon.com sales rankings obsessively. I guess the most obvious potential for abuse would be if they send a nastygram to your webhost, and your webhost is a low-end one that figures it's not worth their time to keep your account, so they just shut off your account.

It wouldn't be too hard to make this software by looking up key phrases of a web site in google. If there is an exact hit, then there may be a copyright violation.

How hard would it be to intelligently grab chunks of YOUR web site and then Google those parts? Then grep the results. If there are positive hits (not from your domain), then light up the dashboard. If you wanted to be extra picky, query Yahoo, MSN, Google, and whoever else you like to search with.
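The grab-chunks-and-search idea above fits in a few lines. The search_web callable here is a deliberate stand-in: search engines' real query APIs vary (and have usage terms), so this sketch just assumes something that takes an exact-phrase query and returns result URLs.

```python
import re

def key_phrases(page_text: str, n_words: int = 8):
    """Yield one n-word chunk per sentence, long enough to be distinctive."""
    for sentence in re.split(r"[.!?]\s+", page_text):
        words = sentence.split()
        if len(words) >= n_words:
            yield " ".join(words[:n_words])

def find_copies(page_text: str, my_domain: str, search_web):
    """search_web is a stand-in for whatever engine you query (Google,
    Yahoo, MSN): given an exact-phrase query, it returns result URLs.
    Any hit outside your own domain lights up the dashboard."""
    hits = set()
    for phrase in key_phrases(page_text):
        for url in search_web(f'"{phrase}"'):
            if my_domain not in url:
                hits.add(url)
    return hits
```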

Pretty sure this is a dupe, or so closely related to an earlier story as to not matter.

It's not a dupe. (Unless you count anything that appears on Digg first to be a dupe.) However, it's also not the first story of its kind. About a gazillion companies have formed with the exact same business plan (save for the "hotness" at the time being digital music) and about a gazillion of those companies have failed to develop software that catches anything but the most obvious infractions.

The legal implications of this tool greatly outweigh the technical considerations, especially when you consider that there is a good chance somebody from another country might be infringing, and then you get into a big mess of bureaucracy. But I think these sorts of ventures will ultimately fail because they underestimate the honesty of most people. See this [uchicago.edu] interesting little tidbit from Freakonomics for a telling example.

I don't see how this will change anything; copyright holders still have to pay lawyers to go after infringing sites/servers so there is still a bottleneck.

Well, since the major media conglomerates have lawyers on salary, it won't affect their costs at all. They'll just send a letter to your ISP/host, and the host, fearing legal costs of its own (since it DOESN'T have a lawyer always available), will bow down and pull your whatever from the servers.