Love of money can cause people to do unwise things—like stealing time on your university's resident supercomputer to mine crypto-coins. The Harvard Crimson is carrying the story of someone who did exactly that: an unnamed individual who was discovered using Harvard's Odyssey supercomputing cluster to generate dogecoins.

"Wow," you might say, amazed. Dogecoins are one of the multitude of roll-your-own cryptocurrencies that have lately sprouted like weeds in an unkempt vegetable garden. Like most of them, the code that powers Dogecoin's blockchain and network is forked from Litecoin, which was originally billed as a lighter-weight alternative to Bitcoin. Dogecoin (and Litecoin and Coinye and many others) use the scrypt cryptographic algorithm to generate hashes and drive the currency along; media-darling Bitcoin, on the other hand, is based around a different algorithm (SHA256). The currencies are all similar to each other, though they are (generally) incompatible and (typically) do not interoperate. (There are caveats, but cryptocurrencies are complex and I'm trying to keep this relatively short—check here for the full details on how and why cryptocurrencies work.)

The Crimson's piece doesn't say whether the individual caught mining dogecoins was a student or a faculty member. However, according to Harvard Assistant Dean for Research Computing James A. Cuff, the person responsible has lost access to "any and all research computing facilities on a fully permanent basis." Using university property—like the 4,096-core Odyssey supercomputing cluster—for profit or personal gain, or even for any non-research tasks, is most definitely against the rules.

The obvious question to ask here, after we get all of the "Wow" and "Amaze" out of the way, is how many dogecoins the illicit mining operation was able to carve out of the blockchain before being disabled. The answer, though difficult to estimate, is "such dogecoin" or possibly "many currency." According to its page on Top500.org, Harvard's Odyssey is an x86 cluster made up of Dell PowerEdge M600 servers running on 2.3GHz Intel Xeon E5410 CPUs. It contains a total of 4,096 cores and has churned out a Linpack (Rmax) score of 32.4 trillion floating point operations per second.

That is "much" teraflop, but CPUs aren't as well adapted to running the kinds of algorithms cryptocurrencies depend on. What one cares about when examining a CPU, GPU, or ASIC's aptitude at mining is how many cryptographic hashes it can calculate per second—more is better, because the more hashes per second you can throw at the currency, the higher your chance of matching the magic hash needed to correctly identify a valid block and score a reward (again, yes, I know it's more complicated than that, but that's the short version).

Some quick checking shows that the Xeon E5410's slightly slower little brother, the E5405, kicks out about 18,800 hashes per second when mining Litecoin—and since Litecoin and Dogecoin use the same scrypt algorithm, performance is equivalent between the two currencies. If we give the E5410 another 1,200 hashes per second to account for its faster speed (and to make our back-of-the-napkin math easier), then we can estimate that the 4,096-core Odyssey cluster could generate a maximum of about 20,480,000 hashes per second, or 20 Mhash/second (each E5410 has four cores, so 4,096 ÷ 4 × 20,000).
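Here's that back-of-the-napkin math written out (the 20,000 hashes-per-second-per-chip figure is the rounded assumption from above, not a measured benchmark):

```python
# Napkin estimate of Odyssey's scrypt hash rate from the numbers above.
cores_total = 4096
cores_per_chip = 4          # each Xeon E5410 is a quad-core part
hashes_per_chip = 20_000    # assumed: ~18,800 (E5405 figure) rounded up for the E5410

chips = cores_total // cores_per_chip        # 1,024 CPUs
cluster_hashrate = chips * hashes_per_chip   # 20,480,000 hashes/sec
print(f"{cluster_hashrate:,} hashes/sec (~{cluster_hashrate / 1e6:.0f} Mhash/sec)")
```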

Is 20 Mhash/sec a lot? Well—yes, but it's also not ludicrous. You'd get the same performance out of 13 AMD 7990 video cards, and without using a whole data center full of electricity and cooling, too. The not-yet-shipping Acor ASICS A1 miner promises to deliver 30 Mhash/sec in a single small box—though buying cryptocurrency mining hardware sight-unseen is not without its problems.

Harvard postdoc fellow Timothy R. Peterson told the Crimson it's possible that, if the Odyssey cluster had been hashing away uninterrupted for a number of days, it could have generated "hundreds, and perhaps thousands" of dollars in dogecoins. Also unknown is whether the mastermind behind the Dogeheist kept the earnings or cashed out via an exchange.

Either way—wow. Many dollars. So rule-breaking.

Promoted Comments

As someone who has extensively used the Harvard Odyssey Cluster, I know that it is highly unlikely that he was running much more than 100 cores at once. This cluster, although big, is heavily utilized by hundreds of people, so getting a ton of cores at once is an issue. Also, the max job time is a week, so the person likely would have had to resubmit often.

Lee Hutchinson
Lee is the Senior Technology Editor at Ars and oversees gadget, automotive, IT, and gaming/culture content. He also knows stuff about enterprise storage, security, and human space flight. Lee is based in Houston, TX. Email: lee.hutchinson@arstechnica.com

I dislike the doge meme because it reminds me of the Shiba Inu I had to give away when I moved last year. I miss Iskander a lot, and, because of the temperament of his breed, he doesn't seem to give a damn about me anymore.

So much wow... hilarious article and thing to do. I do wonder, however, whether the perpetrator knew the supercomputer was pretty bad at mining relative to its power usage and did it just because, or whether he/she thought it would be a path to riches...

And I do find the article's "light" language a clever way to write it, considering the silliness/seriousness of what was done here...

I thought scrypt cryptocurrency was not supposed to support ASIC-type dedicated machines?

Scrypt deliberately has a relatively heavy memory footprint (it was originally designed for hashing passwords, so easy, cheap parallelization was not considered a virtue). This makes it more expensive to implement in dedicated silicon; but if something is computable on general-purpose silicon, a fixed-function implementation of it isn't impossible, though it may be more or less difficult.
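For the curious, Python's standard library exposes scrypt's cost knobs directly, which makes the memory argument easy to see; the sketch below uses Litecoin-style parameters (N=1024, r=1, p=1), and the input string is just a placeholder:

```python
import hashlib

# scrypt's working set is roughly 128 * r * N bytes; N (the CPU/memory
# cost) and r (the block size) are what make it awkward in fixed silicon.
N, r, p = 1024, 1, 1   # Litecoin-style parameters
print(f"approx working set: {128 * r * N // 1024} KiB per hash")   # ~128 KiB

digest = hashlib.scrypt(b"placeholder block header",
                        salt=b"placeholder block header",
                        n=N, r=r, p=p, dklen=32)
print(digest.hex())
```

Each hash in flight wants its own ~128 KiB scratchpad, so silicon that parallelizes scrypt the way Bitcoin ASICs parallelize SHA256 has to pay for a lot of fast on-chip memory.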

I worked for a number of years as the head admin of a university research cluster. If I was there now I probably would have set up jobs to mine *coins when the cluster was otherwise idle.

Running CPU intensive jobs like this is one way to ensure new hardware isn't faulty in any way (although you do want to stress other things like disk, network, etc. as well). At a previous job where we grew an environment to a few thousand servers in the span of a few years my former boss regularly used SETI@Home to burn in new systems. He was an avid amateur astronomer.

Well, I suppose that's one way to scuttle an academic career. I was under the impression that Dogecoin was a joke coin spawned from the bowels of Reddit. Does it actually have enough value to justify this? Will anyone actually give you real money for doge?

This is basically stealing since you're converting another's paid electricity into currency.

Agreed. If the only consequence to the idiot was being banned from using university supercomputing resources, they were let off really lightly. As long as their name's kept off the record, their resume shouldn't suffer any real damage. There're plenty of other reasons for changing schools/research focus mid-degree. A felony CFAA conviction would devastate your chances of employment in most professional workplaces.

I thought scrypt cryptocurrency was not supposed to support ASIC-type dedicated machines?

The current generation of ASICs can't support it, no. That said, there's no reason a new generation of ASICs can't be developed that will. The two schemes (SHA256 and scrypt) aren't compatible, so there will likely never be hardware that can do both, but there's no reason they can't each have their own specialty kit.

While the perp will presumably have the hammer dropped (they are just lucky that they were probably on campus at the time, so no interstate action to get the feds interested), this seems potentially more embarrassing for Harvard...

How underutilized is their fancy cluster if somebody managed to sneak in and mine dogecoins for a period of time with it? You don't exactly hide that level of CPU use, so either they got a relatively small slice (and looked like a legitimate research workload to somebody not inspecting the executable) or the place must have been full of tumbleweed.

I worked for a number of years as the head admin of a university research cluster. If I was there now I probably would have set up jobs to mine *coins when the cluster was otherwise idle.

Running CPU intensive jobs like this is one way to ensure new hardware isn't faulty in any way (although you do want to stress other things like disk, network, etc. as well). At a previous job where we grew an environment to a few thousand servers in the span of a few years my former boss regularly used SETI@Home to burn in new systems. He was an avid amateur astronomer.

The key difference is that as the admin you or your boss would be able to get official sanction to do so (and presumably are smart enough to know you need to do so first); and in the case of cryptocoin mining you would probably be required to donate all proceeds to the university or a charity instead of padding your own income with them.

OFF TOPIC, but can anyone explain to me (or explain where I've gone wrong):

If cryptocurrencies are mined/created in return for calculating the hashes that drive/monitor all the transactions, then for any currency designed to have a finite issue of coins, there will be a point where there is no incentive for people to keep calculating these hashes. At that point, does the currency fall apart, losing all value for whoever is holding it at the time, as they can no longer reliably enact any transactions and nobody is incentivised to help them?

IIRC (someone can correct me if I'm wrong; it's been a while since I looked), you get a reward for finding the correct block, but you also get the transaction fees. The reward goes down the longer the currency has been around, but in theory the fees will go up the more it is used. So you won't get nearly as many coins for mining, but you could still get some.
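That's right for Bitcoin-style coins: the block subsidy decays on a fixed schedule, and fees are meant to pick up the slack. A quick sketch of Bitcoin's published schedule (50 BTC, halving every 210,000 blocks):

```python
# Miner income per block = shrinking subsidy + transaction fees.
def block_subsidy(height, initial=50.0, interval=210_000):
    """Bitcoin's subsidy halves every `interval` blocks."""
    return initial / 2 ** (height // interval)

for height in (0, 210_000, 420_000, 630_000):
    print(f"block {height:>7,}: subsidy {block_subsidy(height):6.3f} BTC + fees")
```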

I thought scrypt cryptocurrency was not supposed to support ASIC-type dedicated machines?

Support is a strange term to use in this context. Scrypt mining really depends on generating hashes. Anything that has RAM and can generate hashes can mine the currency... you'll find that everything from the ARM CPUs in Android phones to the netbooks on display at Walmart "supports" scrypt mining.

The deal with scrypt is that it's much more memory-reliant. This is why the implementers of the LTC protocol chose it. Bitcoin uses the SHA256 algorithm, which can be implemented in parallel almost to a fault and doesn't use that much memory. That means that CPUs can do it (although they'll be the slowest), GPUs can do it (better, because of the parallel power of GPUs), but it also means that you can simply design a chip that does it even better. This is the fastest option out there for SHA256, known as ASIC mining (application-specific integrated circuit... that application? Churning out SHA256 hashes).

Aside from being purpose-built to hash quickly, ASIC mining tech offers a few distinct advantages over CPU and GPU mining (for SHA256). First, it's very efficient. Since the circuit was designed to compute SHA256 hashes and nothing else, it's VERY fast and VERY power-efficient. No extra components, no wasted power. That means you pay less of your earnings out in power costs. ASICs are also cheaper per unit of hashing power. You may be able to get a 50GH/s miner for a few hundred dollars (guessing here, not up to speed on ASIC prices), but you would need many, many GPUs to mine as fast as a little 6" cube. Another advantage is the space saved: a small cube takes up far less room than a pile of mining rigs.

Some of the same benefits would be true for scrypt cryptocurrencies should anyone develop an ASIC miner. ASIC miners would be more power efficient, they would save space, and they would be a tad faster. The reason they're not as much of a slam dunk as ASIC was for Bitcoin is because of the scrypt memory requirement. Building an ASIC miner for scrypt currencies means that you not only have to develop a chip that hashes quickly, you also have to integrate a lot of memory in order for it to reach its full potential. This is not impossible to do, but it makes ASIC devices less cost effective. When you're basically paying for a high powered SHA256 CPU and nothing else you can keep prices down. When you have to start integrating stupid fast memory into the electronics design, prices go up. This either means the boxes will be very expensive as well as very fast, or they will be faster than GPUs, but not exponentially faster.

TL;DR: scrypt requires more memory than SHA256. When you design an ASIC to mine scrypt currencies, you need to include a lot of memory. This makes ASIC mining more expensive for scrypt cryptocurrencies, and less "worth it"... which is part of the reason why it hasn't been done yet. This person wouldn't have gotten rich, but they would have been generating about 50 LTC/week (currently worth about $700) if they had hooked up to a mining pool like give-me-coins.

How underutilized is their fancy cluster if somebody managed to sneak in and mine dogecoins for a period of time with it?

From what I understand, Harvard has a number of disjoint research clusters, and in some cases various departments band together and combine their clusters. At the university where I worked, they had a cluster where professors and departments could purchase their own compute nodes to add to it. When they had work to do, they had exclusive use of their compute nodes as well as shared access to the public nodes. When they weren't actively using their nodes, the nodes were set up to take on some of the public jobs rather than sit idle.

If the person who did this was smart, he could have simply renamed the application so that its true nature wasn't painfully obvious from just listing all the jobs currently running on the cluster. In my experience it wasn't terribly uncommon to see jobs running that were simply named things like "test_1", "project_xyz", etc. An admin would have to actually log into the compute nodes and look at the process list (among other things) to determine exactly what a program was doing.

Cluster software also has the concept of queueing and prioritizing jobs. So it's possible this person just submitted a job to a low-priority queue so that it only used cycles when no other higher-priority jobs were running. That's a very common thing to do. We had one professor at the university I worked at who regularly submitted huge numbers of jobs (thousands at a time) to the lowest priority queues we had. He basically wanted to use up CPU cycles when they were available, without impacting all the other cluster users.
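A toy model of that behavior (the job names are hypothetical, and real schedulers like the one on Odyssey are vastly more elaborate):

```python
import heapq

# Jobs pop in priority order (lower number = higher priority), so a
# bottom-priority job only gets cycles once everything above it drains.
queue = []
heapq.heappush(queue, (1, "genomics_run_47"))         # ordinary research jobs
heapq.heappush(queue, (1, "climate_model_03"))
heapq.heappush(queue, (99, "definitely_not_mining"))  # lowest-priority queue

while queue:
    priority, job = heapq.heappop(queue)
    print(f"running {job} (priority {priority})")
```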

The key difference is that as the admin you or your boss would be able to get official sanction to do so (and presumably are smart enough to know you need to do so first); and in the case of cryptocoin mining you would probably be required to donate all proceeds to the university or a charity instead of padding your own income with them.

You'd also have access to the tools/equipment to ensure it only did this when 'idle', and not have it using resources in the background when someone was trying to use the machine for what it was designed for.

Re: If I was an admin... In this case it is a crime with a victim (whoever has to pay the electricity bill, more so if they lose some processing power on the machine(s) they paid for when they want to use it). From the university's point of view, sticking some affiliate/referral cookies (back in the day) for a few big-name sites on machines using the campus wifi could probably have been a tidy little earner at negligible cost to the uni/purchaser.

Cluster software also has the concept of queueing and prioritizing jobs. So it's possible this person just submitted a job to a low-priority queue so that it only used cycles when no other higher-priority jobs were running. That's a very common thing to do. We had one professor at the university I worked at who regularly submitted huge numbers of jobs (thousands at a time) to the lowest priority queues we had. He basically wanted to use up CPU cycles when they were available, without impacting all the other cluster users.

This is what I imagined when I read the article. He just submitted the job at the lowest priority in the hopes that nobody would notice. Well, I guess anybody else submitting in that queue would notice, but they have no expectations of performance to begin with.

According to an online calculator for these things, and assuming a continuous 20 Mhash/s rate, the value of the dogecoins mined would come out to roughly $4/hr. Without more info on how long the job was running, how much of that theoretical max the user was getting, etc., it's hard to get a solid number.
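The arithmetic behind calculators like that is simple: for Bitcoin-derived coins a block takes roughly difficulty × 2^32 hashes on average. Every input below is a placeholder for illustration, not a period-accurate Dogecoin figure:

```python
# Expected mining income from a given hash rate. All inputs are assumed
# placeholders, not period-accurate Dogecoin network numbers.
hashrate = 20e6          # hashes/sec (the 20 Mhash/sec estimate above)
difficulty = 1_500       # assumed network difficulty
block_reward = 500_000   # assumed average DOGE per block
usd_per_doge = 0.0005    # assumed exchange rate

blocks_per_day = hashrate * 86_400 / (difficulty * 2**32)
doge_per_day = blocks_per_day * block_reward
print(f"~{blocks_per_day:.2f} blocks/day -> ~{doge_per_day:,.0f} DOGE/day"
      f" (~${doge_per_day * usd_per_doge / 24:.2f}/hour)")
```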

This is basically stealing since you're converting another's paid electricity into currency.

Agreed. If the only consequence to the idiot was being banned from using University super computing resources they were let off really lightly. As long as their name's kept off the record their resume shouldn't suffer any real damage. There're plenty of other reasons for changing schools/research focus mid degree. A felony CFAA conviction would devastate your chances of employment in most professional workspaces.

Whoa there, assuming this is a student, that student pays some $40k/year in tuition and $2,500/year in student services fees. That money covers use of university facilities, including the electricity used to operate them. I am not saying what the person did was right, but it is hardly a clear-cut case of theft. Depending on your major and the activities you choose, you use more or less of certain resources. If I sit in the library and write a book, using the library's electricity to power my laptop and the lights in the room, their wifi, restrooms, and water, and putting wear and tear on their books to do my research, have I stolen from the library when I sell the book that I wrote?

Re: If I was an admin... In this case it is a crime with a victim (whoever has to pay the electricity bill, more so if they lose some processing power on the machine(s) they paid for when they want to use it). From the university's point of view, sticking some affiliate/referral cookies (back in the day) for a few big-name sites on machines using the campus wifi could probably have been a tidy little earner at negligible cost to the uni/purchaser.

Putting a large cluster at load when it would otherwise be idle is going to consume a lot of extra power. These are old chips with much higher idle draw than new models; but even if putting them at load only increased consumption by 40W per processor, that's roughly a 40kW load from the cluster as a whole. At 10¢/kWh you'd be looking at about $4/hour in additional power consumed.
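Checking that arithmetic (the 40 W load-versus-idle delta is the commenter's assumption, not a measured figure):

```python
# Extra power cost of loading the whole cluster, per the assumptions above.
chips = 4096 // 4            # 1,024 Xeon E5410s
extra_watts_per_chip = 40    # assumed load-vs-idle delta
usd_per_kwh = 0.10

extra_kw = chips * extra_watts_per_chip / 1000    # ~41 kW
print(f"~{extra_kw:.0f} kW extra -> ~${extra_kw * usd_per_kwh:.2f}/hour")
```

Which lands right on top of the ~$4/hour revenue estimate above; at these rates, the mining would barely break even on electricity alone.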

Whoa there, assuming this is a student, that student pays some $40k/year in tuition and $2,500/year in student services fees. That money covers use of university facilities, including the electricity used to operate them. I am not saying what the person did was right, but it is hardly a clear-cut case of theft. Depending on your major and the activities you choose, you use more or less of certain resources. If I sit in the library and write a book, using the library's electricity to power my laptop and the lights in the room, their wifi, restrooms, and water, and putting wear and tear on their books to do my research, have I stolen from the library when I sell the book that I wrote?

We're talking about Harvard; you can be certain their user agreement for university resources includes not wasting them doing stupid stuff. That sort of oversight would be plausible if we were talking about a rural community college, not one of the nation's premier universities.

I got my Masters (IT) at Harvard and worked as a teaching fellow in the Comp Sci dept. They encouraged students to do any crazy thing you could think of with their resources.

Just had to ASK FIRST. They had the most rigid project proposal committee system, which was somewhat painful, but I never heard of any rejection. I think it was the Psych department that rejected proposals most often, since they were very touchy about messing with people's brains. But our dept always seemed up for messing with a CPU or two.

I was lucky for my thesis: one of my professors (who also ran a private lab on the MIT campus) hired me as a researcher. When I needed lots of power (my thesis project ran on thousands of cores), I had the east coast Mitsubishi Electric Research Lab at my disposal (the lights in Cambridge would dim when I hit "run").

I thought scrypt cryptocurrency was not supposed to support ASIC-type dedicated machines?

The first mention of ASIC on this page is... your comment. It was a bit surprising when a nice, efficient method was found for mining scrypt on GPUs, though. Oh well! At least Primecoin is still CPU-only, if that's the kind of thing you worry about.