Starin' at the Wall<br />Sometimes you just gotta take a step back and stare at the wall.<br />By JD Conley<br /><br /><h3>Worry Less. Do More. Be Fearless!</h3>2013-07-18<br /><br /><a href="http://2.bp.blogspot.com/-r64U1Rh2sgM/Ud-o19arL0I/AAAAAAAAAlc/6D0TrjXgfKU/s1600/5418_10151310297357117_1205031682_n.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-r64U1Rh2sgM/Ud-o19arL0I/AAAAAAAAAlc/6D0TrjXgfKU/s320/5418_10151310297357117_1205031682_n.jpg" width="240" /></a><b>"we better have health insurance"</b><br /><b><br /></b><b>. . .</b><br /><b><br /></b><b>"peyton got hurt"</b><br /><br />These are not the text messages you want to receive from your wife on Mother's Day, when you are four hours away from home.<br /><br />Our daughter Peyton is four years old. She has a twin brother. They scare me daily. I assumed she had broken her arm on the swings, her wrist on her bike, or something like that. Nothing to worry too much about.<br /><div><br /></div>About a month earlier I had quit my awesome job at Disney Interactive making mobile games to dive in and cofound a startup,&nbsp;<a href="https://www.realcrowd.com/">RealCrowd</a>. A longtime friend and cofounder, Andy, was at my parents' house with me, and we were on our way to the Bay Area to meet with our other cofounders and live under one roof.
We had been interviewed and accepted into the <a href="http://ycombinator.com/">Y Combinator</a> (YC) Summer 2013 batch and decided to get a jump start by moving to Silicon Valley a few weeks before the program was scheduled to officially start.<br /><br />I had been working from home for the past few years, ever since my previous startup (Hive7) was acquired, and consequently we are a very close family. The twins are a lot of work, and I try to support my wife Erika by being as flexible with my time as possible. I was already feeling the intense guilt of moving four hours away to Mountain View and relegating my family to second priority for a few months.<br /><br />"Is this startup going to be worth it? Of course this happened the day I'm supposed to go to the Bay Area. Am I being too selfish?" These questions haunted me.<br /><br />. . . and then the texts got worse.<br /><br /><b>"she fell out of the window"</b><br /><br />A few weeks before this fateful Mother's Day we had moved into an affordable three-story condominium in Stateline, Nevada (South Lake Tahoe area) with a great lake and mountain view, cutting our monthly rent in half and saving a bit on income taxes (there are no state income taxes in Nevada). I was cutting costs to help decrease my burn. Andy had planned to call it his second home while we hacked on RealCrowd, prior to being accepted into YC.<br /><br /><a href="http://1.bp.blogspot.com/-V41Vj1Y2kpU/Ud-W2rZb6-I/AAAAAAAAAkw/7WAOZJbydhc/s1600/Window+Close.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-V41Vj1Y2kpU/Ud-W2rZb6-I/AAAAAAAAAkw/7WAOZJbydhc/s320/Window+Close.jpg" width="240" /></a>I visualized every window of our three-story home, floor by floor. Adrenaline started pumping. My heart rate increased. I had to consciously suppress the panic building inside. I <i>knew</i> the worst possible scenario had transpired.
She had fallen through the screen <b>from the third-story window</b> in the dining room, <b>onto the asphalt driveway below</b>. I almost vomited. At this point I was oblivious to the conversations people were trying to have with me.<br /><br />I immediately tried to call my wife. No answer. Again. No answer. Again. No answer. Then it dawned on me that she was probably on the phone with 911, and that was why she had been texting.<br /><br />"can she move? is she conscious?" I replied. "keep her still. it'll be ok. i'm coming. love you." I added.<br /><br /><b>"yes"</b><br /><br />That was the most amazing "yes" I had ever received. It trumped our marriage proposal, every deal close, every VC commitment, every new customer.<br /><br />I looked up from my phone and told my parents I had to leave: Peyton had fallen out of a window at home. I jumped in the driver's seat of Andy's car and dragged him along with me to the hospital. He tried to make conversation to get my mind off of analyzing all the possible outcomes. He's a great friend. Due to the nature of the event, Peyton was life-flighted to a hospital in Reno, NV. Erika had to find a ride, as she couldn't go in the helicopter. It took her about an hour, and me about three hours, to get to Reno.<br /><br /><b>Peyton had suffered multiple cervical spine injuries. Her neck was broken!</b><br /><b><br /></b><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-TBZL4eiB8XA/Ud-ghUb1OuI/AAAAAAAAAlM/GPJGeuNQ2EE/s1600/Peyton+post+op.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-TBZL4eiB8XA/Ud-ghUb1OuI/AAAAAAAAAlM/GPJGeuNQ2EE/s320/Peyton+post+op.jpg" width="240" /></a></div>Those first few days after the event are a blur in my memory, full of extreme sadness, happiness, worry, guilt, lack of sleep, and everything in between. Would she be paralyzed? Would she live?
I researched statistics on these things. The stress of not knowing ate me alive. Peyton had surgery to align her cervical spine and had a halo traction device applied. She woke up from anesthesia and the nurse did some tests.<br /><br /><h3>Peyton could wiggle her fingers and toes and apply normal amounts of pressure!</h3><br />She walked a few steps the next day. And then a few more. But we lived in a three-story condo. There was no way Peyton could navigate the stairs. My sister took the initiative and managed a crew of our amazing friends and family, including Andy, to move us out of the home we had moved into just a few weeks earlier and into a storage unit. We'd be taking our clothes to my in-laws' single-story house in the Sacramento area, and my family would move there for the treatment period, near a cervical spine specialist at UC Davis Medical Center.<br /><br />For the next week I was not sure if I'd be co-founding this company after all. Peyton would wake up in the middle of the night in extreme anger, pulling at the bars and screaming for us to take the halo off of her. She was frustrated by her lack of mobility. She didn't want to try to walk. She was depressed and angry most of the time. <i>I couldn't leave her.</i><br /><br />But, thankfully, things got better. By the end of the first week she had adapted to her new mobility limitations and the daily cleaning of the screws in her head (ouch), and started asking for help when she needed it. She stopped waking up in the middle of the night <i>every</i> night. She would walk wherever she wanted to go.
We took her to Chuck E. Cheese's.<br /><br /><a href="http://4.bp.blogspot.com/-pSdcB83bFQQ/Ud-sJ29eGEI/AAAAAAAAAls/vW1sKfW_j5o/s1600/971746_10151413526117117_1345713512_n.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-pSdcB83bFQQ/Ud-sJ29eGEI/AAAAAAAAAls/vW1sKfW_j5o/s320/971746_10151413526117117_1345713512_n.jpg" width="240" /></a>At the end of week two I knew Peyton would be able to deal with the halo, and my amazing wife encouraged me not to pass up the great opportunity that is RealCrowd and YC.<br /><br />Peyton is still wearing the halo, and will be for a total of 12 weeks. I see her on weekends. My little girl is <b>alive</b>. But life is not normal. She has a very serious risk of infection (in fact, she's on antibiotics for one now), a risk of losing traction, and a risk of the screws puncturing her skull. She can't get her vest wet. She can't run and jump and tumble. She has a twin brother who gets to do all these things. She gets frustrated every day. At the end of dealing with all of that, we'll know whether she healed or whether we have to do a more invasive surgery to fuse some bits together. I worry about her constantly.<br /><br />Will anyone use our product? Can we close that deal? Will our business keep scaling? Will we meet our goals? Can we hire that person? What if ... ? These questions no longer keep me awake at night. I do not fear the outcome. I keep pressing on. We're building a business, not dodging death. Perspective is everything.<br /><br /><h3>Worry Less. Do More. Be Fearless!</h3><b><br /></b>Oh, and yes, we did have health insurance.<br /><br /><b>Public service announcement:</b> If you have a window that's near the floor, make sure nobody can pass through the screen.
Many state/local building codes say openings on the 2nd story or higher that are less than 24" off the floor must not permit a 3" sphere to pass through (i.e., you need bars on the windows or a way to restrict how far they open). Ironically, this can conflict with fire codes, so your mileage may vary. This applies to deck and stairway railings as well. If you are a landlord, check out your rental properties and make sure they're compliant too.<br /><div><br /></div>Posted by JD Conley<br /><br /><h3>Install node.js in 10 Seconds or Less</h3>2011-08-01<br /><br />I've been playing around a bit with <a href="http://nodejs.org/">node</a> recently. One thing it's quite good at is quickly bringing up a developer workstation. You can just run a quick command line to have the server running, rather than futzing with config files, virtual directories, software installations, and such. When your team uses different development platforms (Linux, Mac, Windows) or your development environment is different from production, a bit of scripting makes it much easier to ensure everyone is on a level playing field.<br /><br />I found myself having to remember too many commands when setting up node for my application on two different OS X laptops. I wanted to be able to quickly get node up and running for a non-privileged user account and with node modules installed locally for the application. This is ideal when you're working with multiple projects that may use different versions of node or different versions of modules.<br /><br />The key ingredient is the great version management tool called <a href="https://github.com/creationix/nvm">nvm</a> (Node Version Manager). It lets you easily download, compile, and associate a version of node with a particular terminal session.
Combined with <a href="http://npmjs.org/">npm</a> (Node Package Manager), making a setup script for an application is nice-n-easy. I whipped up this little script for a recent prototype project. Copy and modify as desired.<br /><br /><pre>#!/bin/sh<br /><br />#A script to setup the node environment using nvm and npm.<br />#This is intended to be run on developer workstations. <br /><br />#Check for well known prereqs that might be missing<br />hash git 2>&- || { echo >&2 "I require 'git'."; exit 1; }<br />hash make 2>&- || { echo >&2 "I require 'make'."; exit 1; }<br />hash python 2>&- || { echo >&2 "I require 'python'."; exit 1; }<br /><br />#Clean up old stuff.<br />rm -rf ~/.nvm<br />rm -rf node_modules<br /><br />#Download, source, and update nvm's package list<br />git clone git://github.com/creationix/nvm.git ~/.nvm<br />. ~/.nvm/nvm.sh<br />nvm sync<br /><br />#Install latest stable version of node<br />#Change 'stable' to a version number to install a specific node<br />#See `nvm help` for more info<br />nvm install stable<br />nvm use stable<br /><br />#Install all the modules we desire<br />node_modules="mime qs hashlib connect ejs express hiredis policyfile redis uglify-js socket.io-client socket.io multi-node generic-pool"<br />for m in $node_modules;<br /> do npm install $m;<br />done;<br /></pre><br /><ol><li>Copy and paste this script into an installnode.sh file</li><li>chmod a+x installnode.sh</li><li>./installnode.sh</li></ol><br />Make sure to run the script in the directory where you want the node_modules to be installed. That's it! Happy node hacking.<br /><br />Also, this script is... destructive. 
If you don't want to download and rebuild node every time, you might want to remove the `rm -rf ~/.nvm` line.<br /><br />Posted by JD Conley<br /><br /><h3>How to Recruit Great Engineering Talent</h3>2011-07-28<br /><br /><a href="http://www.betabeat.com/2011/07/27/tech-recruiters/">Raiders of the Last Nerd</a> made the front page of Hacker News yesterday and went a bit viral around the geek ecosystem. For good reason. Fueled by accelerating technology innovation, VC money, and sometimes even profit, the job market for experienced engineering talent has been heating up all over the place. In recent months I have received an email or phone call from at least one recruiter per day, and I haven't even worked for Google or Facebook or one of those other Golden Names. Last month, out of sheer frustration with the lack of quality, I wrote an open letter to recruiters titled <a href="http://blog.jdconley.com/2011/06/dearest-recruiter.html">Dearest Recruiter</a>. I'd like to expand on that a bit.<br /><br /><b>The recruiting industry is broken.</b> I'm not talking about in-company recruiters here, but those outside agencies like the one Mr. Carvajal runs. There is a horde of non-technical, outgoing salespeople trying to court highly analytical, mildly autistic geeks. After fielding calls and emails from recruiters for the last 12 years I've grown a pretty thick skin and have become very defensive. When I speak to a recruiter I assume everything they say is an attempted manipulation. I know that both I and the company for which they are recruiting are getting ripped off. I will often just hang up on them.
They remind me of the slime I had to talk to every night during dinner before the <a href="https://www.donotcall.gov/">Do Not Call Registry</a> stopped most cold-call telemarketers in their tracks.<br /><br />But on the other side of the coin, many geeks don't know their value, or don't know how to assert it, and the recruiters take advantage of that. Let's say you are recruited through one of the bigger tech recruiting firms such as TEKsystems or RHI. While you're on contract, they'll probably take at least a 50% cut. Expect it to be much more if you're inexperienced and don't negotiate.<br /><br /><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">When I started out in the industry I was much less jaded. In 1999, after dropping out of my second semester of college, I was referred by a family friend and took my first programming job at a ski boat manufacturing company. School was way behind the curve on technology and utterly boring for me. Like most geeks in my generation, I'd been messing with computers since I was five years old and programming somewhere shortly thereafter (it's a fuzzy memory now). I grew up in a small agricultural town and had no idea what wages should be or how to find out. This job paid a whopping $8/hr when minimum wage was $4.25. By 2001 I was making a stellar $13/hr. My friends thought I was rich, as I was living above the poverty line. But I got bored at that job, so I found this cool web site called Dice.com where I could post my resume. This was my first experience with recruiters.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">They were all very nice people, these recruiters. They saw some fresh meat in me. A sucker. They pitched me to their client as this awesome young rock star. I was paid $35/hr (~$70k/yr).
I later found out the company was billed more than 2x that amount. Was I really worth more than double what I was paid? And why didn't they tell me? My next position was also through a (different) contracting company. I was paid $45/hr (~$90k/yr) with 3 years of professional experience. I found out well after the fact that they billed about $100/hr. That really got me thinking about recruiters.</div><div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">If you're a company and have a contractual relationship with a recruiting firm for direct hires, they'll probably take something like a 25% cut of the first year's salary of whoever they refer to your company. Of course it's in the contract. We had one of these at Hive7. On top of that, as a manager, I've only had mild success with talent acquired through recruiting firms. The best hires I've made have always been through my own network via referrals. Ouch!</div></div><div><br /></div><div><b>Transparency is recommended.</b> Us geeks are information and knowledge addicts. We learn and digest everything we possibly can. We embrace transparency. Just witness the popularity of the Open Source Software movement and how much value we place on being a part of it.</div><div><br /></div><div>Look. We know how the recruiting industry works. This antiquated people-trade reminds me of another slimy role that is currently being blown to pieces. If you tried to buy a new car 10 years ago you'd have to go to a dealership, haggle with someone, get ripped off anyway, and walk away feeling dirty. Nowadays when I buy a car I do so online. I send emails to as many dealerships as I would drive to and ask for quotes. They provide quotes, and many provide a copy of the actual invoice they received when purchasing the car from the manufacturer.
Of course they also receive some kickbacks on the back end for volume and other special programs, but you can still walk away feeling like you weren't completely ripped off. The salesperson makes a hundred bucks for a few minutes' work and you get a car.</div><br />I'm sure we'll see this level of transparency and marginalization of recruiters in the next 10 years. Just like the car salesman, technical recruiters are becoming largely irrelevant. Online social tools like Facebook, LinkedIn, Twitter, and Google+ are expanding the talent network one has access to. Up-and-coming geek-oriented job sites like <a href="http://careers.stackoverflow.com/">Stack Overflow Careers</a> are putting the tools into the hands of the hiring manager or HR department. All that's missing is a good aggregator of all of these that can send me positions that would actually be interesting to me.<br /><br />I don't know much about the tech scene in NYC, but I have been contacted by recruiters there. They haven't had any more to offer than people over here in the San Francisco Bay Area. They're all the same slimy salespeople trying to convince you of their golden opportunity. I was going to use a used car salesman analogy here again, but I think something more fitting would be Viagra spammers. It is apparent that recruiters blindly send emails to hundreds of people hoping there will be some sort of a response.<br /><br />Most of the job reqs that come my way via recruiters are for Developer Lead or Senior Engineer type positions. Salaries are usually in the $100k-$150k range. There is the occasional Director-level position, but salaries are roughly the same. After reading an article like Raiders of the Last Nerd, you'd think that we were back in the .com boom with companies throwing sports cars and huge signing bonuses around. But that's just not true.
The cases are much more isolated.<br /><br /><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">In the interview Dave Carvajal stated that “We came from a place over the last two years where people were going to start-ups for below market [rates]. People aren’t necessarily going to do that now.” This is partly true. But it really depends. It shows a lack of understanding of how the geek mind works. Maybe in the financial sector it's all about the money, but to a great hacker it's more about the project.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">The best engineers out there <b>do not work for the money</b>. Sure, they'll calculate how much their upside is at various acquisition prices, but they don't really care. Things like solving interesting problems, using new technology, making a visible impact, and working with a fun team are much more compelling to those who would work for a startup.</div></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><b>For startups it's about the equity.</b> We're in the middle of a startup boom. Services like <a href="http://angel.co/">AngelList</a> are making capital more easily accessible to good entrepreneurs, and incubators like <a href="http://ycombinator.com/">Y Combinator</a> are teaching young entrepreneurs the ropes.
These startups are willing to offer large amounts of equity to early employees in exchange for sub-market pay rates. Any decent founder out there deeply believes her company is going to succeed, and thus believes her equity is worth significantly more than any amount of cash. That, and they don't have much cash to throw around. There are a lot of very small funding rounds of &lt; $1MM happening. That doesn't give a company without significant revenue much buying power if they want that money to last a year.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Do you want to be successful in recruiting someone? Try this on for size.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b><br /></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Show me MY money. </b>On first contact, tell me how much money I will make. The exact dollar figure. If you would be willing to pay $300k/yr for the right person in a role, just put that amount in the job listing. Us engineers know that there is a <a href="http://haacked.com/archive/2007/06/25/understanding-productivity-differences-between-developers.aspx">very high variance</a> between the least productive and most productive members of our kind. Why not try paying for that?</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b><br /></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Tell me about the project.</b> The project is just as important as the cash, if not more so. Tell me what it is. Don't describe it in weird abstract terms you don't understand.
If you can't tell me what it is, don't bother talking to me.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b><br /></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Show me YOUR money. </b>As a recruiter, how much money are you going to make on the deal? In our heads we're already doing the math and assuming you're ripping us off. You might as well tell us, and break it down by the hour.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Don't ask how much I make.</b> We know that asking how much a person makes right now is really just gathering negotiation leverage. It's also wasting our time. You'll take that number, go back to the company, come back with a new number, and blah blah blah.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Tell me the company you're recruiting for.</b> The abstract job req is useless. Just say the name of the company. I will want to research them. It's more about the company fit than the particular job they're hiring for. People almost never end up doing exactly the thing they were originally hired to do.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Let me talk.</b> Recruiters all too often steamroll conversations. This is super annoying. That's not how you sell.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Don't call me.</b> Oh yeah, and avoid calling me at all costs.
Email or LinkedIn or text or something is much preferred. Phones are horrible. I hate talking on them. I'm not alone.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Don't pretend you are technical.</b>&nbsp;You bring up some recent tech news or talk about how you used to make web sites for fun or how awesome "that C language" is. We know you are just manipulating us to try to get us to open up to you. Give it up. Admit you're just a Pimp trying to pick up a new Ho.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Have something compelling.</b> When you're recruiting experienced talent out of a comfortable position you had better have something great. Either that's a truckload of cash (think multipliers on current salary), a first-employee position, a responsibility level-up, or a really freaking cool project. A combination of this stuff is preferred. Mention it upfront and don't dance around the important facts. We do not want to interview without knowing this stuff.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br /></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">It's a strange world we are living in right now. Jobless rates are insanely high, and here I am complaining about too many people wanting me to interview for jobs. 
It makes me feel extremely lucky to be in this industry.</div>Posted by JD Conley<br /><br /><h3>Dearest Recruiter</h3>2011-06-27<br /><br /><i>Update:</i> I expanded on this a bit over here: <a href="http://blog.jdconley.com/2011/07/how-to-recruit-great-engineering-talent.html">http://blog.jdconley.com/2011/07/how-to-recruit-great-engineering-talent.html</a><br /><p>I'm taking some time out of my lunch break today, when I'd much rather be indulging in some geek porn over at <a href="http://news.ycombinator.com/">Hacker News</a>, to submit a plea on behalf of all the in-demand geeks out there.</p><p>You probably found my resume, my consulting company, my blog, my LinkedIn, my Facebook profile, the MySpace profile I haven't taken the time to delete, my Stack Exchange profile, and some personal blogs about my kids via Google. Did you actually read any of it? No? I am an entrepreneurial generalist who prefers to balance business, product, and technology. I do not want a heads-down, order-following engineering role.</p><p>Yes, I joke about golden handcuffs, but I am happy in my current position. I work from home for the most part. I am working on autonomous startup-like projects that provide great satisfaction. I have some cool benefits and awesome perks. Oh yeah, and I'm paid well.</p><p>But, I digress. I do enjoy money. I am human, and a good American consumer. I like to buy stuff. My wife likes to buy stuff. We like vacationing. It's fun. So I'll make you an offer. I will consider your position if you can provide compensation like <a href="http://adtmag.com/Articles/2011/05/27/What-Highest-Paid-Programmers-Earn.aspx?Page=2">Sergey Aleynikov</a>'s. Let's round it up to $500k/yr total compensation, and the work had better be self-rewarding as well. Yes, I'm serious.</p><p>Do I feel bad for asking for such a huge sum?
No, I don't. Am I entitled to it? Probably not. But I'm happy in my position, and a few thousand dollars extra will not change my mind. I am not interested in a market-competitive salary. I am not interested in being the 25th employee at the next great Groupon clone. I understand how company capitalization and ESOPs work.</p><p>Now, I might be interested in being the first employee at a startup or a technical cofounder, but it had better be really darn interesting. I've got my own ideas and a stack of half-finished prototypes to productize. :) On that note, consulting arrangements at &gt; $200/hr will also be considered.</p><p>Stop contacting me with inappropriate jobs that have nothing to do with my experience. Stop attempting to relocate me to Indiana. Stop asking what it would take for me to move to a new position. Stop wasting my time. Please. Stop.</p><p>Best Regards,</p><p>The In-Demand Geek</p>Posted by Joel<br /><br /><h3>ValidateInput Attribute Doesn't Work in ASP.NET 4.0</h3>2010-04-19<br /><br /><p>Today I decided to upgrade some of our new projects (top secret, shhh) to <a href="http://weblogs.asp.net/scottgu/archive/2010/04/12/visual-studio-2010-and-net-4-released.aspx">Visual Studio 2010</a>, ASP.NET 4.0, and <a href="http://weblogs.asp.net/scottgu/archive/2010/03/11/asp-net-mvc-2-released.aspx">ASP.NET MVC 2.0</a>. There are about a million new features that look quite useful in all of these new releases. We have some fairly complex projects, so I was expecting a few speed bumps, but not this one.</p><p>It seems with every new release Microsoft adds annoying (isn't security always?) features to protect us from ourselves. Way back when, they added the idea of <a href="http://www.asp.net/learn/whitepapers/request-validation/">Request Validation</a>.
If ASP.NET thinks a user is posting something "bad" to the server (i.e., things that lead to XSS attacks), the request is denied. This is cool except when you want to, say, allow the user to input HTML, have a web service that takes XML as a form parameter, or use a ":" in your URL. In ASP.NET Web Forms you work around this feature by turning it off at the page level, or globally in your web.config through the validateRequest option. In MVC you use the ValidateInput attribute on your action.</p><strong>This is really, really important:</strong><br /><p>In .NET 2.0-3.5 the runtime only validated requests sent to .aspx pages, but <a href="http://www.asp.net/learn/whitepapers/aspnet4/breaking-changes/#0.1__Toc256770147">that has now changed</a> and <strong>any</strong> request will be validated, even if it is sent to a custom handler or an MVC application.</p><p>ASP.NET MVC implements its own request validation, which is also on by default. To turn it off you simply slap a ValidateInput(false) attribute on your controller action. This is fine and dandy, except with the latest ASP.NET 4.0 changes, <a href="http://connectppe.microsoft.com/VisualStudio/feedback/details/543069/validateinput-false-not-working">it no longer works</a> and an exception is thrown. So you might see an error like:</p><p><strong>A potentially dangerous Request.Form value was detected from the client</strong> or <strong>A potentially dangerous Request.Path value was detected from the client</strong><br /></p><p>The workaround is pretty easy. Just follow the instructions on the <a href="http://www.asp.net/learn/whitepapers/aspnet4/breaking-changes/#0.1__Toc256770147">ASP.NET 4.0 Breaking Changes</a> page. Stick this XML in your web.config to revert to the behavior as it was in ASP.NET 2.0-3.5.
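The setting in question (per the linked Breaking Changes page) is the requestValidationMode attribute on httpRuntime; a minimal sketch, to be merged into your existing &lt;system.web&gt; section rather than copied wholesale:

```xml
<configuration>
  <system.web>
    <!-- Revert to the ASP.NET 2.0 request validation behavior -->
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>
```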
This will put request validation back in the hands of the ASP.NET MVC engine and your ValidateInput attribute will start working again.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-61472560106499560632010-03-09T16:18:00.000-08:002011-07-27T00:02:30.617-07:00Mr. Sprite Sheet, Meet Ms. MovieClip<p> In <a href="http://pushbuttonengine.com/devgallery/youtopia">Youtopia</a> we wanted to have animations. We also wanted to have thousands of buildings on the screen at once. Anyone who has done a lot of Flash development will tell you that these two things are not compatible. You simply cannot create that many movie clip instances and have them playing. But, all is not lost! With a sprite sheet animation system like the one in <a href="http://pushbuttonengine.com/">PushButton Engine</a> (PBE) you can have your cake and eat it too! </p><p> Sprite sheets are as old as video games that have pre-rendered art. The basic idea is you draw a bunch of frames of an animation (or multiple animations) and put them on a single image. You can see some examples <a href="http://www.mariomayhem.com/downloads/sprites/super_mario_bros_sprites.php">online</a>. The software knows how to read and draw a frame of the desired animation from that one big graphic, and that's what you see on the screen. Easy! </p><p> However, sprite sheets also have their downsides. First, they aren't made to scale. We're talking bitmap graphics here. Your resized image is only going to look as good as your resizing code can make it look (which usually is not very good). Second, animations with a lot of frames can create very large file sizes. A five-second animation at 30 frames per second means you end up with all 150 frames of the animation on a single image. </p><p> This is where movie clips come in. "But wait," you say with bewilderment, "didn't you tell me earlier we couldn't use movie clips?" Why yes I did, so let me explain. 
Flash was designed from its inception to create small file sizes for downloads. It was also designed to support resizing. This is done using <a href="http://en.wikipedia.org/wiki/Vector_graphics">vector graphics</a> which can be animated in a movie clip. We should take advantage of this feature. </p><p> One of the bits of code I contributed to PBE was the SWFSpriteSheetComponent. It takes an instance of a MovieClip and converts each frame into a bitmap. It then exposes these bitmaps to the PBE rendering engine as if they were part of a sprite sheet. Voila! You get the best of both worlds. A small download size, the option to render your animation at any scale, and super awesome performance. Using this technique and well-drawn vector graphics we were able to save over 5X on the download size of Youtopia as compared to sprite sheets and get the same performance! </p><p> NOTE: If you're not familiar with PBE you might want to skim through <a href="http://pushbuttonengine.com/docs/">the docs</a> before diving into my code below. I'd highly recommend the video talks. The composite entity architecture can throw you for a loop if you're not used to seeing it. </p><p> Using the SWFSpriteSheetComponent is really easy and I've created an example application to show it off. The application spawns a sleeping <a href="http://coderhump.com/">Ben Garney</a> every second and makes him move. There are some animated zzz's to indicate that, regardless of the smile, he is in fact sleeping. Ben is the lead developer on PBE, so he's getting picked on. ;) <a href="http://jdconley.com/garney/">Click here</a> to see the app. You can also <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajNTI5NDBiMjktOWFlMC00M2JiLTk0ZWUtNmY0MGVmNzc3MzUx&hl=en_US">download the source</a> and follow along at home. </p><p> The process starts by creating and exporting a MovieClip in your fla with a flash.display.MovieClip base class. 
Give it a class name that you're not going to forget. In this example we call it "z_fx". Then publish the swf and put it somewhere that your PBE application can find it. The example has a "res" folder where I put the effects.swf. The CS3 format effects.fla (created for me by Jesse Tudela here at Hive7) is in the res folder in the <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajNTI5NDBiMjktOWFlMC00M2JiLTk0ZWUtNmY0MGVmNzc3MzUx&hl=en_US">download</a> if you want to check it out. Now on to the code! </p><p> For easy deployment we embed the effects.swf and the garney.png file using PBE's ResourceBundle like so:<br /></p><pre class="brush: as3">package<br />{<br /> import com.pblabs.engine.resource.ResourceBundle;<br /><br /> public class Resources extends ResourceBundle<br /> {<br /> public static const EFFECTS_PATH:String = "../res/effects.swf";<br /> public static const GARNEY_PATH:String = "../res/garney.png";<br /> <br /> [Embed(source="../res/effects.swf",mimeType='application/octet-stream')]<br /> public var effects:Class;<br /><br /> [Embed(source="../res/garney.png",mimeType='application/octet-stream')]<br /> public var garney:Class;<br /> }<br />}<br /></pre><p>Now on to our "main" method. We start off by creating a SceneView, which is the target where PBE will draw stuff. Then we call startup, load our embedded resources, and create the scene. This is all standard PBE initialization. A ThinkingComponent is added to the scene entity in order to spawn Garney instances based on the game's virtual time. 
And finally, we register our garney entity factory with the TemplateManager and spawn a Garney!<br /></p><pre class="brush: as3">// The SceneView is where PBE will draw to<br />var sv:SceneView = new SceneView();<br />addChild(sv);<br /><br />// Start the logger, processmanager, etc<br />PBE.startup(this);<br /><br />// Embed my resources<br />PBE.addResources(new Resources());<br /><br />// Create a basic scene through code<br />var scene:IEntity = PBE.initializeScene(sv);<br /><br />// ThinkingComponent is an efficient Timer based on virtualTime rather than real time<br />scene.addComponent(new ThinkingComponent(), "spawnThinker");<br /><br />// Register the callback for my "garney" template that will be used to instantiate garneys<br />setupTemplate();<br /><br />// Spawn a garney then kick off the timer. They keep coming, eeek!<br />spawn();<br /></pre><p> The setupTemplate method is where all the interesting stuff happens. In here we choose all the components that make up our entity and determine how they relate to each other.<br /></p><pre class="brush: as3">private function setupTemplate():void<br />{<br /> PBE.templateManager.registerEntityCallback("garney",<br /> function():IEntity<br /> {<br /> var e:IEntity = PBE.allocateEntity();<br /> <br /> // Spatial component knows where to put the garney<br /> var spatial:SimpleSpatialComponent = new SimpleSpatialComponent();<br /> spatial.spatialManager = PBE.spatialManager;<br /> <br /> // Rendering component knows how to draw the garney<br /> var render:SpriteRenderer = new SpriteRenderer();<br /> render.fileName = Resources.GARNEY_PATH;<br /> render.positionProperty = new PropertyReference("@spatial.position");<br /> render.scene = PBE.scene;<br /> <br /> // Here's the SWFSpriteSheet magic!<br /> var fxSheet:SWFSpriteSheetComponent = new SWFSpriteSheetComponent();<br /> fxSheet.swf = PBE.resourceManager.load(Resources.EFFECTS_PATH, SWFResource) as SWFResource;<br /> fxSheet.clipName = "z_fx";<br /> <br /> // Fx 
Rendering component knows how to draw the z's from the spritesheet<br /> var fxRender:SpriteSheetRenderer = new SpriteSheetRenderer();<br /> fxRender.positionProperty = new PropertyReference("@spatial.position");<br /> fxRender.positionOffset = new Point(30, 10);<br /> fxRender.scene = PBE.scene;<br /> <br /> // Need an animation controller to assign and animate the sprite sheet on the renderer<br /> var animator:AnimationController = new AnimationController();<br /> animator.spriteSheetReference = new PropertyReference("@fxRender.spriteSheet");<br /> animator.currentFrameReference = new PropertyReference("@fxRender.spriteIndex");<br /><br /> var idle:AnimationControllerInfo = new AnimationControllerInfo();<br /> idle.loop = true;<br /> idle.spriteSheet = fxSheet;<br /> idle.frameRate = 30; // In PBE your animation framerate can be independent of your stage framerate<br /><br /> animator.animations["idle"] = idle;<br /> animator.defaultAnimation = "idle";<br /> <br /> // Garneys self destruct after a random amount of time<br /> var suicide:ThinkingComponent = new ThinkingComponent();<br /> <br /> // Add all the components to the entity<br /> e.addComponent(spatial, "spatial");<br /> e.addComponent(render, "render");<br /> e.addComponent(fxSheet, "fxSheet");<br /> e.addComponent(fxRender, "fxRender");<br /> e.addComponent(animator, "animator");<br /> e.addComponent(suicide, "suicide");<br /> <br /> e.initialize();<br /> return e;<br /> });<br />}<br /></pre><p> The "garney" template is made up of six distinct components. Each of these components performs a small piece of highly specialized work. I created two components for rendering, one for the garney sprite and one for the animated z's. The z's use a SpriteSheetRenderer and a SWFSpriteSheetComponent. Both renderers are positioned based on the position property on the spatial component. 
The AnimationController is a very powerful class that lets you do things like automatically change out the animation being rendered based on an event firing. But, that's probably a post for another day. In this case it just plays the z's animation on the fxRender component. </p><p> All of this gets tied together in the spawn method, which creates a new garney entity based on the template, assigns it some random values for position and velocity, makes sure it draws the most recently spawned entity on top, picks a random time for the entity to commit suicide, and schedules the next spawn.<br /></p><pre class="brush: as3">private function spawn():void<br />{<br /> // Create a garney!<br /> var garney:IEntity = PBE.templateManager.instantiateEntity("garney");<br /> <br /> // Randomly position a garney!<br /> var spatial:SimpleSpatialComponent = garney.lookupComponentByName("spatial") as SimpleSpatialComponent;<br /> spatial.position = new Point(Math.random() * 800, Math.random() * 600);<br /> spatial.velocity = new Point(Math.random() * 50 * (Math.random() &lt; .5 ? -1 : 1), Math.random() * 50 * (Math.random() &lt; .5 ? 
-1 : 1));<br /> <br /> // Choose when this garney commits suicide, up to 20,000 virtual MS from now<br /> var suicide:ThinkingComponent = garney.lookupComponentByName("suicide") as ThinkingComponent;<br /> suicide.think(garney.destroy, Math.random() * 20000);<br /> <br /> // Set the zIndex so our components render consistently in spawn-order<br /> var render:DisplayObjectRenderer = garney.lookupComponentByName("render") as DisplayObjectRenderer;<br /> render.zIndex = _zIndex;<br /> <br /> var fxRender:DisplayObjectRenderer = garney.lookupComponentByName("fxRender") as DisplayObjectRenderer;<br /> fxRender.zIndex = _zIndex;<br /> _zIndex++;<br /> <br /> // Grab the global spawn thinking component and schedule a think<br /> var thinker:ThinkingComponent = PBE.lookupComponentByName("SceneDB", "spawnThinker") as ThinkingComponent;<br /> thinker.think(spawn, 1000);<br />}<br /></pre><p> That's all there is to it! There are some caveats, though. Make sure you keep your source MovieClips really simple. Just like it is CPU intensive to play a complex MovieClip, it is CPU intensive to render each frame to a bitmap. In addition, nested clips with separate timelines will not play, and AS3 code in the clip that is not driven by the timeline will not be executed. This works really well for simple frame-based animations, but is not designed for complex interactive clips with tweening. YMMV. </p><p> Let me know if you have any questions or if you're interested in any other PBE-related topics. Also, PBE 1.0 is out! Download it from the <a href="http://pushbuttonengine.com/download/">project site</a>. 
This example includes the 1.0 release swc.<br /> </p><ul><li><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajNTI5NDBiMjktOWFlMC00M2JiLTk0ZWUtNmY0MGVmNzc3MzUx&hl=en_US">Download the source code</a></li> <li><a href="http://jdconley.com/garney/">See the demo</a></li></ul>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-32984979827868633532009-11-22T15:48:00.001-08:002011-07-27T04:20:09.327-07:00Angry Players Make Sunday More Interesting<p>Youtopia has been growing quickly the last couple of weeks. It's fun to watch and the team is really excited about it. Of course, with the growth comes a lot of performance tuning of our code. Today we hit an issue I wasn't expecting at all. . .</p><p>We've been running Windows 2008, IIS7, and ASP.NET 3.5 in production for a while now, but haven't had to do much of any performance tuning. It <em>just works</em>, and is fast. Which is awesome!</p><p>But today, Youtopia was running slowly and requests were hanging so I investigated. The databases were performing normally and not having any locking issues. The network looked good. The memcached cluster was healthy. The queueing service looked great. The ASP.NET performance counters even looked good at first glance.</p><p>None of the diagnostic performance monitors I'd used in the past (such as Requests in Application Queue) showed the issue, but requests were absolutely being queued -- or otherwise not processed immediately. There were also plenty of free worker and IOCP threads. The only thing that clued me in was that the Pipeline Instance Count and Requests Executing counters were exactly the same (96) on all the servers. 
So I started investigating from there.</p><p>It turns out that <a href="http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx">due to the way the IIS7 ASP.NET integrated-mode threading model</a> functions, there is a (configurable) limit of 12 concurrent requests per CPU. We hit this limit in Youtopia today because we hold open requests for asynchronous Comet-like communications and there were over 288 people online simultaneously. Our three eight-core web servers each had 96 (8*12) people connected to them and weren't really serving any other requests. We aren't running into any thread configuration limits as the long-running requests are asynchronous and not using ASP.NET worker threads.</p><p>Here are a few great links that came out of my research.<br /> </p><ul><li><a href="http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx">http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx</a></li> <li><a href="http://support.microsoft.com/kb/816995">http://support.microsoft.com/kb/816995</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/ee377050%28BTS.10%29.aspx">http://msdn.microsoft.com/en-us/library/ee377050(BTS.10).aspx</a></li> <li><a href="http://blogs.msdn.com/webtopics/archive/2009/02/13/asp-net-hang-in-iis-7-0.aspx">http://blogs.msdn.com/webtopics/archive/2009/02/13/asp-net-hang-in-iis-7-0.aspx</a></li></ul><p>With ASP.NET 3.5 SP1 it boils down to a simple configuration file change. Use something like this in the aspnet.config file (in x64 it's at C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet.config). This is the default. 
Adjust maxConcurrentRequestsPerCPU to suit your needs.<br /></p><pre class="brush: xml">&lt;system.web&gt;<br /> &lt;applicationPool maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/&gt;<br />&lt;/system.web&gt;<br /></pre><p>In addition, the application pool needs to be configured to allow more requests. By default it only allows 1000 concurrent requests. This is done under the Advanced Settings for the application pool in the IIS 7 manager. Set Queue Length to 5000 to match this system level configuration.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-77932147383930958022009-11-16T11:30:00.000-08:002011-07-27T00:12:59.280-07:00Ditch Your Events (Part 1)<p> About four months ago Max, Hive7's <a href="http://corp.hive7.com/about/">Lawful Evil CEO</a>, decided we needed to take our games to the next level and build something fun and accessible that everyone who plays "those farming games" would want to play. We all brainstormed, pitched our ideas to the company, and everyone voted by comparing every idea against every other – I wish we had a digital photo of the giant matrix on the whiteboard. There were a bunch of great ideas, but in the end... I won! <a href="http://apps.facebook.com/you-topia/landing?ref=jd">Youtopia</a> was born. </p><p> Youtopia was released to the public about three months from its inception. Hats off to the dev and art team for pulling this one together. A new technology for the developers and fully animated objects for the art team led to much blood, sweat, and tears, but we got 'er done! Of course, we're still actively developing Youtopia, and there are lots of great things planned for the future! But, back to my tech article... </p><p> It's been a long time since I've stepped out of my comfort zone and learned a new (to me) technology. 
Don't get me wrong, I'm always experimenting with the latest .NET based thingie-ma-bobbers out there, but I haven't used a completely foreign development environment since C#/.NET came out over eight years ago. But for this project I needed to learn Flash/AS3, and it needed to be done yesterday. Luckily for me nobody else on our dev team knew Flash so I could still pretend like I knew what I was talking about and make lots of (un)educated architectural decisions without anyone being the wiser! </p><p> One such recent decision was to use an event-driven property binding system. Youtopia's engine is based on a great open source game engine, brought to you from some of the Dynamix/GarageGames people, called the <a href="http://www.pushbuttonengine.com/"> PushButton Engine</a> (or PBE). In PBE there is a class called PropertyReference. This class facilitates a late-bound approach for one component to read the value of a property (member variable or getter/setter) on another component. It's a pretty cool pattern, but requires you to poll the target component whenever you want to know if the property changed. This works fine when you're talking about 10s or 100s of components. But in Youtopia we have thousands of entities in the scene at once. We needed this binding to be event-driven. </p><p> Of course, with my .NET background I immediately reached for the <a href="http://msdn.microsoft.com/en-us/library/system.componentmodel.inotifypropertychanged.aspx"> INotifyPropertyChanged</a> pattern used in .NET's data binding infrastructure. With INotifyPropertyChanged it is the responsibility of the object owning the property to raise an event whenever a property value changes. Any listeners will then immediately know they need to poll for the new value if they want it. </p><p> This works great in .NET and is very performant. But in Flash, events are a whole other story. They are an extremely feature-rich subsystem that I don't really want to get into. 
In the end, all the features and memory allocations involved in raising an event add up to worse performance than we could accept for Youtopia. We need every bit of CPU power on that single Flash thread and really shouldn't be wasting it raising events. </p><p> So, I shamelessly copied the .NET patterns and brought them over to AS3. Let's start at the core. In order for things to perform their best, I <a href="http://troygilbert.com/2009/09/events-vs-callbacks/"> couldn't use</a> the built-in Events. Though Troy did the benchmarking legwork, he didn't provide an implementation we could use to register callbacks and call multiple functions. So, I wrote a MulticastFunction that behaves a whole lot like the <a href="http://msdn.microsoft.com/en-us/library/system.multicastdelegate.aspx"> MulticastDelegate</a> in .NET. Usage is really straightforward. </p><pre class="brush: as3">var func:MulticastFunction = new MulticastFunction();<br /><br />//register my listener callback<br />func.add(<br /> function():void <br /> {<br /> //this callback does amazingly cool stuff<br /> trace(&quot;hello from the callback&quot;);<br /> });<br /><br />//calls all the callbacks that have been added, in the order they were added<br />func.apply();<br /></pre><p> As you can see, dealing with the MulticastFunction is a lot like the EventDispatcher, but each MulticastFunction is only designed to be used for a single event. So, to use it for events, create a public getter on your class named something reasonable and add your callbacks to it. Done! </p><p> Ok, I realize I keep talking about event dispatching speed, but haven't put my money where my mouth is. I wrote some benchmarks of my own and here is the output with a release build, in the latest standalone Flash 10 player. It does five test runs. 
<a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajMTM5NmIzMTUtYTJiNC00MTcxLWFhOGYtMjRmM2Q2ODA0YThh&hl=en_US">Download the Source</a> </p><pre>running tests...<br />Event dispatching took 848ms<br />MulticastFunction took 355ms<br /><br />running tests...<br />Event dispatching took 846ms<br />MulticastFunction took 351ms<br /><br />running tests...<br />Event dispatching took 834ms<br />MulticastFunction took 352ms<br /><br />running tests...<br />Event dispatching took 836ms<br />MulticastFunction took 351ms<br /><br />running tests...<br />Event dispatching took 823ms<br />MulticastFunction took 343ms</pre><p> Yup, that's right. MulticastFunction is nearly 2.5x faster, and I haven't spent much time tuning it. For example, it's using an Array under the hood and doing more work than it needs to during the apply call. Events will also become less performant over time as you have to create (and potentially clone) Event objects for every dispatch, causing a lot of garbage collection pressure. 
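If AS3 isn't your thing, the core of the pattern is just a guarded callback list. Here's a minimal JavaScript sketch of the same idea (a simplification: it snapshots the list per dispatch, so adds and removes during a dispatch only take effect on the next apply, unlike the AS3 version below):

```javascript
// Minimal multicast-function: an ordered list of callbacks with add/remove,
// invoked synchronously by apply(). Duplicate adds are ignored.
class MulticastFunction {
  constructor() {
    this._functions = [];
  }
  add(fn) {
    if (this._functions.includes(fn)) return false;
    this._functions.push(fn);
    return true;
  }
  remove(fn) {
    const i = this._functions.indexOf(fn);
    if (i < 0) return false;
    this._functions.splice(i, 1);
    return true;
  }
  apply(thisArg = null, args = []) {
    // Iterate over a copy so callbacks can add/remove listeners safely;
    // changes are picked up on the next apply.
    for (const fn of this._functions.slice()) {
      fn.apply(thisArg, args);
    }
  }
}
```

No Event objects are allocated per dispatch, which is the whole point of the exercise.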
Here's the MulticastFunction, with lots of comments or you can <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajMTM5NmIzMTUtYTJiNC00MTcxLWFhOGYtMjRmM2Q2ODA0YThh&hl=en_US">download the source</a> </p><pre class="brush: as3">package com.jdconley<br />{<br /> /**<br /> * A wrapper that mimics the synchronous behavior of the MulticastDelegate used in .NET for events.<br /> * This doesn&#39;t support any of the async methods, as we don&#39;t have free threading here.<br /> * It also doesn&#39;t support return values.<br /> * See: http://msdn.microsoft.com/en-us/library/system.multicastdelegate.aspx<br /> */<br /> public class MulticastFunction<br /> {<br /> private var _functions:Array = [];<br /> private var _iterators:int = 0;<br /><br /> /**<br /> * Adds a function to be called when apply is called.<br /> * If the function is already in the list it won&#39;t be added twice.<br /> * Returns true if the function was added.<br /> **/<br /> public function add(func:Function):Boolean<br /> {<br /> var i:int = _functions.indexOf(func);<br /> if (i &gt; -1)<br /> return false;<br /><br /> //add new functions to the end so they are picked up live during an apply<br /> _functions.push(func);<br /> return true;<br /> }<br /><br /> /**<br /> * Removes a function to be called when apply is called.<br /> * Returns true if the function was removed.<br /> **/<br /> public function remove(func:Function):Boolean<br /> {<br /> var i:int = _functions.indexOf(func);<br /> if (i &lt; 0)<br /> return false;<br /><br /> if (_iterators == 0)<br /> _functions.splice(i, 1);<br /> else<br /> _functions[i] = null;<br /><br /> return true;<br /> }<br /><br /> /**<br /> * Synchronously applies all functions that have been added.<br /> * Functions can be safely added or removed during an apply and changes will take effect immediately.<br /> * Added functions will be called, and removed functions will not.<br /> **/<br /> public function apply(thisArg:*=null, 
argArray:*=null):void<br /> {<br /> _iterators++;<br /> var holes:Boolean = false;<br /> <br /> for (var i:int = 0; i &lt; _functions.length; i++)<br /> {<br /> var f:Function = _functions[i];<br /> if (f == null)<br /> holes = true;<br /> else<br /> f.apply(thisArg, argArray);<br /> }<br /><br /> //cleanup holes left by removing functions during this apply call.<br /> //if any of the function apply calls throws an error the state of _iterators will be off.<br /> //but, we&#39;ll only leak array slot memory if functions are removed.<br /> //putting a try/finally or try/catch block here significantly decreases performance.<br /> if (--_iterators == 0 &amp;&amp; holes)<br /> {<br /> for (i = _functions.length - 1; i &gt;= 0; i--)<br /> {<br /> if (_functions[i] == null)<br /> _functions.splice(i, 1);<br /> } <br /> }<br /> }<br /><br /> /**<br /> * Removes all functions from the list. Stops the current apply call, if there is one.<br /> **/<br /> public function clear():void<br /> {<br /> _functions = [];<br /> }<br /> }<br />}</pre><p> Although capture, bubble, weak references, and priority are handy features of the Flash eventing system, they're not always necessary and will hurt your performance when you might have thousands of them firing per frame.</p><p> In Part 2 we'll put this MulticastFunction to use in a more meaningful way with the INotifyPropertyChanged implementation.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-37591757214135876762009-11-13T14:49:00.000-08:002011-07-23T10:35:07.041-07:00Anyone still out there?<p> Wow, I haven't posted in a while. In recent months I've been focused intently on a few things.<br /> </p><ol><li>Babies! 
My wife and I had twins in February.</li> <li>Learning a new technology while shipping an <a href="http://apps.facebook.com/you-topia/landing?ref=jd"> amazing game</a> at Hive7.</li> <li>Working on a <a href="http://www.pushbuttonengine.com/">cool open source project</a>.</li></ol><p> I won't bore all you geeks with the baby stuff. If you can find the link to my personal blog you can go look at lots of pictures.</p><p> You should all check out <a href="http://apps.facebook.com/you-topia/landing?ref=jd"> Youtopia</a> (the new game we shipped). We're really proud of this one.</p><p> So, drumroll please... *in my most awesome announcer voice* And, the new technology is... Flash! That's right, this Microsoft fanboy is now in the Flash camp. I really wish I could be working with Silverlight, but well, you can't build a game that runs on Facebook and make people install something. It just won't work. Once Silverlight has a market share more like Flash Player, then we're in business.</p><p> What do I dislike most about Flash? The development environments (yes, plural) for Flash pale in comparison to Visual Studio. Compiling is slow. Stuff crashes a lot. Heck, I even got the compiler to throw a null pointer exception on a few occasions! Debugging is a pain. The garbage collector isn't very fast. You only have <strong>one</strong> thread to work with. Hey Adobe, is it still 1998? </p><p> All that being said, Flash (and more specifically Actionscript 3 and Flash Player) is actually really mature now and a decent piece of technology. It has most things a developer looks for in a language/runtime. And, well, it allows us to create a really rich and interactive experience that runs in your browser and doesn't require you to install anything. Obviously the business case here wins out over my whining.</p><p> I think I've spent enough time talking. 
Coming very soon, a useful post that contains lots of great technical info from the perspective of a C# junky diving head first into Flash. </p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-1488086621455267272009-06-26T00:00:00.000-07:002011-07-27T00:23:20.462-07:00Functional Optimistic Concurrency in C#<p>A few months ago Phil Haack wrote about how C# 3.0 is a <a href="http://haacked.com/archive/2009/02/15/the-functional-language-gateway-drug.aspx">gateway drug</a> to functional programming. (Yeah, that's how long ago I started writing this blog.) I couldn't agree more. I find myself solving problems using functional rather than imperative programming quite often nowadays. It's much more elegant for many problem spaces.</p><p>Before we go any further, here's the sample app used for this article. Even if you don't like my writing, you should play with it. Yeah, you! <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajMGNmZGNjNTMtYzQwMy00MWJkLTk3MDItODJhYzJjOGIyZGZi&hl=en_US">optimistic-concurrency.zip</a></p><p> One problem space that fits very well with functional patterns is in developing apps that have to use <a href="http://jdconley.com/blog/archive/2009/01/20/concurrency.-its-like-doing-the-dishes.aspx">optimistic concurrency</a> to maintain data consistency at scale. Here at Hive7 we build PvP games. In such games, multiple people and background processes are often affecting the same entity at the same time. We can't use coarse grained locks or high isolation levels in MS-SQL, or the whole game would come to a halt. 
Here's a common scenario in a game like <a href="http://corp.hive7.com/games/knighthood/">Knighthood</a>:<br /></p><blockquote style="height: 175px;"><p><img alt="" src="http://knight.fb.hive7.com/res/img/area/wall.png?63356112540" style="border: 0pt none ; float: left;" /> Multiple rival lords are attacking my Kingdom at once trying to steal my most prized vassal, my wife! My wall is staffed with a heavy defense, and my hospital has a strong set of medics healing my kingdom over time. But to keep a handle on the attack I also have to continuously spend gold to heal my defensive army.</p></blockquote><p> In this common use case there are a number of subtleties. First, multiple people are attacking me at once. That means they're doing damage to my defenses in real time, and at the same time. My hospital is healing my vassals over time. This occurs in a background process once every few minutes. And I'm triggering an instant heal to my defensive vassals using my gold supply. My Marketplace is also generating gold for me over time in another background process. To top it all off, this is happening across a cluster of application servers that are certain to be processing multiple requests simultaneously. Phew! </p><p> So what does all that mean? Well, basically, there are a lot of possibilities for change conflicts. And we have to deal with those conflicts to both keep a consistent data model and perform well. </p><p> There are a number of potential strategies for managing these change conflicts in the persistent store – a few beefy Microsoft SQL Server databases in our case. We chose to go with optimistic concurrency and an abort-on-conflict transaction strategy. That basically means when we write data to the database we make sure we are always writing the most recent version of a row. If an application attempts to write an old version of the row, the data access layer throws an exception and aborts the transaction. 
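To make the abort-on-conflict idea concrete, here's a toy in-memory sketch (illustration only, not our actual data layer): every read carries the row version it saw, and a write only lands if that version is still current.

```javascript
// Toy abort-on-conflict optimistic concurrency: a write succeeds only if the
// version it read is still current, like checking the rows-affected count of
// "UPDATE ... WHERE Version = @readVersion" and throwing when it comes back 0.
class VersionConflictError extends Error {}

class OptimisticStore {
  constructor(row) {
    this._row = { ...row, version: 0 };
  }
  read() {
    return { ...this._row }; // snapshot, including the version it saw
  }
  write(updated) {
    if (updated.version !== this._row.version) {
      // A concurrent writer got in first: abort rather than clobber.
      throw new VersionConflictError("stale row version");
    }
    this._row = { ...updated, version: updated.version + 1 };
  }
}
```

Of two concurrent writers, the first wins; the second sees VersionConflictError and has to retry from a fresh read.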
Knighthood uses <a href="http://www.hibernate.org/">NHibernate</a> so the validation is done for us automatically using a simple version number on the row. The basic algorithm is:<br /></p><ol><li>Read data and serialize into objects (done by NHibernate)</li> <li>Modify objects in code</li> <li>Tell NHibernate to persist the changes, which does the following<br /><ol><br /> <li>Increments the version number</li> <li>Finds all the changes and batches up insert/update calls</li> <li>Uses the version number in the WHERE clause of updates like: "UPDATE Table SET Col1='blah' WHERE Version=36"</li> <li>Checks the rows modified reported by SQL server and throws an exception if it's an unexpected number</li> </ol> </li></ol><p> As you can imagine, this fails regularly in a high concurrency scenario, but it succeeds orders of magnitude more often than not. It's also pretty standard for any web app nowadays. </p><p> The only problem is, to preserve consistency, an exception is thrown and the transaction is aborted when change conflicts occur. That means whatever request the application or user issued fails. We could show the user a friendly error message, but that would be a frustrating experience. Nobody likes seeing errors for non-obvious reasons. And in the case of headless software running in the background the error would just be in a log somewhere. If it's something important that needs to happen, then we have to make sure it gets done! So us imperative programmers devise a retry scheme and write a loop with an exception trap around our code. Maybe you get clever and create a class that does this which raises an event any time you need to execute your retry-able code. But, this gets pretty cumbersome. Enter functional programming! </p><p> We have a little class named DataActions that is used to simplify and consolidate this retry process and make it painless to use. I'm going to use LINQ to SQL as the example here. 
Here's some usage code: </p><pre class="brush: c-sharp;">DataActions.ExecuteOptimisticSubmitChanges&lt;GameDataContext&gt;(<br />dc =&gt;<br />{<br /> var playerToMod = dc.Players.Where(p =&gt; p.ID == playerId).Single();<br /> SetRandomGold(playerToMod);<br />});<br /></pre><p> As you can see it's really straightforward. Notice all the goodness going on there. We don't have to instantiate our own DataContext, manually submit the changes, or worry at all about transactions. It's all handled by the wrapper. And, you just have to provide some code to execute once the DataContext has been instantiated. </p><p> The ExecuteOptimisticSubmitChanges helper method itself is pretty simple as well: </p><pre class="brush: c-sharp;">public static void<br />ExecuteOptimisticSubmitChanges&lt;TDataContext&gt;(Action&lt;TDataContext&gt; action)<br /> where TDataContext : DataContext, new()<br />{<br /> Retry(() =&gt;<br /> {<br /> using (var ts = new TransactionScope())<br /> {<br /> using (var dc = new TDataContext())<br /> {<br /> action(dc);<br /> dc.SubmitChanges();<br /> ts.Complete();<br /> }<br /> }<br /> });<br />}</pre><p> And, finally, we have the Retry method: </p><pre class="brush: c-sharp;">public static void Retry(Action a)<br />{<br /> const int retries = 5;<br /> for (int i = 0; i &lt; retries; i++)<br /> {<br /> try<br /> {<br /> a();<br /> break;<br /> }<br /> catch<br /> {<br /> if (i == retries - 1) throw;<br /><br /> //exponential/random retry back-off<br /> var rand = new Random(Guid.NewGuid().GetHashCode());<br /> int nextTry = rand.Next(<br /> (int)Math.Pow(i, 2), (int)Math.Pow(i + 1, 2) + 1);<br /><br /> Thread.Sleep(nextTry);<br /> }<br /> }<br />}</pre><p> When you string all this together you get pseudo-stacks that look like: </p><pre>MyCode<br />ExecuteOptimisticSubmitChanges<br />Retry<br /> ExecuteOptimisticSubmitChanges<br /> MyCode<br /></pre><p> So, why should you care? The calling code is really easy to read, and you get a number of other benefits with this code. 
In addition to handling exceptions caused by concurrency errors, you also get retries on deadlocks and the more common SQL connection errors. </p><p> I put together a little sample application you can play with. It uses these helpers and has a SQL Database with it. The sample simulates really high concurrency and you can watch it deal gracefully with deadlocks. Then you can change line 29 of Program.cs and execute the same concurrent code without retries enabled. It outputs the number of failed transactions and a bunch of other interesting stuff to the console. Here's some example output: </p><pre> ...<br />Retrying after iteration 0 in 1ms<br />Retrying after iteration 0 in 0ms<br />Thread finished with 0 failures. Concurrency at 3<br />Retrying after iteration 1 in 3ms<br />Retrying after iteration 1 in 4ms<br />Thread finished with 0 failures. Concurrency at 2<br />Retrying after iteration 2 in 5ms<br />Thread finished with 0 failures. Concurrency at 1<br />Retrying after iteration 3 in 15ms<br />Thread finished with 0 failures. Concurrency at 0<br /><br />0 total failures and 7 total retries.<br />All done. Hit enter to exit.<br /></pre><p> And the same test run with retries disabled: </p><pre> ...<br />Starting worker. Concurrency at 8<br />Thread finished with 0 failures. Concurrency at 7<br />Thread finished with 0 failures. Concurrency at 6<br />Thread finished with 1 failures. Concurrency at 5<br />Thread finished with 1 failures. Concurrency at 4<br />Thread finished with 1 failures. Concurrency at 2<br />Thread finished with 2 failures. Concurrency at 3<br />Thread finished with 0 failures. Concurrency at 1<br />Thread finished with 2 failures. Concurrency at 0<br /><br />7 total failures and 0 total retries.<br />All done. 
Hit enter to exit.<br /></pre><p>Here's the download link again: <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajMGNmZGNjNTMtYzQwMy00MWJkLTk3MDItODJhYzJjOGIyZGZi&hl=en_US">optimistic-concurrency.zip</a></p><p>Let me know if you have any questions.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-66303364722100260472009-06-02T23:46:00.000-07:002011-07-23T10:17:04.088-07:00A new basket for my eggs<p>Hopefully after reading that title you're thinking of the old adage "Don't put all your eggs in one basket" and not something crude. Ok, I admit, either way it works for me. You're still reading.</p><p>Let me preface by saying I love what's going on at Hive7, but a guy's gotta have a side project. In fact I wrote about <a href="http://jdconley.com/blog/archive/2009/01/19/dont-hire-a-programmer-if-they-dont-code-for-fun.aspx">this phenomenon</a> a while back. And, in my mind, that side project might as well make me some lunch money.</p><p>For the last two years or so I've been really interested in digital photos and the untapped markets that lie within. In fact, I got introduced to Hive7 while trying to sell myself to an investor to get some angel funding in the space. I'm not a pro photographer wannabe or anything like that. I just think digital photos are a great medium for sharing life with friends and family. 
I have built <a href="http://www.facebook.com/apps/application.php?id=2466327790">Friend Photosaver</a> for Facebook (a screen saver using Facebook photos), <a href="http://www.facebook.com/apps/application.php?id=7515202213">Photo Feeds</a> for Facebook (automatic photo RSS creator for Facebook), and <a href="http://www.facebook.com/apps/application.php?id=5577408502">Photozap</a> (a tool to download Facebook photos as a zip file).</p><p>Those applications are all pretty cool, but didn't really strike me (or anyone else) as especially compelling. But, they did lead me down the path of building something that I think is pretty interesting.</p><p><strong>Everyone</strong> has a digital camera or cell phone camera. When you go to a social gathering of any sort there are usually tens to hundreds of photos taken. Think of weddings, birthdays, graduations, family bbq's, night clubs. . . . What happens to these photos? Someone copies them to their computer, or uploads them to a photo sharing web site. They send out links or maybe share the photos through a social network's tagging or posting features or some such. That's all fine and good, but I think there's more to be had.</p><p>Enter <a href="http://www.pixur.me/">pixur.me</a>. Quoting the about page:<br /></p><blockquote><p>Pixur.me is a different kind of online photo sharing service. Our mission is to focus on the person receiving photos, rather than the one taking them. There are a lot of great services where you can organize your own photos and share them with people, but we think that's only half of the equation.</p><p>Can you find all the cute pictures of your kids from your last family vacation? Or how about all the photos from your wedding that your guests took? Could your mother find those same photos?</p><p>You could if your family was using pixur.me! What if all the photos that everyone took at that last vacation or your wedding were in one spot? 
Even though Aunt Sue uses Flickr, and you use Facebook, and your mother uses Picasa. That's pixur.me. Create a Stream and see for yourself! Once your stream is created anyone can add photos to it, regardless of where they are stored online.</p></blockquote><br /><p>That's it. Another basket awaiting some eggs. <a href="http://www.pixur.me/">Give it a spin</a> and let me know what you think. Of course, it's not very interesting if you just use it by yourself. Create a stream and give out the link at your next gathering. Or maybe start a stream that your extended family can add photos to so grandma can see them all in one spot.</p><p>Oh yeah, I almost forgot this is a technical blog. This project started out as a technology experiment so it's built on Windows Azure and ASP.NET MVC. Very cool stuff. I'll have to write more about them later...</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-30397676052463295052009-03-12T12:30:00.000-07:002011-07-27T00:32:46.120-07:00ioDrive, Changing the Way You Code<strong>Introduction</strong><br /><div><p> In my lifetime there have been very few technologies that have created a paradigm shift in the software industry – I was born just after the spinning magnetic hard drive was created. Off the top of my head I can think of: the Internet (thanks Al!), optical disks, Windows, and parallel computing. From each of these technologies entirely new software industries were born and development methodology drastically changed. We're at the beginning of another such change, this time in data storage. 
</p><strong>Contents</strong><br /><ul><li><a href="#intro">Introduction</a></li> <li><a href="#random">Random Story for Context</a></li> <li><a href="#system">System Configuration</a></li> <li><a href="#test">Test Configuration</a></li> <li><a href="#results">Test Results</a></li> <li><a href="#conclusion">Conclusion</a></li></ul><a name="random"></a> <strong>Random Story for Context</strong><br /><p> At <a href="http://www.hive7.com/">Hive7</a> we make web based social games for platforms like Facebook and Myspace. We're a tiny startup, but producing a successful game on these platforms means we're writing code to deal with millions of monthly users, and thousands of simultaneous users pounding away at our games. Because our games are web based they're basically written like you'd write any other web application. They're stateless, with multiple RDBMS back end servers for most of the data storage. Game state is pretty small so we don't really store that much data per user. We don't have Google sized problems to solve or anything. Our main problem is with speed. </p><p> When you're surfing the web you want it to be fast but can live with a page taking a few seconds to load here and there. When you're playing a game, on the other hand, you want instant gratification. A full second is just way too long to wait to see the results of your action. Your character's life might be on the line! </p><p> To accomplish this speed in our games we currently buy high end commodity hardware for our database servers, and have a huge cluster of memcached that we tap into. It works. 
But, properly implementing caching is complex. And those DB servers are big 3U power hungry monsters! Here's a typical disk configuration of one of our DB servers: </p><br /><a href="http://4.bp.blogspot.com/-GHkiqR-vS1A/Tir-P11cunI/AAAAAAAAAEo/Ogu_o9Wt6PQ/s1600/h7raidsetup.png"><img style="cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/-GHkiqR-vS1A/Tir-P11cunI/AAAAAAAAAEo/Ogu_o9Wt6PQ/s1600/h7raidsetup.png" alt="" id="BLOGGER_PHOTO_ID_5632593832082979442" border="0" /></a><br /><p> Each of those drives is a 15k RPM 72 GB SAS drive (or whatever the fastest is at the time of build). And the RAID controllers are very high end with loads of cache. And here's the kicker! We can only use about 25% of the capacity of these arrays before the database write load gets too high and performance starts to suffer. They cost us about $10k apiece. Sure, there are much more complex architectures we could use to gain performance. Or we could spend a few hundred grand and pick up a good SAN of some sort. Or we could drop some coin for <a href="http://www.youtube.com/watch?v=96dWOEa4Djs&amp;fmt=22">Samsung SSD's</a>. But, those options are a bit above the price we want to pay for our hardware, not to mention the necessary rack space and power requirements. </p><p> Enter the <a href="http://www.fusionio.com/Products.aspx">ioDrive</a>. With read/write speeds that are very close to the 24 SSD monster that Samsung recently touted, at a way lower price, I have a hard time imagining choosing the 24 drive option. Maybe if you had massive storage requirements, but for pure performance you can't beat the ioDrive price/performance ratio right now. I don't remember if I'm allowed to comment on pricing, but you can contact a sales rep at Fusion-io for more info. </p><p> Last month we picked up one of these bad boys for testing. In summary, "WOW!" 
I spent a few hours this week putting the ioDrive through the wringer and comparing it to a couple of different disk configurations in our datacenter. My main goal was to see if this was a viable option to help us consolidate databases and/or speed up existing servers. </p><a name="system"></a> <strong>System Configuration</strong><br /><p> <em>ioDrive System (my workstation)</em><br /></p><ul><li>Windows Server 2008 x64 Standard Edition</li><li>4 CPU Cores</li><li>6 GB RAM</li><li>80 GB ioDrive</li><li>Log and Data files on same drive</li></ul><p> <em>Fast Disk System</em><br /></p><ul><li>Windows Server 2008 x64 Standard Edition</li><li>8 CPU Cores</li><li>8 GB RAM</li><li>16 15k RPM 72 GB SAS Drives (visualization above)</li><li>Log and Data files on different arrays</li></ul><p> <em>Big and Slow Disk System</em><br /></p><ul><li>Windows Server 2008 x64 Standard Edition</li><li>4 CPU Cores</li><li>8 GB RAM</li><li>12 7200 RPM 500 GB SATA Drives</li><li>Log and Data files on different arrays</li></ul><a name="test"></a> <strong>Test Configuration</strong><br /><p> For this test I used <a href="http://support.microsoft.com/kb/231619">SQLIOSim</a> with two five-minute test runs. We were really only interested in simulating database workloads. If you want a more comprehensive set of tests check out <a href="http://www.tomshardware.com/search.php?s=iodrive&amp;x=0&amp;y=0">Tom's Hardware</a>. I should also mention that this was obviously not a test of equals. Both disk based systems have a clear RAM advantage and the fast disk system has a clear CPU advantage. The hardware chipsets and CPU's are also slightly different, but they're the same generation of Intel chips. In any case, when you see the results you'll see how this had a negligible effect. We're talking orders of magnitude differences in performance here... </p><p> I ran two different configurations through SQLIOSim. One was the "Default" configuration that ships with the tool. 
It represents a pretty typical load on a SQL Server disk system for a general use SQL server. The other was one I created called "Write Heavy Memory Constrained". The write heavy one was designed to simulate the usage in a typical game, where, due to caching, we have way more writes than reads to a database. Also, the write heavy one is much more parallel. It uses 100 simulated simultaneous random access users where the default one has only 8. And, with the write heavy one there is no chance the entire data set can be cached in memory. It puts a serious strain on the disk subsystem. </p><p> I took the output from SQLIOSim and imported it into Excel to do some analysis. I was primarily concerned with two metrics: IO Duration and IO Operation count. These two things tell me all I need to know. First, how long does it take the device to perform IO on average, and how many can it get done in the given time period. </p><a name="results"></a> <strong>Test Results</strong><br /><br /><em>Write Heavy Memory Constrained Workload</em><br /><table> <tbody> <tr> <th>Metric</th> <th>ioDrive</th> <th>Slow Disks</th> <th>Fast Disks</th> </tr> <tr> <td>Total IO Operations</td> <td> 10,625,381 </td> <td> 1,309,673 </td> <td> 3,260,725 </td> </tr> <tr> <td>Total IO Time (ms)</td> <td> 17,625,337 </td> <td> 1,730,147,612 </td> <td> 356,839,912 </td> </tr> <tr> <td>Cumulative Avg IO Duration (ms)</td> <td> 1.66 </td> <td> 1,321.05 </td> <td> 109.44 </td> </tr> </tbody></table><br /><a href="http://1.bp.blogspot.com/-50RJkZbVPSM/Tir-Pj5ehBI/AAAAAAAAAEg/yUlckCQrtHo/s1600/mc-avgtime.png"><img style="cursor:pointer; cursor:hand;" src="http://1.bp.blogspot.com/-50RJkZbVPSM/Tir-Pj5ehBI/AAAAAAAAAEg/yUlckCQrtHo/s1600/mc-avgtime.png" alt="" id="BLOGGER_PHOTO_ID_5632593827268035602" border="0" /></a><br /><p>Wow, 100x faster IO's on average!</p><a href="http://4.bp.blogspot.com/-uT4Gn1wIwOI/Tir-Pmtt6lI/AAAAAAAAAEY/NCrG1UXlhss/s1600/mc-time.png"><img style="cursor:pointer; cursor:hand;" 
src="http://4.bp.blogspot.com/-uT4Gn1wIwOI/Tir-Pmtt6lI/AAAAAAAAAEY/NCrG1UXlhss/s1600/mc-time.png" alt="" id="BLOGGER_PHOTO_ID_5632593828024019538" border="0" /></a><br /><p>Over 20x less time spent doing IO operations!</p><a href="http://2.bp.blogspot.com/-CGQtyzSmPkg/Tir-Pftgf1I/AAAAAAAAAEQ/JC6CMxU23Ps/s1600/mc-ops.png"><img style="cursor:pointer; cursor:hand;" src="http://2.bp.blogspot.com/-CGQtyzSmPkg/Tir-Pftgf1I/AAAAAAAAAEQ/JC6CMxU23Ps/s1600/mc-ops.png" alt="" id="BLOGGER_PHOTO_ID_5632593826144091986" border="0" /></a><br /><p>And over 3x more operations performed. This would have been way higher, but the ioDrive system was CPU constrained, taking 100% CPU. Looks like we'll be loading up at least 8 cores in any database servers we build with these cards!</p><em>Default Workload</em><br /><table><tbody><tr><th>Metric</th> <th>ioDrive</th> <th>Slow Disks</th> <th>Fast Disks</th> </tr> <tr> <td>Total IO Operations</td> <td> 690,753 </td> <td> 287,180 </td> <td> 456,300 </td> </tr> <tr> <td>Total IO Time (ms)</td> <td> 3,616,903 </td> <td> 231,859,576 </td> <td> 93,991,055 </td> </tr> <tr> <td>Cumulative Avg IO Duration (ms)</td> <td> 5.24 </td> <td> 807.37 </td> <td> 205.99 </td> </tr> </tbody></table><br /><a href="http://3.bp.blogspot.com/-l-Zsp3t9Dkc/Tir-PdQhKEI/AAAAAAAAAEI/D2zDlYB5QJo/s1600/d-avgtime.png"><img style="cursor:pointer; cursor:hand;" src="http://3.bp.blogspot.com/-l-Zsp3t9Dkc/Tir-PdQhKEI/AAAAAAAAAEI/D2zDlYB5QJo/s1600/d-avgtime.png" alt="" id="BLOGGER_PHOTO_ID_5632593825485629506" border="0" /></a><br /><p>40x faster on average in this workload! 
Looks like the bulk operations and larger IO's present in this workload narrowed the gap a bit.</p><a href="http://4.bp.blogspot.com/-FEJo94sGJVE/Tir95RNy5EI/AAAAAAAAAEA/UT_PJzePCpg/s1600/d-time.png"><img style="cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/-FEJo94sGJVE/Tir95RNy5EI/AAAAAAAAAEA/UT_PJzePCpg/s1600/d-time.png" alt="" id="BLOGGER_PHOTO_ID_5632593444295861314" border="0" /></a><br /><p>This time, a little under 30x less time spent doing IO operations!</p><a href="http://3.bp.blogspot.com/-GrTWHOU01sk/Tir9tbUxC6I/AAAAAAAAAD4/nKX9N_KphUY/s1600/d-ops.png"><img style="cursor:pointer; cursor:hand;" src="http://3.bp.blogspot.com/-GrTWHOU01sk/Tir9tbUxC6I/AAAAAAAAAD4/nKX9N_KphUY/s1600/d-ops.png" alt="" id="BLOGGER_PHOTO_ID_5632593240851024802" border="0" /></a><br /><p>Only 1.5x more total operations this round. This time we weren't CPU constrained, and I didn't take the time to dig in to the "why" on this one. Based on the raw data I would guess this is caused by IO blocking a lot more often for ioDrive than the fast RAID system. This probably has to do with the caching system in the RAID cards under this mixed write workload. You'll notice if you look at the raw report, that the ioDrive has no read or write cache at the device level. It doesn't really need it.</p><p> In case you want to see the raw data or the SQLIOSim configuration files, you can download the package here: <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajZjJkNjNkZjItYmRhMS00ZWFlLTlhM2ItMGVkNzZjOGM2NzNk&hl=en_US">ioDrive Test Results</a> </p><a name="conclusion"></a> <strong>Conclusion</strong><br /><p> Wow! ioDrive is going to be scary fast in a database server, especially when it comes to tiny random write IO's, parallelism, and memory constraints. I think we'll be seeing a lot of new interesting software development and system architectures due to this type of technology. The industry is changing. 
You no longer need either tons of cache (or cash) or tons of RAM to get great performance out of your data store. We're talking 100x better performance than our fast commodity arrays. I think it's safe to say we'll be using these devices in production in the near future. Since this device is currently plugged into my workstation, maybe I'll post another review about how it's improving my development productivity so you can convince your boss to buy you one. :) </p></div>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-81434394529658289952009-02-09T19:14:00.000-08:002011-07-23T09:45:13.050-07:00You might be a great hacker if you...<p> A number of years ago I was doing some <a href="http://www.kieferconsulting.com/Pages/Mentoring.aspx">mentoring</a> at a California state agency that shall remain nameless. I got my butt up in time to be in their office at 8am. (Ok, I'll be honest, usually I got up in time. I was late on a few occasions.) I led them down the path of learning ASP.NET from scratch. Together we built a great product that is still in use today on a highly trafficked web site. Some time late in the mentoring project a student came up to me and asked the strangest question. He wanted to know how I learned everything I was teaching them. He wanted to take the same classes. </p><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-xcrMjX287RQ/Tir6EQNbqyI/AAAAAAAAADw/JgpDDqXQuC0/s1600/nails.jpg"><img style="float:left; margin:0 8px 0 0;cursor:pointer; cursor:hand;width: 300px; height: 189px;" src="http://1.bp.blogspot.com/-xcrMjX287RQ/Tir6EQNbqyI/AAAAAAAAADw/JgpDDqXQuC0/s400/nails.jpg" alt="" id="BLOGGER_PHOTO_ID_5632589234957953826" border="0" /></a><br /><br />The guy was an amazing engineer. He was methodical, had great documentation, dotted all his i's and crossed all his t's. 
But he wasn't a <a href="http://www.paulgraham.com/gh.html">great hacker</a>. He was a bit slow, and didn't have much creativity. It was around that time I started paying more attention to traits of great hackers, before Paul coined the term. The bastard! At that time I was really just looking for people that learned quickly and could get things done faster than the rest. Some day maybe I'll figure out how to be all witty and important and coin terms.<br /><p> So, here we go, in no particular order – except the first one, which is the obvious transition from my enticing story above:<br /> </p><ul><li><strong>started off as a script kiddie</strong><br /> <p> A typical scenario looks something like the following, though could occur in any area (not just video games):</p><ol><li>Play Quake 2</li> <li>Get stupidly good at Quake 2</li> <li>Get bored with Quake 2</li> <li>Figure out how to cheat by writing scripts to rocket jump or speed hack</li> <li>Realize what you just did was coding, and it's fun and amazingly rewarding</li> <li>Change all direction in life (yes, even when you're 12 years old) so you can do more coding</li><br /> </ol> </li> <li><strong>often forget to shave</strong><br /> <p> "Wait a minute, you mean people do their <em>own</em> laundry?" Yes, you are exceptionally lazy. That's ok. You have more important things to do in life than worry about how you smell/look. </p> </li> <li><strong>have ever worked on a project for 24 hours straight</strong><br /> <p> "Hold on! You don't work without sleep until a problem is solved?" And no, last minute procrastination doesn't count, and neither do production outages. Everyone's been there. I'm talking about all night sessions working to solve a problem that could have been done over a few normal working days </p> </li> <li><strong>instantly quiet a room when you speak</strong><br /> <p> You don't talk much. You spend most of your time listening to others. 
You find idle chit chat to be boring and have quantitatively determined it's a waste of time. Maybe it's because you don't have much to say. But, what I think is more likely is that you only say things that are relevant and important. Thus, when you speak, the room listens. A room full of strangers doesn't count. They won't know you only say important things until you've trained them that way. </p> </li> <li><strong>have ever gone to visit a friend and proceeded to ignore them because you must finish that stupid puzzle they had on their coffee table before putting it down</strong><br /> <p> See photo at top of post. </p> </li> <li><strong>get asked to help debug other people's code</strong><br /> <p> There's a certain amount of pride a developer has over their code. No matter how logical it is to call someone in for help, it's always the last thing we do. If you're the guy people call for help, you're on the right track. </p> </li> <li><strong>are naturally good at video games</strong><br /> <p> Ever pick up a game, and within minutes beat or come really close to beating, someone who's been playing it for months? This is a sure fire sign of your analytical and problem solving skills. Come to think of it, I think I'm going to start adding this to my interview process. </p> </li> <li><strong>use every operating system in existence</strong><br /> <p> Sure, you think Windows sucks, but you use it because you play games on it and deep inside you know it doesn't really suck much worse than the competition. You know Linux is the best (DUH!) but you play with FreeBSD. You have OSX running on your laptop because those big icons and MacBooks are sexy. But really, it's more about curiosity than anything. </p> </li> <li><strong>make a habit of picking up a new technology over the weekend</strong><br /> <p> Lego Mindstorms, anyone? Oooh, how about Adobe AIR or Microsoft Azure or iPhone development. You catch my drift. 
</p> </li> <li><strong>are extremely critical of everything</strong><br /> <p> You find fault in everything from your takeout food to web sites to world economic systems. The world is an imperfect mess that needs to be cleaned up. And, of course, you could do it with a weekend and your new favorite development platform (that you haven't used yet)! </p> </li></ul><p><br /></p><p>This is all I could come up with in the time I set aside for this blog post. So what do you all think? What am I missing? I'll update the post with your ideas as they come in, if they don't suck. </p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-65813870701079286662009-02-02T16:27:00.002-08:002011-07-27T01:14:31.571-07:00Abstracting Away Azure: How to Run Outside of the Cloud<p> I had a lot of fun over our holiday break this December working on prototype projects for up and coming technologies. One of those projects dealt with Windows Azure, or, the Azure Services Platform. Azure is basically a cloud application hosting environment put together by Microsoft. The idea is, you build your web apps in .NET and publish them to the nebulous cloud. Once in the cloud they scale and perform well and you don't have to deal with any of the headaches of managing things at the OS/System level. </p><p> But with the recent <a href="http://www.microsoft.com/presspass/press/2009/jan09/01-22fy09Q2earnings.mspx">economic news</a> out of Redmond I've been wondering about the future of its more experimental CTP/Alpha/Omega/Whatever-They-Call-It projects such as Azure. If you're not familiar with the project, I suggest you <a href="http://www.microsoft.com/azure/default.mspx">venture on over</a> and check it out now. </p><p> Unlike other cloud hosting platforms out there, with Azure you don't have to maintain the operating system. 
Not only do you get the benefits of cloud computing, but you don't even need a system administrator to run the thing. Of course, the fact that you don't have control of the operating system has its drawbacks. </p><p> With Azure you can't run unmanaged code, you're stuck in Medium trust, and you can only build a port 80/443 HTTP application. If you want to run memcached or Velocity or streaming media codecs, well, you can't. If you want to host a game server that communicates with UDP or some non-HTTP protocol, you can't do that either. But, for most custom web applications, everything you need is there. They host a "database" and a queue service for you, you can run background services, and you even get a shared logging service. </p><p> All of the services they provide seem to work as advertised and are promised to be extremely scalable. But, one thing they don't talk about (and I can't say I blame them) is how you might run your applications if they're not hosted in the cloud. In our company this just isn't acceptable. If we put out a game and our hosting provider ceases to exist, or no longer meets our needs, we had better be able to move to a new hosting provider! So, I'll give you some tips based on my experiences building prototype Azure applications on how you can easily design your applications to run outside of the cloud. </p><strong>The Main Azure Features</strong><br /><ul><li>Table storage</li> <li>Queue services</li> <li>Blob storage</li> <li>Logging</li> <li>Background services (Worker Role)</li></ul><strong>Table/Queue/Blob</strong><br /><p> Abstracting away tables, queues, and blobs is fairly simple but takes a bit of up front planning. You do basically the same thing you'd do if you were building an application on a large team that is designed to work with any data storage back end. 
At a high level: </p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-pKo40eF-KKY/Tir4i11WkRI/AAAAAAAAADo/Oqkf0TMjp54/s1600/highlevel.png"><img style="cursor:pointer; cursor:hand;width: 337px; height: 221px;" src="http://2.bp.blogspot.com/-pKo40eF-KKY/Tir4i11WkRI/AAAAAAAAADo/Oqkf0TMjp54/s400/highlevel.png" alt="" id="BLOGGER_PHOTO_ID_5632587561430323474" border="0" /></a><br /><p> In order to maintain the abstraction it's very important that your UI and background services don't interact directly with the Azure services. First off, use <a href="http://en.wikipedia.org/wiki/Data_Transfer_Object">DTO</a> entities. If all else fails and your new back end storage isn't compatible with Azure, you can always fall back to re-writing the layer that talks to it and you don't have to change any of your UI code. <strong>Do not</strong> expose the PartitionKey and RowKey values on your DTO entities. Leave the partitioning scheme as an implementation detail of your Service/Model layer. It will change if you have to move your data into Amazon's <a href="http://aws.amazon.com/simpledb/">SimpleDB</a>, for example. Since Azure Table Storage uses the <a href="http://msdn.microsoft.com/en-us/library/aa697427%28VS.80%29.aspx">ADO.NET Entity Framework</a> at the core, there actually isn't much you need to do to the entities in order to make them portable to other Table-like storage systems. Also, the Blob and Queue storage services are quite simple and abstracting their interface is a matter of tens of lines of code. </p><p> Create interfaces for the layer that the UI communicates with and use a <a href="http://en.wikipedia.org/wiki/Dependency_injection">dependency injection</a> (DI) framework such as <a href="http://structuremap.sourceforge.net/">StructureMap</a> or <a href="http://www.castleproject.org/">Castle</a> to inject your implementations that communicate with Azure. 
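At this level it's only a few lines of code. A minimal sketch (the names are invented for illustration; only the Azure-specific class knows about table storage):</p><pre class="brush: c-sharp;">// The UI codes against this interface; no Azure types leak through it.<br />public interface IPlayerStore<br />{<br /> Player GetPlayer(string playerId);<br /> void SavePlayer(Player player);<br />}<br /><br />// One implementation talks to Azure Table Storage. A SQL (or SimpleDB)<br />// implementation can be swapped in later via the DI container.<br />public class AzureTablePlayerStore : IPlayerStore<br />{<br /> public Player GetPlayer(string playerId) { /* query table storage */ return null; }<br /> public void SavePlayer(Player player) { /* insert or update the entity */ }<br />}</pre><p>Swapping storage back ends then becomes a container configuration change rather than a UI rewrite. 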
</p><p>I use StructureMap on a day-to-day basis, and I was disappointed that it didn't work out of the box. I had to make a couple of modifications to the source to get it to run under medium trust. First, you need to add an AllowPartiallyTrustedCallersAttribute to the assembly and then remove the security assertion that's asserting the right to read the machine name (you don't have access to the machine name in medium trust). You can download my updated version here (patch and binary): <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajNzVkNjlmNGMtYTAxNS00MGU0LTk0MjctN2U4Yzg3NTMxYmRh&hl=en_US">StructureMap-2.5-PartialTrust.zip</a> </p><p> That's it. With your UI not talking directly to the Azure services you'll have an extra layer of code to maintain, but you'll be thankful if you ever need to pull it out of the cloud. </p><strong>Logging</strong><br /><p> For all my non-Azure projects I use <a href="http://logging.apache.org/log4net/">log4net</a> for logging. It's a simple, flexible, open-source logging engine. You might want to use Enterprise Library. Whatever. Just like with the storage engines the key to being able to move off of the Azure logging service some day is to not use it in your applications directly. I wrote a little Appender plugin for log4net that writes logs to the Azure RoleManager if the app is loaded into the Azure context. Most of the code is mapping the multitude of log4net log levels to the Azure event log names. 
Here's the code: </p><pre class="brush: c-sharp;">public class AzureRoleManagerAppender<br />: AppenderSkeleton<br />{<br />public AzureRoleManagerAppender()<br />{<br />}<br /><br />public AzureRoleManagerAppender(ILayout layout)<br />{<br /> Layout = layout;<br />}<br /><br />protected override void Append(log4net.Core.LoggingEvent loggingEvent)<br />{<br /> if (null == Layout)<br /> Layout = new log4net.Layout.SimpleLayout();<br /><br /> var sb = new StringBuilder();<br /> using (var sr = new StringWriter(sb))<br /> {<br /> Layout.Format(sr, loggingEvent);<br /> sr.Flush();<br /><br /> if (RoleManager.IsRoleManagerRunning)<br /> RoleManager.WriteToLog(GetEventLogName(loggingEvent), sb.ToString());<br /> else<br /> System.Diagnostics.Trace.Write(sb.ToString(), GetEventLogName(loggingEvent));<br /> }<br />}<br /><br />protected virtual string GetEventLogName(LoggingEvent loggingEvent)<br />{<br /> if (loggingEvent.Level == Level.Alert)<br /> return "Critical";<br /> else if (loggingEvent.Level == Level.Critical)<br /> return "Critical";<br /> else if (loggingEvent.Level == Level.Debug)<br /> return "Verbose";<br /> else if (loggingEvent.Level == Level.Emergency)<br /> return "Critical";<br /> else if (loggingEvent.Level == Level.Error)<br /> return "Error";<br /> else if (loggingEvent.Level == Level.Fatal)<br /> return "Critical";<br /> else if (loggingEvent.Level == Level.Fine)<br /> return "Information";<br /> else if (loggingEvent.Level == Level.Finer)<br /> return "Information";<br /> else if (loggingEvent.Level == Level.Finest)<br /> return "Information";<br /> else if (loggingEvent.Level == Level.Info)<br /> return "Information";<br /> else if (loggingEvent.Level == Level.Notice)<br /> return "Information";<br /> else if (loggingEvent.Level == Level.Severe)<br /> return "Critical";<br /> else if (loggingEvent.Level == Level.Trace)<br /> return "Verbose";<br /> else if (loggingEvent.Level == Level.Verbose)<br /> return "Verbose";<br /> else if 
(loggingEvent.Level == Level.Warn)<br /> return "Warning";<br /> else<br /> return "Information";<br />}<br />}</pre><p> Then you just configure log4net as usual, and go on your merry way. Write your logs to log4net rather than to the Azure log manager. </p><pre class="brush: c-sharp;">&lt;log4net&gt;<br />&lt;appender name="azure" type="AzureRoleManagerAppender,MyAssembly"&gt;<br />&lt;layout type="log4net.Layout.PatternLayout"&gt;<br /> &lt;conversionPattern value="%logger - %message" /&gt;<br />&lt;/layout&gt;<br />&lt;/appender&gt;<br /><br />&lt;root&gt;<br />&lt;level value="ALL" /&gt;<br />&lt;appender-ref ref="azure" /&gt;<br />&lt;/root&gt;<br />&lt;/log4net&gt;</pre><pre class="brush: c-sharp;">private ILog _log = LogManager.GetLogger(typeof(WorkerRole));<br /><br />...<br /><br />_log.Info("Starting worker process");</pre><br /><strong>Background Services</strong><br /><p> Background services (Worker Roles) are basically Windows Services. The key difference, though, is in the behavior of the Start method. In Windows Service land you're expected to exit the Start method when the service has started. In Azure, the Start method is more like a Main, and when it exits, Azure assumes your service has completed its task and restarts it. I'd just write all your code in your RoleEntryPoint and not worry about any abstraction for the Worker Role. It's simple enough to just refactor and move to a Windows Service model if need be. But, just like in your UI, don't communicate directly with Azure back-end services like Table, Queue, and Blob storage. </p><p>So there you have it. The basics of abstracting away Azure. I don't think Microsoft plans on canceling this project any time soon, but if they do (or you want to host elsewhere) you'll be ready! 
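 </p><p>To make the Start-behaves-like-Main point concrete, the Worker Role skeleton ends up looking something like this (a rough sketch against the early SDK; ProcessNextWorkItem is made up):</p><pre class="brush: c-sharp;">public class WorkerRole : RoleEntryPoint<br />{<br /> public override void Start()<br /> {<br /> // Unlike a Windows Service's OnStart, never return from here.<br /> // If Start exits, Azure assumes the role is done and restarts it.<br /> while (true)<br /> {<br /> ProcessNextWorkItem(); // talks to storage through your abstraction layer<br /> Thread.Sleep(1000);<br /> }<br /> }<br />}</pre><p>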
I, for one, am really excited about the future potential of Azure and we may even use it here, but we will be designing our applications so they can easily be ported to a different platform just in case.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-73144561034849899632009-01-26T12:58:00.000-08:002011-07-23T09:23:32.911-07:00Put down the abstract factory and get something done<p> It seems as a group, we programmers have our priorities screwed up. Programmers value clean, concise code. Code that requires no documentation. Code that perfectly uses design patterns and best practices. Code that other programmers will look at and think "wow, I wish I was as l33t as this guy." </p><p> But, let's get real. That stuff <em>doesn't matter</em>. </p><p> Why do you write code? Well, chances are someone pays you to do it. Of course the best programmers also love what they do and do it for fun. But at the end of the day it's your profession. Maybe you're (un)lucky enough to be doing it for yourself in your own startup. </p><p> In the early years of a startup only one thing should matter to a programmer: shipping your product to meet your customers' needs. Everything else we do is simply a result of this, the humblest of goals. Without a product people want, you have no revenue. Without the revenue you have no company. Without the company, well, you get the idea. </p><p> Startups are widely considered the purest form of a company. You exist to meet a perceived need with a pretty small scope. There aren't layers of management or TPS reports to get in the way of getting things done. The only barrier to getting something done is yourself. No excuses. It is in this environment where an engineer's need for perfection must be replaced with a hacker's passion to get things done, and get them done fast. Nothing else matters. </p><p> Maintainability isn't a factor. Best practices don't matter. 
Design patterns don't matter. All that matters is getting things done. Don't worry about scalability until you have to. Instantiate that object. Who cares about the factory? Skip the interface, and create a static class. Someday if you need the interface come back and <em>re-factor your code</em>. With the power of the IDE these days re-factoring is a lot less scary than it used to be. </p><p> This may sound short-sighted, and it is. In fact, that's the point. Who knows if the company will even exist in a year to have anything to maintain. Projects change. You have to adapt. You will never know how your code will be used 5 years from now. Stop thinking about it. 5 years ago did you think you'd be integrating your [random business application] with this Facebook thing? I bet you've thought about it now. Not to mention, 5 years from now it's likely the entire programming paradigm will have changed. Were they thinking about AJAX when they designed ASP.NET? How about 3D graphics in desktop applications when the window message pump was developed? Such is life in our fast-paced world. No amount of overly designed or perfectly formatted code will change it. </p><p> If you find yourself maintaining this horribly designed, hacked-together legacy code from the early days of a company, be thankful and bask in its glory. Without that spaghetti nightmare you wouldn't have that job. It was that short-sighted thinking that was able to get something done and create a profitable product/company. </p><p> Of course, I'm not advocating you just toss all your code in a button's click event or anything that silly. Be smart, organize things well, but don't waste time overly designing code to be flexible. If you have to spend more than a couple of hours sketching out your design, it's probably too complicated. Write some code. Re-factor it if you need to. 
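 </p><p>For example (a contrived sketch, not from a real project), the get-it-done version is just:</p><pre class="brush: c-sharp;">// Good enough for today. No ITaxStrategy, no TaxStrategyFactory.<br />public static class TaxCalculator<br />{<br /> public static decimal Calculate(decimal subtotal)<br /> {<br /> return subtotal * 0.0825m; // hard-coded rate; extract it when you actually need to<br /> }<br />}</pre><p>If a second tax scheme ever shows up, extract the interface then. 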
You don't need a proper <a href="http://en.wikipedia.org/wiki/Representational_State_Transfer">RESTful architecture</a>, or a perfect <a href="http://en.wikipedia.org/wiki/Domain-driven_design">DDD</a>. Your application isn't going to change from Microsoft SQL to MySQL someday. </p><p> Alright, I'll admit it. If you're building enterprise server products, or work on a large team, or are building framework products for developers to use, then ignore everything I've said. Of course, then I'd question why you're a startup in that position in the first place.... </p><p> So I urge you, especially if you're in a startup, to put down the abstract factory and get something done. </p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-38454815211149175212009-01-20T10:22:00.000-08:002011-07-23T09:21:45.657-07:00Concurrency. It's like doing the dishes<p>Since we moved to Palo Alto I've had the luxury of walking to work every day. Usually that's where I do my deep thinking. By the time I cruise by the Whole Foods it's really easy to ignore the activist-of-the-day petitioning something about global warming. But yesterday was different.</p><p>My walks to and from work were pretty normal. When I got home I decided to clean up a bit around the house, was doing the dishes, and had an odd moment of clarity. I threw down the sponge and ran over to my laptop to jot this down.</p><p>Usually I'm at a loss for analogy when explaining how concurrency works to a developer who has never had to deal with it before. So I throw out all kinds of highly technical terms and their eyes glaze over. 
But you know, it's actually really simple.</p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-O1eN8UnIImA/Tir0qDHYbnI/AAAAAAAAADg/WKH-qj0FeLg/s1600/160px-Dirty_dishes.jpg"><img style="float:left; margin:0 8px 0 0;cursor:pointer; cursor:hand;width: 160px; height: 120px;" src="http://1.bp.blogspot.com/-O1eN8UnIImA/Tir0qDHYbnI/AAAAAAAAADg/WKH-qj0FeLg/s400/160px-Dirty_dishes.jpg" alt="" id="BLOGGER_PHOTO_ID_5632583287208177266" border="0" /></a>Managing concurrency is like doing the dishes. You can hand wash everything and be sure it gets cleaned perfectly every time, or you can stick the dishes straight into the dishwasher and take your chances. Most of the time everything will come out clean, but every couple of loads you'll get a dish you need to wash again. Going straight into the dishwasher is way faster, and you can even do more than one dish at a time (assuming you have two hands).<br /><p>If you want the technical description, I leave that as an exercise to the reader. Here's a <a href="http://en.wikipedia.org/wiki/Optimistic_concurrency_control">Wikipedia</a> article. And another over at <a href="http://msdn.microsoft.com/en-us/library/aa0416cz%28VS.71%29.aspx">Microsoft</a> that's specific to database concurrency. See, told ya it's like doing the dishes.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-41356658831356532492009-01-19T10:39:00.000-08:002011-07-23T09:17:44.958-07:00Don't hire a programmer if they don't code for fun<p> I'm not the first person to talk about this paradigm and I won't be the last. Every single programmer I've seen that is <em>exceptionally good at their job</em> also does it for fun. They have an itch. It must be scratched. No matter how fun and lenient the workplace, they always have their own project to work on. Their own passion. 
But, I think there is more to it, or I'd just cite some previous articles and be done with it. </p><p> But first, what do I mean by <em>exceptionally good at their job</em>? Well, Steve McConnell has made quite a name for himself in recent history bringing forth the research on <a href="http://blogs.construx.com/blogs/stevemcc/default.aspx">10x Software Development</a>. This is the level of exceptional I'm talking about. The guy that takes a 10-minute set of verbal requirements, extrapolates, and builds a Web 4.0 Whooziwhatsit in a day, before you even know what Web 4.0 is. Paul Graham calls these guys <a href="http://www.paulgraham.com/gh.html">great hackers</a>, Joel Spolsky says they're <a href="http://www.joelonsoftware.com/items/2007/06/05.html">smart and get things done</a>. We just call them rock stars. </p><p> But, to possibly be a rock star, it's not enough for the programmer to just have a side project. The side project has to be fun (for them). Maybe they get a kick out of programming Lego Mindstorms to walk their chinchilla or creating an app for their mobile phone that synthesizes unique farting noises for ring tones based on the names in their address book. Whatever it is, they should be doing it for the pure joy they get by flexing their creative muscle. </p><p> Next time you're doing a phone interview ask the candidate about side projects early in the call. Dig in a little bit. Expect the rock star to change her mood and instantly become a lot more talkative. The passion will be self-evident. If it isn't, this person isn't your rock star. </p><p> Obviously fun coding projects aren't the only indicator of a rock star, but they're a good way to filter out programmers that just do it for a paycheck. 
</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-76403073914663231372009-01-16T22:46:00.000-08:002011-07-23T09:15:59.616-07:00ASP.NET MVC sucks and so does jQuery and PHP<p> Apparently, saying something sucks gets you a lot of hits. I think I'll use this tactic more often. My post on <a href="http://www.jdconley.com/blog/archive/2009/01/12/10-reasons-asp.net-webforms-suck.aspx">10 Reasons ASP.NET Webforms Suck</a> has been quite the talk in our tiny little .NET blog world this week. Who knew you all had such strong opinions on the matter! </p><strong>ASP.NET doesn't <em>completely</em> suck</strong><br /><p>Saying something sucks doesn't mean it isn't good enough, or isn't the best option. In my mind everything sucks. My cell phone sucks, my laptop sucks, my operating system sucks, my car sucks. They all need improvements. They are all far from perfect. It is this mindset that drives me to build better software. If you can't see the flaws around you how can you improve on them? ASP.NET 4.0 didn't come around for fun. It came around because 3.5 sucks and needs improvement, and so on, and so forth.</p><p> Quite a few of you wrote some great rebuttals. Some were utter nonsense, but hey, this is the internet. That's to be expected. I'd like to talk about my favorite comments: </p><blockquote> "It's obviously not a perfect design, but, it did it's job." – Robert Sweeney<br /><br /><br /></blockquote><p> Indeed, my thoughts exactly. It did its job. The internet has moved along really fast. Webforms are lagging behind a bit. Sure, it's still perfect for RAD business type applications. But build a web game on it, or a "Web 2.0" web site, or other consumer facing web product. The level of customization you end up doing to work within the bounds of the framework's abstraction starts to become silly. </p><blockquote> "I agree that is does suck now." ... 
"over time, better ways to do things are created and naturally, the old ways get laid to rest" – shaun<br /><br /><br /></blockquote><p>Yeah, this is the nature of things. The technology will be around forever, but the development world will pass it by. We like shiny new things.</p><blockquote> "Anyway, this 'hidding how HTTP works' philosophie that ASP.NET follows in every single corner of the framework is the real problem. Django, Ruby on Rails, and PHP doesn't try to hide the fact that you are building a website/page/app and help you in the process of coding with helpers, decorators or simple functions." – Angel<br /><br /><br /></blockquote><p>YES! HTTP, HTML, CSS, Javascript. These are the technologies we work with on the web. They're simple. Learn them, love them, embrace them. It'll also make your skills a lot more transferable should you ever be looking for work.</p><blockquote> "Newb" – rabbit<br /><br /></blockquote><p>pwned!</p><blockquote> "loooks like newly migrated from php/java." – web spider<br /><br /><br /></blockquote><p>Did you read my post? I have been using ASP.NET since it was in beta. I live and breathe this stuff and use it every day. I'm simply pointing out the flaws I see.</p><blockquote> "I've used PHP, Drupal, Rails and even FastCGI in the bad old early days and find I'm always coming back ASP.Net. Security, data abstraction layers, controls, validation, scalability, application recycling, caching, session management and great development/debugging environments are just to hard to pass up." – Mike<br /></blockquote><br /><p>Yeah, I love ASP.NET too. I don't use anything else on a serious basis. You definitely mentioned my favorite parts of it, especially the last one.</p><blockquote> "This model has been created back in what 1999/ 2000 when MS started working on .NET 1.0 (was released in 2002). So we are talking the model/architecture is almost a decade old, way before the Web 2.0/Ajax days." 
– Bart Czernicki<br /><br /></blockquote><p>Yes! It's old (mature as some say)! Is it at all possible there is a better way now?</p><blockquote> Chris Vanderheyden said <a href="http://jdconley.com/blog/archive/2009/01/12/10-reasons-asp.net-webforms-suck.aspx#128">a lot</a><br /><br />"Honestly, after more than 8 years of professional experience: YES, your SHOULD be out of that highschool mentality. (Look my editor can do only 2 colors i am so L33T...) "<br /><br />"I am a developer, i write logic, not translations. I WANT my HTML abstracted. I don't want to write zeroes and ones down to the NIC now do i. "<br /><br /></blockquote><p> Since you likely know me only from my 'it sucks' post, you're going to find this shocking. I agree with most of the justifications you made (except the ones I listed above). My biggest beef is with your comment on my #1. You obviously have no sense of humor. ;) And, why wouldn't you want to write html? It's a simple human readable markup language, not binary networking protocols. XHTML+CSS is abstraction at its best. In fact it's usually just as simple as the abstractions provided by ASP.NET controls. I mean, really, can you actually point at one of your ASP.NET apps that would run outside of the context of your modern web browser? Something other than html 4.0 or whatever you're using? You have to learn a lot about the quirks of ASP.NET to get things done well. Why not learn the quirks in html/css/js? Oh wait, you <em>do</em> have to do that too. The leaky abstraction abounds. </p><blockquote> "I only have 1 reason... Leaky abstraction over HTTP that introduces instead of removing complexity. Every other reason is a derivative or effect of this one reason" – Greg Young<br /><br /></blockquote><p>Thanks Greg. You always have a nice way of distilling things down. But that wouldn't make nearly as fun of a blog post!</p><blockquote> "I think that JD Conley is just sarcastic. 
Actually he loves .NET" - br_other<br /><br /></blockquote><p>No, I'm not being sarcastic. Perhaps dramatic. Yes, I love .NET. But it's not without its flaws.</p><blockquote> "sos un boludo" – Sebastian<br /><br /></blockquote><p>This is cooler than the "newb" comment! Trashing me in a foreign language!</p><blockquote> "You don't have to use &lt;%= ClientID%&gt; stuff at all There is a much better way. I claim ASP.NET webforms has the "best" integration with client side DOM. You think I am kidding ? Have you ever heard IScriptControl ? I guess you didn't" – onur<br /><br /></blockquote><p>Indeed I have heard of IScriptControl and use it quite a bit. It's an interesting and often useful abstraction. Though I always laugh at myself since to use it I add some C# code to generate some js code to call some other js code I could have just called in the first place if I were working in the markup.</p><p>And finally, we have the people who decided to write a full rebuttal on their slice of the net. Cool. Thanks for the link backs! Hope you enjoyed my comments.<br /></p><ul><li><a href="http://azamsharp.com/Posts/161_ASP_NET_WebForms_DO_NOT_SUCK_.aspx">ASP.NET WebForms DO NOT SUCK</a> – Mohammad Azam</li> <li><a href="http://ra-ajax.org/mythbusters-busting-the-myths-about-webcontrols.blog">Mythbusters – Busting the myths about WebControls</a> – Thomas Hansen</li> <li><a href="http://leedumond.com/blog/10-reasons-asp-net-webforms-still-rock/">10 Reasons ASP.NET Webforms (Still) Rock</a> – Lee Dumond</li></ul><p><a href="http://mikepope.com/blog/AddComment.aspx?blogid=2092">Mike Pope</a> also posted an interesting commentary on the matter. Us silly kids and our toys! 
Aren't we allowed to change our minds, backpedal, or get excited by new technology?</p><p>Happy hacking!</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-67178420528035192122009-01-14T22:19:00.000-08:002011-07-27T01:18:09.847-07:00Fire and Forget Email, Webservices and More in ASP.NET<p>Often times when you're working on a web site you want to fire and forget an email, a web method or, most common in our case, a Facebook call. There's a good chance there's a Framework method available to do that for you quite simply. They're suffixed with the word Async. For email there's the <a href="http://msdn.microsoft.com/en-us/library/system.net.mail.smtpclient.aspx">System.Net.Mail.SmtpClient</a> class. The following dirt simple code will send an email for you asynchronously:</p><p><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajN2MzNDEwMTEtZTU5Ny00NjBiLWIyODQtYTdjZjc3M2UzMTEy&hl=en_US">Download</a> the sample code<br /></p><pre class="brush: c-sharp;">var s = new SmtpClient();<br />s.SendCompleted +=<br /> (sender2, e2) =&gt;<br /> {<br /> //do something when the send is done.<br /> //retry if error, etc.<br /> };<br /><br />s.SendAsync(from.Text, to.Text, "", message.Text, null);</pre><p>Well, that's pretty darn simple! Create a new SmtpClient. Call SendAsync and pass in your message data. Cool. There's even a whole set of classes to help you with attachments, multiple formats (like html and text), etc. From your console app or Windows Service this will work beautifully! The problem is, in an ASP.NET page this won't work. If you do this in a Page_Load or button click event, for example, you'll get the following helpful error message.</p><blockquote> Asynchronous operations are not allowed in this context. 
Page starting an asynchronous operation has to have the Async attribute set to true and an asynchronous operation can only be started on a page prior to PreRenderComplete event.<br /><br /></blockquote><p>Basically what ASP.NET is saying is that it's not prepared for you to make an Async call. No problem! ASP.NET has a nifty page directive. Just set Async="True". The MSDN documentation says: "Makes the page an asynchronous handler (that is, it causes the page to use an implementation of IHttpAsyncHandler to process requests)." What does that mean? Well, there are a whole bunch of posts on this, so if you're not familiar, search around for <a href="http://www.google.com/search?q=asp.net+async+page">asp.net async page</a> and come back here. Also do a search for "async" in my blog. I've posted about it a lot. It's one of my favorite features in ASP.NET.</p><p>So, now you've got the Async page directive down and you think all is good. But then, suddenly, you notice page load times start to increase. Your phone is ringing. Users are complaining. After mere minutes of debugging (after all, you're a kung fu debugger, right?) you realize your ASP.NET page is <em>waiting for the email to send</em>. "What the heck is going on here? This was an Async call," you mumble under your breath. You curse Microsoft, and write an angry blog post about it. What happened?</p><p>When you set that Async="True" directive on your page you told ASP.NET that you want to do page rendering asynchronously. However, what you didn't realize is that you're doing things asynchronously with regards to the use of threads, and not the serving of the page. Let me clarify. With Async="True" ASP.NET <em>waits for all Async calls to complete before finishing page rendering</em>. It's designed so you can kick off long-running IO operations like calling a database, web service, writing files, and sending email, without tying up a valuable worker thread in your ASP.NET threadpool. 
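 </p><p>For reference, the directive is one attribute at the top of the page (the file and class names here are placeholders):</p><pre class="brush: c-sharp;">&lt;%@ Page Language="C#" Async="true"<br /> CodeBehind="SendMail.aspx.cs" Inherits="MyApp.SendMail" %&gt;</pre><p>So the worker thread is not held while the IO runs. 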
Instead, the IO operation gets queued up down in unmanaged Windows land, where IOCP magic and the shared IO threads kick in. If you truly want to fire-and-forget, and not have your Async calls affect your page load time, here's your answer.</p><pre class="brush: c-sharp;">using (new SynchronizationContextSwitcher())<br />{<br /> var s = new SmtpClient();<br /> s.SendCompleted +=<br /> (sender2, e2) =&gt;<br /> {<br /> //do something when the send is done.<br /> //retry if error, etc.<br /> };<br /><br /> s.SendAsync(from.Text, to.Text, "", message.Text, null);<br />}</pre><p>It should be noted that in this sample code when the SendCompleted anonymous method is called, you are <em>no longer in the ASP.NET context</em>. The SynchronizationContextSwitcher removed this context and put you in no context, so you're just free ballin'. This is important. You can't mess with the Request, Page, Response, etc. We're talking serious multi-threading now. In fact it's even likely that delegate will be executing at the same time as some other method in your page's lifecycle, on a whole other thread. So, pass anything you want to use from the page via the last parameter on the SendAsync call, pull it out of the EventArgs in your SendCompleted handler, and don't touch that page object or anything in it.</p><p>I must confess. I didn't write this SynchronizationContextSwitcher class. It was written by another developer on our team (Boris) and then improved by a random good Samaritan named <a href="http://haacked.com/archive/2009/01/09/asynchronous-fire-and-forget-with-lambdas.aspx#70428">Richard</a>. It's also based on <a href="http://www.codeproject.com/KB/threads/SynchronizationContext.aspx">this one</a> that's quite a bit more featureful/complicated.</p><p>Anyway, simply wrap your send (or any Async) call in a using block like this and, for the scope of that block, any Async operations will happen as if you were not even in ASP.NET and didn't have a Request context to worry about. 
Your page will be served immediately without waiting for your Async call to complete. Of course, this does have caveats. By doing a true fire-and-forget there is now the potential your email won't get sent and you won't even know about it. ASP.NET could shut down your app domain halfway through the send and you and the user would be none the wiser. So, care must be taken to either store these things in some other reliable place before the Async call, or (as in our case) usually whatever you're firing off isn't critical, so a few missed ones here and there won't matter.<br /></p><pre class="brush: c-sharp;">public class SynchronizationContextSwitcher<br /> : IDisposable<br />{<br /> private ExecutionContext _executionContext;<br /> private readonly SynchronizationContext _oldContext;<br /> private readonly SynchronizationContext _newContext;<br /><br /> public SynchronizationContextSwitcher()<br /> : this(new SynchronizationContext())<br /> {<br /> }<br /><br /> public SynchronizationContextSwitcher(SynchronizationContext context)<br /> {<br /> _newContext = context;<br /> _executionContext = Thread.CurrentThread.ExecutionContext;<br /> _oldContext = SynchronizationContext.Current;<br /> SynchronizationContext.SetSynchronizationContext(context);<br /> }<br /><br /> public void Dispose()<br /> {<br /> if (null != _executionContext)<br /> {<br /> if (_executionContext != Thread.CurrentThread.ExecutionContext)<br /> throw new InvalidOperationException("Dispose called on wrong thread.");<br /><br /> if (_newContext != SynchronizationContext.Current)<br /> throw new InvalidOperationException("The SynchronizationContext has changed.");<br /><br /> SynchronizationContext.SetSynchronizationContext(_oldContext);<br /> _executionContext = null;<br /> }<br /> }<br />}</pre><p>I whipped up a <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajN2MzNDEwMTEtZTU5Ny00NjBiLWIyODQtYTdjZjc3M2UzMTEy&hl=en_US">small sample project</a> to demo the 
effects I talk about here. There are two pages. One that is async, and one that isn't. It demos the error you get if you try to use an Async method on a non-async page, and simulates a slow email server on the async page. Then you can see the fire and forget in action.</p><p>Async methods are extremely useful, even if you're not using fire and forget. Most of the samples you see for doing asynchronous ASP.NET pages use the IAsyncResult and Begin*/End* methods. Those are pretty complicated, and if the Async method is available why not use it? I've written about the benefits of async programming quite a lot. Search for "async" up at the top right of the page.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-81500119869713854232009-01-12T19:24:00.000-08:002011-07-23T05:14:41.201-07:0010 Reasons ASP.NET Webforms Suck<p>I've always been a .NET Fanboy. I've been on the bandwagon since its inception. I've developed quite a few shipping .NET products for the web, Windows, and Linux. I've given talks at user groups, created a consulting company, and mentored developers new to .NET. I always experiment with the latest toys and try to stay ahead of the technology curve. In my most recent role at Hive7, I've been focused on web technology. We have some pretty large-scale games (millions of players) built on ASP.NET webforms and ASP.NET AJAX. It's been about 8 years since I've written a full-blown web app that wasn't in ASP.NET webforms. Sure, there's the occasional small PHP or static html site, but no "real" applications have been built on anything but ASP.NET. I think I've been missing out.</p><p>I'm going to preface this by saying one thing. Ever try to train someone new to ASP.NET? Especially someone with any other web programming experience. It's not easy. 
That to me is a sign of suck, or maybe fail.</p><strong>The Reasons (in order of frustration)</strong><br /> <ol><li>Other web developers assume you're inferior<br /> <p>Let's face it, if you're coding in ASP.NET you are NOT initially considered one of the cool kids. It's automatically assumed you're a corporate lackey with no programming fu. You have to prove yourself. It sucks. Yes, this is #1. After all, don't you want other people to think you're cool? Or am I the only one still living in high school...?</p> </li><br /> <li>One form to rule us all, one form to bind us<br /> <p>I don't think I have much to say on this, other than: Why? What was the design decision behind overloading the html form and only letting us have one? Why? Why? Why?</p></li><br /> <li>Viewstate<br /> <p>Ever accidentally generated a 1MB (simple) page by just using standard controls?</p> </li><br /> <li>ID insanity<br /> <p>Mapping id's in html elements to id's in code starts out innocent enough. But, throw in nested controls (a recommended design practice) and hold on for your life. Once you get used to it everything makes sense. But try showing your dhtml/javascript guy how to use codebehind to grab a ClientID and pass that to his javascript code...</p><br /> </li> <li>Html abstraction<br /> <p>I truly hate that in webforms you don't really write html code. Browser-independent rendering is just a bad, horrible idea. The abstraction sounds nice on the surface, but some day it will bite you. Web developers should know how to write html code, understand the web programming model, and the cross-platform implications of their code.</p> </li><br /> <li>Postbacks everywhere<br /> <p>Linkbuttons, and any of the controls with 'autopostback', should be taken out on the street and shot. Posting back to the initial page as the default to perform an action is just counterintuitive. And then, how do you consume this action? An event handler? 
Weird.</p> </li><br /> <li>Request lifecycle<br /> <p>Init, Load, PreRender. WTF? Try explaining that one to your javascript guy. The fact that we need a 10 step lifecycle for things to work sends off warning bells in my head.</p> </li><br /> <li>Getting data to the client<br /> <p>Ok, I've got this cool data driven web site. And now I want to do some AJAX. How do you interface your server code with your Javascript? You can pick one of 20 methods, none of which are simple, and all leave the developer scratching his head. Sometimes things magically work. Usually they don't.</p> </li><br /> <li>Ugly URLs<br /> <p>Ok, so this one is low on the list because the latest service pack added a routing engine. But hey, it's bugged me for the last 8 years. Customers want pretty URLs. Webforms did not deliver without much hackery.</p> </li><br /> <li>Codebehind<br /> <p>I love c#. But, the concept of codebehind just seems weird. Why is there a separate file that's coupled to the html code? This nifty abstraction has been the cause of so many developer questions and Visual Studio environment issues I don't even want to go there.</p> </li><br /> <li>The odd feeling that you have to beat the framework into submission to get it to do what you want<br /> <p>Ok, this is #11, I know. But hey, something just <em>feels</em> wrong in webforms. Like you're trying to stick a square peg through a round hole.</p></li></ol><p>As I look over this list I realize that most things I hate about ASP.NET Webforms relate to the choices that were made about abstractions. I don't understand what was so scary about the web programming model that these decisions were made. In fact, now that I think about it, I'd have been happier sticking with the classic ASP programming model than using webforms. Oh well. The last 8 years will now be known as "the time in my life when I had to code on that ASP.NET webforms junk". Ok, I'm done complaining for now.
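</p><p>To put some code behind at least one of these gripes, here's roughly what the ClientID dance (#4) looks like in practice. This is only a sketch; the control name "NameBox" and the script key are made up for illustration:</p><pre class="brush: csharp;">// codebehind for a page containing &lt;asp:TextBox ID="NameBox" runat="server" /&gt;<br />protected void Page_Load(object sender, EventArgs e)<br />{<br />    // NameBox renders with a mangled id like "ctl00_Main_NameBox", not "NameBox",<br />    // so the real id has to be emitted from the server for script to find it<br />    string script = string.Format(<br />        "var nameBox = document.getElementById('{0}');",<br />        NameBox.ClientID);<br />    ClientScript.RegisterStartupScript(GetType(), "grabNameBox", script, true);<br />}</pre><p>Now try explaining to your javascript guy why he can't just hard-code the id he sees in his own markup.</p><p>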
Posts in the foreseeable future will be about happy things like ASP.NET MVC, Azure, and jQuery.</p><p>Most recently I've been working with the ASP.NET MVC framework and I have to say, wow. What a relief. It reminds me of doing web programming when I actually wrote simple Html code for my first web site. It's not really the MVC pattern per se that attracts me. The joy comes from not being coupled to the whims of the ASP.NET framework developers. I can write javascript, html, and css. I can write server side code. And guess what, it's not necessarily coupled! Maybe in 8 years I'll be singing a different tune. But for now, I'm happy again.</p><p> Edit: I posted a <a href="http://www.jdconley.com/blog/archive/2009/01/16/asp.net-mvc-sucks-and-so-does-jquery-and-php.aspx">follow up</a>. </p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-41843395826277269522008-09-24T18:18:00.000-07:002011-07-23T05:10:06.923-07:00Debugging after a power outage<p>Here at <a href="http://www.hive7.com/">Hive7</a> we host all our servers with a hybrid co-location/hosting provider. When you host your servers in a colo facility there are a few key things you look for: multiple redundant internet routes, clean power, zero-interruption backup power systems, adequate cooling, and decent security. While our host has all of these (after all, they come standard in any decent co-location), they also provide us with a few services above and beyond a bare bones colo, like good prices on rented servers of any configuration and hardware load balancers. We've had our share of small mishaps, but things have been pretty smooth sailing. That is, until about 36 hours ago.</p><p>Sometime around 5am PDT PG&amp;E had a major power outage. Normally when this (rarely) happens in a colo, the backup batteries and generator carry you through without even a hiccup. Well, not this time.
Backup power systems faltered and servers went down. Things quickly came back up, but there was a catch: the air conditioner in our data center did not! Servers rapidly began overheating. Quick to react, the colo's on-site engineers hard-powered off a bunch of our servers (I'm still not quite sure why we didn't get a phone call to do this on our own the safe way).</p><p>Text messages flew (<a href="http://www.zenoss.com/">Zenoss</a> is your friend) and our chief sys admin dude rushed down to the data center to assess the damages as soon as he was alerted. On the surface there were a few major issues. Many of our larger RAID arrays were running degraded and needed some love. Some servers had not powered back on and were stuck. He spent all day trying to right the wrongs made by our colo's (lack of) backup power.</p><p>While he was doing that, guess what I was doing? Yeah, trying to make <a href="http://apps.facebook.com/knighthood">Knighthood</a> run. Knighthood is a pretty big web app. We have millions of users, and roughly 10,000 actively playing at any given point in time. With a few database servers running with degraded arrays, virtual machine hosts not running, and some other systems still not powered on, I set forth scouring logs and troubleshooting. One by one we got the necessary systems back up and running. First it was the email service, then the email invitation service, then the Active Directory domain controllers. Even with all of these up and running properly, the game was still performing poorly. It was exhibiting performance traits I'd seen before. It'd go fast for a bit, then completely hang, then go fast again.</p><p>In the past, a performance pattern like this has been caused by reader/writer lock contention where an upgrade to a write lock causes everything to stop.
It's also been caused by transactions hanging in the Distributed Transaction Coordinator, SQL blocking, contention on a big cache item in Memcached, or some code that is infinitely recursing. So, I did my normal troubleshooting in this scenario.</p><p>I popped open perfmon to look at request rates, bandwidth, cpu, threads, memory, etc. across the farm. Every single front-end IIS server (7 of them for this game) was processing absolutely nothing for a 20-second period. And by nothing, I mean nothing. IIS wouldn't even serve up an image! And CPU was 0% utilized. But once that 20 seconds was over we'd get a good 5 seconds of processing done. Everything was queued and no requests were dropped. I thought for sure the power outage had severely damaged one of our databases, a DNS server, a big chunk of cache (we run about 60GB of memcached), or something else obvious like that. It really <em>felt</em> like something was timing out and then letting the flood gates open.</p><p>At this point the logs weren't showing any errors to lead me down a debugging path. Since the behavior was happening consistently I decided to grab a <a href="http://blogs.msdn.com/tess/archive/2006/10/16/net-hang-debugging-walkthrough.aspx">hang dump</a> of the w3wp process. The dump was, well, completely surprising! Guess what was happening?</p><p>No really. Guess.</p><p>Give up? <strong>Nothing!</strong> Yeah, nothing. During that 20 seconds all the managed and unmanaged threads were completely idle, doing absolutely nothing. No locks were held. No pages were processing. I know that because of Tess's neat blog post about which <a href="http://blogs.msdn.com/tess/archive/2005/12/20/505862.aspx">threads you can ignore</a>. It's as if IIS just decided we didn't really want to process any more requests. This had me scratching my head. I repeated the process another 4 or 5 times with the same result. It seemed the problem must be somewhere in kernel space.
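</p><p>For the record, "grabbing a hang dump" here meant something along these lines; exact tool names and paths vary by setup, so treat this as an illustrative sketch rather than a recipe:</p><pre class="brush: plain;">adplus -hang -pn w3wp.exe -o c:\dumps    (non-invasive dump of the stalled worker process)<br /><br />0:000> .loadby sos mscorwks              (load the managed debugging extension in windbg)<br />0:000> !threads                          (list the managed threads)<br />0:000> ~*e !clrstack                     (walk every managed stack)</pre><p>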
None of my user-mode dumps found anything at all. That scared me.</p><p>I have no experience doing kernel debugging. So, well, this is where I partially threw in the towel. I called Microsoft support and opened a premier support case. It had been about 8 hours and we had a LOT of pissed-off paying users. After a few hours on the phone we captured some more user-mode dumps (they didn't believe me that there wasn't anything interesting there) and uploaded them. I wish saying "I am experienced doing production crash dump debugging" meant something to these guys... I'm not sure how many times I had to say "No, you see, when it hangs it uses 0% CPU!".</p><p>The Microsoft crash dump engineers went about their business and said they'd call me back when they found something (though I knew they wouldn't find anything and no amount of whining could make them skip this step). To their credit, since it was production and affecting our main line of business, they offered to do the debugging immediately rather than within 2 business days, which is the normal turnaround.</p><p>A couple hours later everything magically started working perfectly. I changed nothing. I called our sysadmin and he said he changed nothing that should have affected Knighthood. It had already been a long day and we decided to wait until the next day to find out what had fixed it. Sysadmin dude sent me an IM this morning to let me know he figured out the problem. Guess what it was?</p><p>No really. Guess.</p><p>Well of course, it was IIS logging! We have all our IIS logs pointed at a NAS. I never thought this would be an issue since, well, logging happens in a background thread, right? It can't possibly interfere with actual request processing. Turns out that is incorrect! The NAS was one of the last things that Sysadmin dude had brought up at the end of the day because it was only used for archiving and log files, a very low priority in a crisis scenario.
Well, apparently not!</p><p>Microsoft called me back about an hour later to let me know that the hang dumps did not uncover anything and it appeared that requests were simply being delayed before they could be processed. "Well, no $@*! I told you that 12 hours ago", I thought. So, I let them know we had fixed the problem and closed the case. I'm sure we could have seen it in a kernel hang dump in the IIS kernel-mode stuff, but the problem was not reproducing anymore and I didn't want to bother...</p><p>Now you know. IIS logging can clog up all your requests. So, if you're logging to a remote system over a Windows share, make sure it never goes down! Or, better yet, don't do that: log locally and ship the logs out on occasion.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-24131663267870521702008-07-17T10:33:00.000-07:002011-07-27T01:26:47.034-07:00ASP.NET - It's not just for DataGrids after all<p>Last night I gave a <a href="http://www.baynetug.org/DesktopModules/DetailXEvents.aspx?ItemID=327&amp;mid=49">talk</a> at the San Francisco chapter of the Bay.NET User's group. It was a lot of fun. Thanks for the great interaction, everyone! I am also thoroughly impressed that I finished nearly on time and didn't have 10 slides left!
Usually I have way too much material for these things.</p><p>As promised, here are <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajOTVhNTU3ZjItYjYyYi00MTE0LWIwYTEtYzI1NWIxZjAxZWM4&hl=en_US">the slides</a> from the talk.</p><p>If any other groups out there are interested in this talk, let me know!</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-69300635356980478982008-05-24T10:26:00.000-07:002011-07-23T05:04:35.929-07:00Windows Server 2008<p>Anybody who's been watching will have noticed that I have had <a href="http://jdconley.com/blog/archive/2006/06/02/fun-installing-vista-beta-2-on-amd-x64.aspx">some</a> <a href="http://jdconley.com/blog/archive/2007/10/07/vista-rant.aspx">fun</a> trying to get x64 Windows Vista stable on my workstation. Well, because of all that fun I've been running good ole trusty XP Pro 32-bit (and Ubuntu) for the last 8 months or so. I noticed both Server 2008 and Vista SP1 came out and I thought, "hey, it's time for an upgrade!"</p><p>After my last <a href="http://jdconley.com/blog/archive/2007/10/07/vista-rant.aspx">episode</a> installing Vista x64 on my workstation, I decided I should instead go for Windows Server 2008 x64 – I know, it shouldn't really make a difference, but it made me feel better! So I log in to MSDN, download it, burn it to disc, and away I go. Before I know it, everything is installed and working. It's been over a week now and it's still working. Not a single crash! Amazing... It's almost like, dare I say it, I got a Mac crossed with OpenBSD and a touch of Linux! ;)</p><p>I have to say, they did something right with Windows Server 2008 for us hardcore workstation users. It is the perfect blend of security, customizability, and sexiness. You gotta love a server operating system with all the IIS 7 goodness that lets you turn on Aero.
:) Both of my printers even have drivers now!</p><p>However, I must confess, I did cheat a little bit. I disabled the on-board sound card that was the culprit for many of my BSODs with the prior attempts at Vista x64 and bought a PCI sound card. Ah well, Server 2008 rules, Vista sucks! There. I said it.</p><p>Oh yeah, I did have one issue. For some stupid reason it didn't want to activate, giving me this error:</p><blockquote> Windows Activation Error: A problem occurred when Windows tried to activate. Error Code 0x8007232B. For a possible resolution, click More Information. Contact your system administrator or technical support department for assistance. DNS name does not exist.</blockquote><p>Luckily there are about a million hits on Google on the subject. Here's the <a href="http://www.chapterzero.co.uk/articles/fix-vista-activation-dns-error-0x8007232b.aspx">most concise one</a>. Yeah, you read that right: enter the same exact product key and click Activate again. You would think that's an error that would have been fixed in over a year...</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-62755559937871820642008-03-19T14:54:00.000-07:002011-07-23T05:02:37.969-07:00A SQL Table Manhunt<p>As I mentioned in my last post, I recently took on the exciting new position of Chief Software Architect at <a href="http://www.hive7.com/">Hive7, Inc.</a> We're building all kinds of great stuff. Our most popular game, <a href="http://apps.facebook.com/knighthood">Knighthood</a>, has over a million registered users and over 100,000 daily actives. This game is growing quickly. Over 125,000 people added the game two weeks ago, and over 150,000 added it in the last week. The game came into existence in December.</p><p>This massive growth leads to some exciting scalability challenges. I'll be spending a lot of time talking about that in the future.
Today's post is a simple tidbit related to databases. Our current performance bottleneck is database write I/O. We have enough memory in the systems and a caching layer, so the disks barely need to read. Tracking this down is a whole other post, but it's fairly simple. Once we knew we were write-I/O limited, we set out to find out why.</p><p>The original DB physical layout started out pretty simple: one file for data, one for logs. In the next 3 or 4 revisions more and more files were created. Why? Well, so we could run this nifty little query and find out <em>which</em> of our db tables/indexes/etc. were causing the write bottlenecks:<br /></p><pre class="brush: sql;">select<br /> db.name as DbName,<br /> f.name as FileName,<br /> f.physical_name as FilePhysicalName,<br /> vf.TimeStamp,<br /> vf.NumberReads,<br /> vf.BytesRead,<br /> vf.IoStallReadMS,<br /> vf.NumberWrites,<br /> vf.BytesWritten,<br /> vf.IoStallWriteMS,<br /> vf.BytesOnDisk<br />from fn_virtualfilestats(-1,-1) vf<br /> inner join sys.databases db on db.database_id = vf.DbId<br /> inner join sys.database_files f on f.file_id = vf.FileId<br />order by vf.NumberWrites desc</pre><p>If you have physically separated your various database tables and indexes into different files, the output from this function will give you all kinds of useful information about which ones are most accessed, and which put the most strain on your I/O subsystem. Optimizing it, of course, is up to you.
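</p><p>In case you're wondering how tables and indexes end up in separate files in the first place: filegroups. Here's a rough sketch (the database, file, table, and index names are all made up for illustration):</p><pre class="brush: sql;">-- give a hot table its own filegroup and file so the stats above<br />-- can attribute its write I/O separately<br />alter database MyGame add filegroup HotTables;<br />alter database MyGame add file<br /> (name = N'HotTables1', filename = N'E:\Data\HotTables1.ndf')<br /> to filegroup HotTables;<br />-- rebuilding the clustered index moves the table onto the new filegroup<br />create unique clustered index PK_Player on dbo.Player (PlayerId)<br /> with (drop_existing = on) on HotTables;</pre><p>Of course, splitting files only buys performance if they land on separate spindles; for measurement purposes the split alone is enough.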
:)</p><p>If you enjoy big-scale, fast-moving, tough problems, we're hiring for a <a href="http://www.linkedin.com/jobs?viewJob=&amp;jobId=493724&amp;fromSearch=0&amp;sik=1205963575638">Lead Web Designer</a>, a <a href="http://www.linkedin.com/jobs?viewJob=&amp;jobId=493713&amp;fromSearch=1&amp;sik=1205963575638">Brilliant Lead DBA/Sysadmin</a>, and a <a href="http://www.linkedin.com/jobs?viewJob=&amp;jobId=493706&amp;fromSearch=2&amp;sik=1205963575638">Web Games Developer (.NET)</a>!</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0tag:blogger.com,1999:blog-1741199026308686058.post-91684158332158468982008-02-28T10:14:00.000-08:002011-07-27T01:33:17.958-07:00C# 3.0 Overview<p>It's been forever since my last post. I promise I'll do better. I've just been juggling three jobs. ;) But that has changed (more on that soon)!</p><p>The last couple of nights I did the same talk at two different user groups: the <a href="http://www.sacnetug.org/">Sacramento .NET User's Group</a> and the <a href="http://www.centralcaldotnet.com/">Central California .NET User's Group</a>. Thank you guys for having me, and for not throwing any tomatoes. I think we had a good time at both events, though I did take up the whole two hours both times.</p><p>The talk was based on <a href="http://www.yoda.arachsys.com/csharp/">Jon Skeet's</a> upcoming book titled <a href="http://www.manning.com/skeet">C# in Depth</a>, which I had the pleasure of reviewing and providing technical feedback on. We went through the evolution of C# from 1.0 to 3.0, explored a bunch of the new features, and played <a href="https://msmvps.com/blogs/jon.skeet/archive/2008/02/14/human-linq.aspx">Human LINQ</a> (hilarious). Oh yeah, it was pointed out to me in the Sacramento group that the word "jumped" should really be "jumps". That's what I get for copying my work! hah!
If anybody in the Sacramento area wants to help, I'd like to do it again and videotape it...</p><strong>Stuff to Download</strong><br /><ul><li><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajYWJmYzFjOWEtNThhYy00YzdkLThiZWMtMzVlODBiNGY1MGM1&hl=en_US">C# 3.0 Overview Presentation</a> – In both talks I didn't have enough time to bore you guys with the "in depth" slides. Pick up Jon's book to learn the nitty-gritty about how all that stuff works.</li> <li><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajMmYwYjgzOTMtZDhjMy00Yzg5LWFmZTYtOTg3M2Q2ZTU3Njk3&hl=en_US">A Sorted Affair 2</a> – A few weeks ago I published the first version of this on my blog. This one is much cooler.</li> <li><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajNmMyODc4ZmMtNzUzZC00ZTgzLThhYzUtMjlhOGQzMTYyOGQ4&hl=en_US">Human LINQ</a> – The code we executed with our Human LINQ provider.</li> <li><a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B7Ew2HKAAmajZTc4Mjg3ZTYtZGFlNy00MTgwLWJhNjEtODlhYjA3YWFlYjZj&hl=en_US">Sort Performance</a> – A quick exploration of the relative sorting speeds using different sort methods.</li></ul><br /><p>I'll write up another quick post in a bit on the sort performance. It's quite interesting, indeed.</p>Joelhttp://www.blogger.com/profile/13506379062750253983noreply@blogger.com0