TheHappTroll: mstang1988: TheHappTroll: All you guys calling for SSDs, why not more RAM? Win7 Pro supports 192GB.

*Yawn*, why not 8TB?

If only I had a true server at home.

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

mstang1988: TheHappTroll: mstang1988: TheHappTroll: All you guys calling for SSDs, why not more RAM? Win7 Pro supports 192GB.

*Yawn*, why not 8TB?

If only I had a true server at home.

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

The thing that seems to give my computer the most hangups is websites that think they have to act like full-featured apps.

This. I am tired of websites written in Flash with 20 JavaScript imports. I tire of just poorly written sites and shiatty underlying technologies. I'm tired of the whole Gawker hub of media sites requiring that you enable JavaScript just to display their main body text. I'm tired of error checking in form submissions being so broken that valid information has to be re-entered. I'm tired of Google using the www.google.com domain for captchas instead of something like apis.google.com, so that my script blocking is even less sane. I detest when password constraints actually enforce poor practices instead of encouraging good ones; where hxWoby2$ is a good password, but theysaythestrawthatbrokethecamelsbackwasecma is not. I find standards and style guilty both of A) not being well established and B) not being adhered to. The standards bodies have failed, and web development in general is broken from all angles. To those who disagree, I ask why a CSS reset is necessary. Why is so much vendor-specific and version-specific code necessary?

As for shiat in general, perhaps if there was less focus on flair and more on performance and maintainability, we might actually be progressing beyond just having games with bigger explosions and videos with more pixels, interface devices just as shoddy as a QWERTY keyboard, and voice recognition that can understand a bit of Klingon but cannot recognize the single-syllable English word "no". I pray that one day word suggestion has advanced enough to recognize from context, and without manual input, that yes, I do mean the word coont, and not cantata. For every single increase in processor cores, HyperTransport links, front-side bus speed, bit depth, and resolution, poor practices and the general abuse of the very principle of technology (to enable, not to hinder) are fivefold increased. We'll never have truly sane multitasking in a world where an application is allowed to use 100% of CPU bandwidth just to draw a page of text and half a dozen low-res JPEGs. Instead, I live in an age of arbitrary progress indicators; of metalanguages meant to aid in writing JavaScript and CSS because they are so bad at what they do, yet are essential; of cross-site scripting being standard behavior and not the fringe. Where SQL injection is [still] a commonplace occurrence on sites that retain sensitive data. Where a line in CSS like a { width: 0; } is commonly necessary to keep hyperlinking from bleeding through to where there is no valid content to apply linking to.
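The password point above is easy to put numbers on. A minimal sketch using idealized character-set entropy (real passphrase strength depends on how the words were chosen, so these figures are upper bounds, not guarantees):

```python
import math

def charset_entropy_bits(length, charset_size):
    """Idealized entropy of a random string: length * log2(charset size)."""
    return length * math.log2(charset_size)

# An 8-character password drawn from the ~94 printable ASCII characters:
short_complex = charset_entropy_bits(8, 94)    # roughly 52 bits

# A 44-character all-lowercase passphrase like the one in the rant:
long_simple = charset_entropy_bits(44, 26)     # roughly 207 bits

print(f"hxWoby2$-style:   {short_complex:.1f} bits")
print(f"passphrase-style: {long_simple:.1f} bits")
```

Even under a much less generous word-level model, the long passphrase wins comfortably, which is exactly why constraints that cap length while demanding symbols push users toward the weaker option.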

mstang1988: TheHappTroll: mstang1988: TheHappTroll: All you guys calling for SSDs, why not more RAM? Win7 Pro supports 192GB.

*Yawn*, why not 8TB?

If only I had a true server at home.

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

I sometimes use several VMware virtual machines networked together; they like RAM.

I'm pretty sure it's voodoo-practicing goblins on coffee break trying to keep everything at a leisurely pace so they don't make the working environment too hot from all the friction of goblins flying all over the place.

A: Because shiatty software developers insist on coding their software to use every last scrap of system resources it can, to prove to you how awesome, shiny and packed with features their shiatty software is.

LineNoise: Shakespeare's Monkey: dopeydwarf: You're full of spyware from all the porn sites you surf without protection, you filthy animal.

Yup, basically this. Get an iPad for your dirty deeds.

I swear my iPad has gotten really slow lately. I even restored the thing. I'm sort of going into tinfoil-hat land and thinking that Apple is intentionally slowing down the 1st-gen ones to get me to upgrade.

I have wondered the same thing. However, logically, each OS upgrade does more than the previous one. Since the hardware is not designed for that, it will not be able to keep up, so it will slow down. What pisses me off is that Apple forces you to upgrade via iTunes. So even if you don't upgrade right away, they will shove it down your throat, because iTunes will become incompatible. If you do upgrade, your iPad will become slow and useless. So either way we end up with a useless POS.

etherknot: You sure about this? Either your resolution/bit depth is incorrect or you are mistaken about the CPU.

100% Positive.

Packard Bell 286/AT with 640K of memory. And it would do VGA, but there weren't many VGA games back then, and the few that did exist rarely worked right. The EGA stuff (16-color) tended to work better even if it didn't look nearly as pretty.

It even had a nice optical mouse. Needed its own special gridded mousepad, but it worked great.

- OOP & virtual functions
- Pervasive use of non-native (aka script) languages
- CPUs are thousands of times faster, but RAM has not been scaling anywhere near as fast
- Odd stuff like code touching 1 byte in a cache line: the CPU has to read 64 bytes or more to do this
- Poorly written multicore programs can fight for access to the same memory and destroy performance

On the plus side, if you care about speed, modern compilers for C and C++ are really, really good.
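The script-language overhead in the list above is easy to see firsthand. A rough illustration (exact timings vary by machine; the point is the gap between an interpreted loop and the same work done by the C-implemented builtin inside the interpreter):

```python
import timeit

data = list(range(100_000))

def python_sum(xs):
    """Interpreted: one bytecode-dispatch cycle per element."""
    total = 0
    for x in xs:
        total += x
    return total

# Same answer either way...
assert python_sum(data) == sum(data)

# ...but the builtin, implemented in C, typically wins by roughly 10x:
t_py = timeit.timeit(lambda: python_sum(data), number=20)
t_c = timeit.timeit(lambda: sum(data), number=20)
print(f"pure-Python loop: {t_py:.4f}s   builtin sum: {t_c:.4f}s")
```

This is the trade the later posts describe: you pay interpreter overhead per operation in exchange for developer time and safety, and you claw speed back where hot paths drop into compiled code.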

akula: It takes quite a bit more to push a full graphical interface with transparent elements at high resolution while running several other programs at once and maintaining constant connectivity with various outside servers.

Then they should nix the component-heavy parts of the GUI and make everything nice and light again. That way your box will go FASTER and be able to do more things that aren't what some designer geek at Microsoft thought was fun to code.

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

8 TBs? I wave my dick at 8 TBs!

I'll take 16TB of PCI-E SSD, rocking 52 Gb/s transfer speeds and 1.4 million IOPS, ALL ON ONE CARD, please and thank you.

An 8TB RAM drive would dominate a PCI-E drive in transfer speeds; not even close. Standard desktops measure around 4-8GB/s with current RAM, if I remember correctly, and that's with only one memory controller running in dual/triple/quadruple channel on i7s. The IOPS? Again, not even close. The only downside is that RAM is volatile, but why not just keep a UPS running? Remember, an SSD is only as fast as it can transfer data to RAM. Granted, 16TB would be nice, but meh, that's moot here. Thanks for playing.
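The bandwidth comparison above can be sanity-checked with the standard peak-bandwidth formula (channels x transfers per second x bytes per transfer). The module speeds below are illustrative, and I'm reading the card's "52 Gb/s" as gigabits; both are assumptions, not specs from this thread:

```python
def peak_bandwidth_gbs(channels, mt_per_s, bus_bytes=8):
    """Theoretical peak DRAM bandwidth in GB/s.

    channels  : number of memory channels
    mt_per_s  : megatransfers per second (e.g. 1333 for DDR3-1333)
    bus_bytes : bytes per transfer (64-bit bus = 8 bytes)
    """
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

# Triple-channel DDR3-1333, typical of the i7-900 era mentioned upthread:
ram_peak = peak_bandwidth_gbs(3, 1333)
print(f"RAM theoretical peak: {ram_peak:.1f} GB/s")   # ~32 GB/s

# The PCI-E card's 52 Gb/s, if that means gigabits, is 6.5 GB/s:
print(f"PCI-E card: {52 / 8:.1f} GB/s")
```

Real measured throughput lands well below the theoretical peak, which is consistent with the 4-8GB/s figure quoted above for actual copies.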

InfamousBLT: I recall tales of one game (I don't remember the name, and I have never played it), but it was a chess game. For some reason, the difficulty of the game increased with the processing power. So, the faster computer you had, the harder the game was.

Apparently it is unplayable on modern computers by anyone except chess geniuses, because they didn't put a cap on the difficulty.

That sounds like a little bit of an urban legend to me, since the smartest chess program I know of right now is Deep Fritz 13, which was only recently updated to be multicore-capable, and that software can only analyze around 8 million positions per second by brute force, compared to the 200 million positions per second that the Deep Blue supercomputer was able to analyze back in the mid-'90s.

For an old chess program from the DOS era, it seems like you wouldn't be able to get anywhere near the performance of Fritz 13, because of inefficiencies in the older chess algorithms (especially if it was written before the Deep Blue matches) and the limitations of apps built for DOS (lack of multicore support, limits on the amount of memory a 16-bit app can address, etc.). I'm sure there might have been some chess programs from the DOS era that would benefit from the power of current hardware, but I don't see them getting to the level where it would take a Kasparov to beat them. Maybe a UNIX-based program might be different, since that OS is more scalable, but I can't see a consumer-level chess program scaling like that.

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

I sometimes use several VMware virtual machines networked together; they like RAM.

Yes, VMs eat resources. Still, if you are running that many VMs at home, you might want to figure out why you need that many systems.

Mad_Radhu: InfamousBLT: I recall tales of one game (I don't remember the name, and I have never played it), but it was a chess game. For some reason, the difficulty of the game increased with the processing power. So, the faster computer you had, the harder the game was.

Apparently it is unplayable on modern computers by anyone except chess geniuses, because they didn't put a cap on the difficulty.

That sounds like a little bit of an urban legend to me, since the smartest chess program I know of right now is Deep Fritz 13, which was only recently updated to be multicore-capable, and that software can only analyze around 8 million positions per second by brute force, compared to the 200 million positions per second that the Deep Blue supercomputer was able to analyze back in the mid-'90s.

For an old chess program from the DOS era, it seems like you wouldn't be able to get anywhere near the performance of Fritz 13, because of inefficiencies in the older chess algorithms (especially if it was written before the Deep Blue matches) and the limitations of apps built for DOS (lack of multicore support, limits on the amount of memory a 16-bit app can address, etc.). I'm sure there might have been some chess programs from the DOS era that would benefit from the power of current hardware, but I don't see them getting to the level where it would take a Kasparov to beat them. Maybe a UNIX-based program might be different, since that OS is more scalable, but I can't see a consumer-level chess program scaling like that.

I would suspect there was an algorithm that allowed the computer to go X levels deep into the analysis tree for a given amount of time. As CPU frequency went up, so did X, the depth of the analysis tree. I would agree that it's not the best algorithm and likely not unbeatable. God, I have forgotten the proper terms for AI trees :-(

If you had a use for that at home I would be surprised, plus I suspect your power bill might go up... Only once have I ever been able to push my 24GB of memory and hexacore 3.2GHz i7 to 100%, and that was video editing: rendering 1080p footage and rotating it 90 degrees.

I sometimes use several VMware virtual machines networked together; they like RAM.

Yes, VMs eat resources. Still, if you are running that many VMs at home, you might want to figure out why you need that many systems.

Urgh, reminds me of my friend's computer I looked at a week or two ago. It was running a bit slow, Flash crapped itself when trying to update, the usual "OMG BROKEN COMPUTER!" stuff, right? Ehhh, not so much. First, it was a Compaq, which I haven't seen in some time. Then, after it FINALLY booted up, it took its damn sweet time simply letting me check the device specs, much less open a basic file folder. It was some old 500-ish megahertz chip with 356 megs of RAM running XP SP3. Sweet candy-coated Christ. o_O

It got worse when I opened up MSConfig and saw all of the extraneous bullsh*t that was in the startup. Y'know, all of that bloatware and other garbage that comes pre-installed on laptops, plus all of the other extra nonsense that software coders just stuff into startup without considering that maybe, just MAYBE the poor system would load faster and cleaner without it. (And why the SH*T was the AV running a scan right as the OS was loading? Save it for later!!) I understand that technically programs will load faster if some stuff starts when the computer does, because it's already partially loaded into memory and all, but FFS, some programs don't need that, and some systems can't handle it...

So, you remember in Apollo 13 where the engineers were trying to figure out how to boot up the Command Module with only 20 amps of power? Yeeeeeah, I spent a few hours doing something similar. I pulled out every damn thing that I could out of startup, and somehow I was still able to leave enough in there that it ran properly. A quick replacement of the AV and some miscellaneous software updates and it's running MUCH better. As soon as she gets a couple of major bills paid off, we're going shopping for a new system for her so I don't have to do that again for a long time. :)

RichieLaw: weiserfireman: JohnnyC: Egoy3k: My PC is super fast. Honestly, the coolest (IMO) "new" tech is SSDs: essentially your OS runs as if it were completely loaded into RAM, which is pretty sweet.

SSDs are extremely nice to have. :)

I have four Windows 7 PCs at work with SSDs in them. They take 22 seconds from power-on to ready-to-log-in. The users had them for 3 months before I started getting "they aren't as fast as they used to be". Luckily I had benchmarked them: they were still taking exactly the same amount of time to do everything. The users' perception had changed.

Sounds like typical users.

/Stupid users.

22 seconds on an SSD? What the hell?

From cold boot to usable with Chrome open, my box with an i5 3570K and a Crucial m3 SSD takes between 5 and 6 seconds... (with Chrome set to open on startup)

etherknot: akula: The first PC I ever used was a 286 running DOS. It doesn't take too much to run a command-line interface, text-based everything, and 256-color graphics at 640x480 resolution.

You sure about this? Either your resolution/bit depth is incorrect or you are mistaken about the CPU.

Actually, the very first VGA machines ever made were 286s: the PS/2 Model 50 and 60, in 1987. And you could buy a VGA card from IBM for an ISA machine, so you could upgrade an older 286.

Now, standard official IBM VGA gave you a choice of 640x480 (and 16 colors) or 256 colors (and 320x200), though it could be tricked into a 360x480 256-color mode with sufficiently devious code. But many of the VGA clone cards went ahead and made the fairly minor adjustments necessary to support 640x480 and 256 colors simultaneously.

And there were several years where new 286s were being made after the VGA clone cards came out. The 386DX required more expensive motherboards than a 286 or 386SX, and a 286 was clock-for-clock almost the same speed as a 386SX when running 16-bit code, while costing less. As a result, 286s with VGA clone cards that could support 256 colors shipped in the millions.
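The arithmetic behind those mode choices is simple: stock VGA is addressed through a 64KB real-mode window, so 256 colors (one byte per pixel) at 320x200 fits exactly, while 640x480x256 does not, which is why the higher modes needed bank switching or planar tricks:

```python
SEGMENT_BYTES = 64 * 1024  # the 64KB real-mode video window

def framebuffer_bytes(width, height, bits_per_pixel):
    """Total framebuffer size for a given mode."""
    return width * height * bits_per_pixel // 8

# Stock 256-color mode: 320x200 at one byte per pixel...
assert framebuffer_bytes(320, 200, 8) == 64000    # fits in one segment

# ...while 640x480 at 256 colors overflows it nearly 5x:
assert framebuffer_bytes(640, 480, 8) == 307200

# 640x480 in 16 colors is half a bit-plane's worth per pixel overall:
print(framebuffer_bytes(640, 480, 4))             # 153600 bytes total
```

The "devious code" mentioned above worked by reprogramming the card into unchained planar layouts so more pixels could hide behind the same 64KB window.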

One thing that hasn't been discussed and which I'm curious about is the cumulative effect of cookies.

Sure, each one's needs are tiny, but every web page you go to seems to require them to be enabled, and every one of them is reporting back to base what other sites you're visiting, how long you linger, etc.

Has anybody ever tried to estimate what proportion of traffic is due just to cookie data going back and forth, completely independent of what humans are doing?
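I haven't seen a published figure either, but a back-of-the-envelope version is straightforward; every number below is an assumption plugged in for illustration, not a measurement:

```python
def cookie_overhead_fraction(cookie_bytes, request_bytes, response_bytes):
    """Rough share of total wire bytes taken by cookie headers.

    Cookies ride along on every request to a matching domain, so the
    overhead scales with request count, not with what the human does.
    """
    total = cookie_bytes + request_bytes + response_bytes
    return cookie_bytes / total

# Assumed: ~1.5KB of cookie headers, ~0.5KB of other request headers,
# ~100KB average response body:
frac = cookie_overhead_fraction(1500, 500, 100_000)
print(f"{frac:.1%} of bytes on the wire")
```

Under those assumptions it's a percent or two of traffic; pages with many small third-party requests (trackers, beacons) would push the fraction considerably higher, since the response term shrinks while the cookie term repeats.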

I'm assuming bloated code has been mentioned already, 'cause that's what does it. Basically, programmers have gotten sloppier since, with faster chips and more memory, it doesn't matter if you waste a bit here and there. So you end up with bloated code. Hell, the overhead for Windows could be a lot less than it is.

WhyteRaven74: I'm assuming bloated code has been mentioned already, 'cause that's what does it. Basically, programmers have gotten sloppier since, with faster chips and more memory, it doesn't matter if you waste a bit here and there. So you end up with bloated code. Hell, the overhead for Windows could be a lot less than it is.

Does... what? The premise is stupid; things don't take as long today as they did on subby's first computer. Yes, there is code bloat. Yes, programmers are lazier today because they can be. Still, things don't take as long as they used to. If you believe they do, go find one of those old systems and try working on it for a bit.

Yup. When I started with computers there was no such thing as spell check.

Then spell check was something you ran to check over your work.

Next came an option to use it all the time.

These days it's simply there all the time, flagging anything not in the dictionary.
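That always-on model is, at its core, just a dictionary-membership check on every word as you type. A minimal sketch (the tiny word set here is obviously a stand-in for a real dictionary):

```python
def flag_unknown(text, dictionary):
    """The always-on model: flag every word not found in the dictionary."""
    words = text.lower().split()
    return [w for w in words if w.strip(".,!?") not in dictionary]

dictionary = {"when", "i", "started", "with", "computers", "there",
              "was", "no", "spell", "check"}

print(flag_unknown("When I started with computerz there was no spel check",
                   dictionary))
```

The evolution described above is mostly about when this check runs: as a separate batch pass over the document, then on demand, then continuously on every keystroke, which only became affordable as the hardware outran the cost of the lookup.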

Hyperbolic Hyperbole: Umm... were you even AROUND back in the days of dial-up BBSes? Seven minutes to download a single sexy JPG? Squee'd your pants creamy when mom bought you a 28.8 modem? I mean, that shiat was awesome, but DON'T pretend it isn't any better than it was in '95.

Not only is your computer doing more, but your developers are doing less. Back in the day, when hardware constraints were exceedingly tight, developers were forced to develop within those constraints. This was very expensive: it takes much more developer time to write highly optimized code that spares memory, CPU, and disk space. Developer time is expensive; CPU time is very cheap.

Yup. Across multiple versions of DOS, one of the things I did very soon after upgrading was to dig through COMMAND.COM, locate the default environment size, and increase it. IIRC it was version 6 that added many layers of crap, making it *MUCH* harder to find. I'm sure it represented a change in the underlying language.

ObscureNameHere: Point-of-Order: Isn't the BIOS in most machines basically the same BIOS from 30 years ago?

No. I've flashed this machine; the BIOS image in it is bigger than the total supported memory of the original PC.

Meh, software nowadays is 100 times more complicated. So you can either spend 100 times as long writing it, or use 100 times as much abstraction (like using a managed language, or multiple frameworks, or COM, or what have you). Abstraction adds overhead. It's business, unfortunately.

I just upgraded to an Intel i5 running at 4.3GHz, 16GB of RAM, and an SSD boot drive. It sucks because this computer is so fast it makes all other computers feel slow. I can alt-tab out of a running game and let it sit in the background using 1 to 2 gigs of RAM with no noticeable effect; I routinely get Firefox up to 2GB. It is also awesome for running virtual machines.

Not only is your computer doing more, but your developers are doing less. Back in the day, when hardware constraints were exceedingly tight, developers were forced to develop within those constraints. This was very expensive: it takes much more developer time to write highly optimized code that spares memory, CPU, and disk space. Developer time is expensive; CPU time is very cheap.

Not only that, but back then, to make things fast, nobody really used interpreted code or safe libraries to protect against buffer overflow exploits, etc. These days you have things like Java and Python and .NET that trade a bit of execution time for a lot of developer time and a lot more application hardness against exploits. (Feed a Python or Java or .NET program a giant pile of garbage data and you will get a program-generated error message or, if the developer was lazy/careless, an interpreter-generated exception message. Throw some C/C++ application running on Windows 3.1/95/98 a pile of garbage data and either your execution pointer goes into la-la land or goes straight into executing whatever binary is in your garbage pile: easy to exploit.)

This does not make modern code invincible. A Python developer could still go out of his/her way to be a knucklehead and read data objects out of a save file with eval()/exec()/execfile(), and then a malicious file will, when loaded, instruct the interpreter to do Bad Stuff™.
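The safe alternative the knucklehead skips is a literal-only parser. A sketch using Python's ast.literal_eval, which rejects anything containing calls or attribute access, so both garbage and malicious payloads fail cleanly instead of executing:

```python
import ast

def load_save_file(text):
    """Parse untrusted save data as Python literals only.

    Unlike eval(), ast.literal_eval refuses function calls, attribute
    access, and other executable constructs, so a malicious or corrupt
    file raises an error here instead of running code.
    """
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None  # garbage or malicious input, handled gracefully

print(load_save_file("{'level': 3, 'hp': 97}"))               # normal save
print(load_save_file("__import__('os').system('rm -rf /')"))  # rejected -> None
print(load_save_file("\x00garbage\xff"))                      # rejected -> None
```

For anything more structured than simple literals, a format with a real schema (JSON, protobuf, etc.) is the usual answer; the point is the same either way: parse, never evaluate.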