Adding more memory to 32-bit laptop

Hi. I've got a 32-bit laptop with 2 GB of memory and I'm thinking about maxing it out.

Exact model: Toshiba Satellite L305-S5918.

I am running Windows 7. I'm aware that a 32-bit OS won't fully utilize all 4 GB, but can anyone give me an idea of what kind of improvement I'll get by adding another 2 GB, and under what circumstances I'm most likely to see it? And how much difference would switching from 32-bit to 64-bit make?

You can download and run CPU-Z to be certain whether your processor supports 64-bit operation. Note that even if it does, you will still need a 64-bit operating system to take advantage of it. If you're up for re-installing Windows 7, there's a great how-to guide here on Ars.

That being said, depending on your normal workload, you may not even see a benefit from increasing RAM at all. The advice continuum provides is sound.

The chipset only supports 4 GB, though, so an upgrade to 64-bit might be of marginal benefit at this point.

Seriously, adding a single 2GB stick will take usable memory out to about 3.25-3.5 GB, give or take, with a best guess toward the lower end of the range; the chipset reserves part of the 4 GB address space for devices, so some of the installed RAM goes unaddressed. That will generally show some benefit, and it's a reasonable price/performance stopgap to breathe a little life into the old laptop before an eventual replacement.
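For the curious, the arithmetic behind that estimate is just address-space subtraction. A rough sketch in Python; the reservation size here is an illustrative guess, since it varies by chipset and graphics hardware:

    # A 32-bit OS has a 4 GB physical address space, and device/MMIO
    # reservations are carved out of it, hiding some of the installed RAM.
    ADDRESS_SPACE_GB = 4.0
    mmio_reserved_gb = 0.75          # illustrative; often ~0.5-0.75 GB
    usable_gb = ADDRESS_SPACE_GB - mmio_reserved_gb
    print(usable_gb)                 # ~3.25 GB visible to Windows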

Improvement in what? 2 GB is about the comfortable minimum for light use in Windows Vista/7. What is it you expect to see the most improvement in? Some things, for example, may be processor-limited (and that T3400 is a limited processor!).

If it's cheap (it should be), then I'd take it to 4 GB regardless, because keeping a machine in active use with less is pointless.

Granted, but RAM is cheap and really easy. Upgrading to an SSD is not cheap and not particularly easy (if you care at all about your data).

With the age of the system, it might make sense to upgrade it just enough for now and then set it on a glide path towards retirement in a year or so. That SSD money could go towards a replacement system.

If you want to be really cheap, check whether what's in there now is a single 2GB stick; if so, hunt down a 1GB stick to pair with it. If it is DDR3, a 2GB stick might even be cheaper than a 1GB one. Those $10 are well spent; the bigger disk cache alone is worth the money.

I wouldn't go through the work of profiling swap usage and the like just to decide whether to spend $10 (though for learning about it, I would).

If you're thinking about upgrading to an SSD, maybe think about upgrading the entire machine instead, or choose a drive that would make sense to carry over to the next one.

$110 to breathe new life into a system isn't too bad compared to the cost of a new notebook with an SSD.

Quote:

That SSD money could go towards a replacement system.

It could, but if the person's happy with the laptop, throwing ~$150 at it (including a 2GB DDR2 SODIMM) could easily extend its lifespan another 2-4 years, depending on what the device is used for.

Laptops able to store six pictures and an MP3 never seem to sell well.

I'd go with double the RAM over a tenth-capacity SSD any day (8 GB and 640 GB, or 4 GB and 64 GB?). Of course, ideally I'd have my silicon both ways: monocrystalline Si for booting the OS, 7,200 RPM SiO2 for my shit. Laptops don't have that luxury.

For most use cases the old Toshiba with an SSD should be better than a new machine with a spinning disk. To get a real benefit, you have to add the $100-$300 for an SSD (depending on size, etc.) to the new machine's cost before you really get something worth the hassle of an upgrade.

Of course it does and it will be slower than the older laptop _with_ SSD.

I replaced the HDD in my 5-year-old laptop with an SSD about 1.5 years ago, and I'm still not thinking about replacing the machine. My impression is that aging hardware gets better utilization with an SSD: the CPU spends considerably less time waiting for the disk to catch up and more time actually working, since it keeps getting fed the data it needs in a much more timely manner. Generally speaking, you don't just get much higher read/write speeds; it's a huge overall I/O boost.

Did you bother to look up the specs on the Toshiba in question? It has a 160GB HDD. The Intel X25-M G2 I suggested is a 160GB SSD. No change in size.

Quote:

Laptops don't have that luxury.

That's a decision made by the manufacturers. Hybrid systems that combine a caching SSD with an HDD offer the best of both worlds. There's little noticeable difference between the all-SSD original Razer Blade and the hybrid setup in the second-generation Razer Blade. It's a shame more laptop makers don't implement it.
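The caching idea itself is simple enough to sketch. Here's a toy model in Python, with a plain LRU policy standing in for whatever real hybrid drives and caching drivers actually do; the class and sizes are made up for illustration:

    from collections import OrderedDict

    class HybridDisk:
        """Toy model: a small, fast SSD cache in front of a big, slow HDD."""
        def __init__(self, cache_blocks=4):
            self.cache = OrderedDict()      # block -> data, kept in LRU order
            self.cache_blocks = cache_blocks

        def read(self, block):
            if block in self.cache:         # SSD hit: the fast path
                self.cache.move_to_end(block)
                return f"ssd:{block}"
            data = f"hdd:{block}"           # miss: pay the spinning-disk cost
            self.cache[block] = data
            if len(self.cache) > self.cache_blocks:
                self.cache.popitem(last=False)  # evict least recently used
            return data

Hot blocks (the OS, applications, recently touched files) end up served at SSD speed while bulk storage stays cheap.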

Of course it does and it will be slower than the older laptop _with_ SSD.

Probably, but then there are the other benefits: a faster CPU, more memory, a brand-new battery, a modern OS...

It depends on what the OP does with her laptop, I guess. I just can't grasp why anyone would want to buy a new computer and let the OS live on an HDD.

As for the battery: one could easily buy a decent aftermarket battery off eBay for $40. I replaced mine two years ago with a double-capacity battery and it still lasts close to six hours. It even tilts the laptop forward a little and makes typing a tad more comfortable.

Quote:

No, no, no! They're on Windows 7. The only OS more modern from MS was built for pre-schoolers and gerfingerpoken.

If they're going to get a new laptop, they may as well wait for touch screen ubiquity before moving to Windows 8.

I don't like it either. Feeling completely clueless about how to operate the display laptops at a store is not a good feeling at all for a would-be new adopter. In my opinion, Windows 8 is a clumsy attempt to shoehorn a tablet interface into what's primarily used as a desktop OS. The marketing baboons who conjured up that idea should be hanged.

Of course it does and it will be slower than the older laptop _with_ SSD.

Probably, but then there are the other benefits: a faster CPU, more memory, a brand-new battery, a modern OS...

So I wanna do my work, but my laptop is a little sluggish. Then I spend $300 to solve this with a new one, instead of $100 for an SSD.

I get less perceived performance despite paying $200 more, am still hindered in doing my work, and have what on top? A new OS with a different UI to get used to. Well, at least the longer battery life wouldn't go to waste.

Whether your suggestion makes sense depends on the use case. If battery life is not a concern and it's a classic office-use machine, replacing it with another $300-class laptop is just a waste of money, because the one big bottleneck stays unaffected.

You don't know this. I installed an SSD in my Power Mac G5 (single processor @ 1.8GHz) and it still transcodes video at the same speed it did with the mechanical disk.

Consumer-grade notebooks are not traditionally used to transcode video. This thread is about someone with a notebook.

Irrelevant. The point is that the task at hand determines what benefit one will see from upgrading any one particular part, be it memory, processor, disk, or something else. We don't know the OP's use case, so we can't say whether an SSD will provide enough gain to warrant the increased cost (and often diminished capacity) that comes with one.

Of course it does and it will be slower than the older laptop _with_ SSD.

You don't know this. I installed an SSD in my Power Mac G5 (single processor @ 1.8GHz) and it still transcodes video at the same speed it did with the mechanical disk.

This doesn't surprise me. Transcoding is a very GPU-intensive task. My guess is that once the HDD filled its buffer in graphics memory, it had no problem keeping that buffer filled, provided the drive was properly defragmented. I wonder whether your result would still hold if you upgraded the GPU and ran the tests with both the SSD and the HDD. If the new GPU were fast enough to eat through the buffer faster than the HDD can fill it, you'd have a classic example of I/O starvation.

In spite of all this, can you at least confirm that your machine cold-starts just about everything much, much faster? Not to mention that it's probably much more responsive under heavy I/O load.

Did you bother to look up the specs on the Toshiba in question? It has a 160GB HDD. The Intel X25-M G2 I suggested is a 160GB SSD. No change in size.

So we spend twice the value of the laptop - a 2 GB Pentium T3400 - on a very expensive (and underperforming) SSD.

How many V8s have you fitted to Lada Rivas lately? After you've refitted the transmission, suspension, steering and chassis, don't you realise you could have just not bothered starting with the Lada Riva?

x264's lookahead is well known to be very fast in OpenCL and the University of Heidelberg kicked off efforts to "GPUise" it with a stonkingly fast motion estimation engine - at a first attempt too.

What limits us here is the coding model used by x264. Since x264 is the only competitive open-source H.264 encoder, its coding model shapes all open research into H.264 encoding. It is unable to track and dispatch more than about 16 threads, with optimal performance at 8 or so.

When Aimar began x264, the best available CPUs were dual-core engineering samples of AMD's early Toledo Opterons (or Intel's shitty P4 with SMT, which he derided as giving a 15-20% penalty in encoding), and he almost certainly didn't have access to those samples.

So we spend twice the value of the laptop - a 2 GB Pentium T3400 - on a very expensive (and underperforming) SSD.

How is $0.56/GB "expensive"? Do you own an X25-M G2? I have two in regular service alongside more modern SSDs, and I honestly can't tell the difference in day-to-day use. Christ, even an Indilinx Barefoot is better than any spinning-metal solution for day-to-day work.

Quote:

How many V8s have you fitted to Lada Rivas lately?

Really? The best you could do was a car analogy?

I've been putting SSDs into 2006-era and later hardware for a while now. It makes the devices significantly more usable than their slow-ass 4200 RPM 2.5" mobile-HDD origins would suggest. I own a Toshiba M305-S4910, a laptop not much older than the OP's: Socket P, chipset graphics, and a slow HDD. Pop in an SSD and the thing's as enjoyable to use as any other system that isn't waiting on spinning metal to switch tasks.

x264's lookahead is well known to be very fast in OpenCL and the University of Heidelberg kicked off efforts to "GPUise" it with a stonkingly fast motion estimation engine - at a first attempt too.

What limits us here is the coding model used by x264. Since x264 is the only competitive open-source H.264 encoder, its coding model shapes all open research into H.264 encoding.

I don't buy this. There are plenty of H.264 encoders out there. If this were just a matter of how x264 works, someone would have licensed another encoder and adapted it to GPGPU. Nvidia certainly has the profit motive to do so. And in fact people have tried this, with predictably terrible results. Bits and pieces of encoding can be done on a GPU, but making 20% of the encode 10x faster doesn't really get you anything, thanks to Amdahl's law. You have to do all or at least most of the bits to get OK performance gains, and that's really, really hard once you look at all the weird things an encoder has to do.
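To put numbers on the Amdahl's law point, here's a quick sketch in Python using the 20%/10x figures from the paragraph above:

    # Amdahl's law: overall speedup when a fraction p of the work
    # runs s times faster.
    def amdahl(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    print(amdahl(0.20, 10))   # ~1.22x overall from a 10x win on 20% of the work
    # Even an infinitely fast GPU for that 20% caps out at 1 / 0.8 = 1.25x.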

Hat Monster wrote:

It is unable to track and dispatch more than about 16 threads, with optimal performance at 8 or so.

When Aimar began x264, the best available CPUs were dual-core engineering samples of AMD's early Toledo Opterons (or Intel's shitty P4 with SMT, which he derided as giving a 15-20% penalty in encoding), and he almost certainly didn't have access to those samples.

If that were really a limitation, you could just divide the encode into a series of parallel encoder instances, each operating on a sequential run of a couple of seconds of video. Video encoding is basically infinitely parallel. But parallelism isn't enough. You have to actually be good at what each thread is doing, and GPUs aren't very good at it.
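A minimal sketch of that divide-and-conquer approach, assuming ffmpeg with libx264 is installed and an input.mp4 of known length; real tools also have to handle GOP boundaries, rate control across chunks, and stitching the pieces back together, all of which is hand-waved here:

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    CHUNK_SECONDS = 2   # each worker encodes a short sequential run of video

    def encode_chunk(job):
        start, index = job
        out = f"chunk_{index:04d}.mp4"
        subprocess.run([
            "ffmpeg", "-y",
            "-ss", str(start), "-t", str(CHUNK_SECONDS),
            "-i", "input.mp4",
            "-c:v", "libx264", "-preset", "medium",
            out,
        ], check=True)
        return out

    if __name__ == "__main__":
        duration = 120   # assume a two-minute source clip
        jobs = [(start, i) for i, start in enumerate(range(0, duration, CHUNK_SECONDS))]
        with ProcessPoolExecutor() as pool:
            chunks = list(pool.map(encode_chunk, jobs))
        # The chunks would then be concatenated (e.g. with ffmpeg's concat
        # demuxer); clean joins at chunk boundaries are the hard part.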

Video encoding is basically infinitely parallel. But parallelism isn't enough. You have to actually be good at what each thread is doing, and GPUs aren't very good at it.

I was following you up until this point. My impression of GPU video encoding to date has been that the GPU coders just haven't put the necessary resources into the effort, leading to poorly coded solutions that are neither reliable nor match the quality of CPU implementations. You'll need to put a little more substance behind this particular claim.

Video encoding is basically infinitely parallel. But parallelism isn't enough. You have to actually be good at what each thread is doing, and GPUs aren't very good at it.

I was following you up until this point.

What's confusing? That taking something that can't be done well on a GPU and splitting it up into a series of threads that can't run well on a GPU doesn't give you good results?

GPUs aren't magic. They're made to process huge amounts of essentially branch-free, streaming floating-point math. H.264, by contrast, basically comes down to evaluating a huge number of very small (8/16-bit) integer values using highly conditional program flow. That is practically the definition of what a CPU does well and what a GPU does poorly. Every branch you hit in a loop knocks out a fraction of the GPU's stream processors. Then there's the integer problem: CUDA doesn't even have packed 16-bit integer ops, so each and every 8-bit add is zero-padded into a full 32-bit add, which itself uses only a fraction of the chip's throughput, since almost everything is aimed at (here useless) floating-point operations. Hence you can parallelize encoding all you want, but a GPU won't run it well.
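To make the "small integers plus conditional flow" point concrete, here is roughly the shape of an encoder's motion-estimation inner loop, sketched in Python/NumPy; the block size and search window are toy-sized, and np.roll stands in for fetching a shifted reference block:

    import numpy as np

    def sad(a, b):
        # Sum of absolute differences over 8-bit pixels,
        # widened to int16 so the subtraction can't wrap around.
        return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

    rng = np.random.default_rng(0)
    current = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # one 16x16 block

    best_cost, best_mv = None, None
    for dy in range(-2, 3):              # tiny search window for illustration
        for dx in range(-2, 3):
            reference = np.roll(current, (dy, dx), axis=(0, 1))
            cost = sad(current, reference)
            if best_cost is None or cost < best_cost:   # data-dependent branch
                best_cost, best_mv = cost, (dx, dy)
    print(best_mv, best_cost)

Tiny 8-bit comparisons and a branch on every candidate: exactly the workload described above.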

AndrewZ wrote:

My impression of GPU video encoding to date has been that the GPU coders just haven't put the necessary resources into the effort, leading to poorly coded solutions that are neither reliable nor match the quality of CPU implementations. You'll need to put a little more substance behind this particular claim.

Conclusions: The unfortunate truth is that, right now, hardware-accelerated video transcoding on the PC is a mess.

Support for black-box encoders is spotty. We saw output quality at the same settings vary wildly depending on the conversion software used. Not only that, but none of the black-box encoders we used matched the quality level of unaccelerated software conversion. Sometimes, the differences were glaring, with the black boxes producing a ton more artifacts and adding ugly jaggies around hard object edges. The only upside, really, is the encoding speed.

Bottom Line: With Badaboom dead, Lousy Spam Product is the best CUDA-compatible encoder of the ones we tested. You can also adjust preset settings in ways Cyberlink and Arcsoft don't allow. Given that presets and automated output were something we wanted to test, however, the program can't be said to rank that well. There's no excuse for its poor software output, and it's also the most expensive product on the list, retailing for $59.95 as opposed to $40 for Arcsoft and Cyberlink.

I have a fair bit of knowledge of image/video compression, having done PSNR studies on DCT and wavelet codecs. And I agree that GPU encoding might not be efficient for 8-bit integers if you are padding out the rest of the 32 bits. But considering that you have literally hundreds of functional units to use, how efficient does each one have to be to still get a speedup? And what are all the different ways you can map a video stream onto the functional units of a GPU? There are lots of ways.
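For reference, the PSNR measure mentioned here is only a few lines. A sketch for 8-bit frames:

    import numpy as np

    def psnr(reference, degraded):
        # PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit video.
        mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")   # identical frames
        return 10.0 * np.log10(255.0 ** 2 / mse)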

First of all, no, that is not my claim, because that would be a really dumb thing to say. You can make transcoding very fast on a GPU: just disable all the hard parts of the encoder and output shitty-quality video. This is basically what current encoders already do (or, in the case of some of the ones you've just linked, they don't use the GPU at all and instead use a dedicated DSP on the GPU die).

Instead, what I said was:

Quote:

Nvidia certainly has the profit motive to do so. And in fact people have tried this, with predictably terrible results. Bits and pieces of encoding can be done on a GPU, but making 20% of the encode 10x faster doesn't really get you anything, thanks to Amdahl's law. You have to do all or at least most of the bits to get OK performance gains, and that's really, really hard once you look at all the weird things an encoder has to do.

My claim, then, was that you'd never get good results at a respectable speedup, due to a fundamental mismatch between what an H.264 encoder must do to get good results and what a GPU can do efficiently.

AndrewZ wrote:

I have a fair bit of knowledge of image/video compression, having done PSNR studies on DCT and wavelet codecs.

But no actual experience relevant to what you are saying... Generally, when one says one has an impression, it's implicit that one has actually looked at something. You don't form an impression without some sort of experience. So you don't really have an impression. What you have here is probably better phrased as "wishful thinking" or, less charitably, as "nothing at all".

AndrewZ wrote:

And I agree that GPU encoding might not be efficient for 8-bit integers if you are padding out the rest of the 32 bits. But considering that you have literally hundreds of functional units to use, how efficient does each one have to be to still get a speedup? And what are all the different ways you can map a video stream onto the functional units of a GPU? There are lots of ways.

Ugh, this is so ridiculous it's actually painful to read.

If this were an English class, you'd be struggling with coloring books. Go read up on how GPUs are actually programmed. The internal model you're working with is basically useless here, and you're not going to have useful ideas until you fix that.

Listen, dude, you were making a pretty good argument rhetorically until this post. No need to go all ad hom here. This is not a hostile post. No one is impugning your expertise. Yet. Let's get back to this:

Quote:

Nvidia certainly has the profit motive to do so. And in fact people have tried this, with predictably terrible results. Bits and pieces of encoding can be done on a GPU, but making 20% of the encode 10x faster doesn't really get you anything, thanks to Amdahl's law.

We know about Amdahl's law. We know that Nvidia has not succeeded, nor has AMD. We know encoding video isn't obviously easy.

Quote:

You have to do all or at least most of the bits to get OK performance gains, and that's really, really hard once you look at all the weird things an encoder has to do.

This is where you get to expound. I, for one, would be interested in hearing you list out some or all of the weird things an encoder has to do. The things that are hard to do on a GPU, in addition to the obvious stuff like:

Yes, I follow Dark Shikari from time to time. And just so you don't have to get all up in my junk about qualifications, I have more. I don't feel that's relevant. We aren't going there. Thank you. Please continue.