
robotsrule asks: "Having been in the computer industry a while, I distinctly remember the pain of the 16-bit to 32-bit transition, when Windows made the change to 32-bit support. Any developer who remembers the joys of thunking and the other kludges that were meant to ease code conversion also remembers the arcane marathon debug sessions. I have not been keeping up with the latest Microsoft Longhorn technical news, or with the Linux community's plans for 64-bit platform support. Does anyone out there have a reliable prediction for the amount of system shock we are facing when either Longhorn or 64-bit Linux comes out? Will I lose all my favorite 32-bit development tools again as I watch backward-compatibility support dry up while the 64-bit O/S platforms are adopted? Or are the O/S manufacturers making happy noises about long-term support for existing development languages and tools?"

Hmmm... care to show us any free-of-charge resources that are better than Google for web searching and high-capacity mail storage? What about really nice mapping features like maps.google.com? Or is your stock portfolio so tied in with your autonomic brain functions that you need to keep shilling for either Yahoo or Microsoft, since they are the ones who are hardcore anti-Google?

In other news... there IS already a 64-bit Fedora. I haven't tried it yet, but I will as soon as I get my dual Opteron system going.

My latest Gentoo install is 64-bit, built from the ground up.
It works great for the most part.
There was no lilo that I saw, but I use grub anyway.
Other than that I'm not missing anything.
I've known people who've run 64-bit in different distros for a couple of months now, and they're all quite happy with it.

I have the same setup, but I cannot say that I'm not missing anything. Although almost everything I use is open source and can therefore be ported and, for the most part, has been ported, there are a few closed programs whose binaries haven't been ported that I miss a little bit.

Macromedia Flash. I know this will be among the first ported when average people actually start using 64-bit CPUs, but at the moment it is very annoying to have to switch to my 32-bit machine once a week to see the new Strong

The 32-bit flash plugin works if you run it in a 32-bit browser program, even on a 64-bit OS. As long as one of your web browsers is 32-bit, you can run flash from within that one. A good solution is to install both mozilla and firefox and make one of them the 32-bit version and the other one 64-bit.

1) Yeah, no Flash.
2) There ARE 64-bit ATI drivers. I am running them on this machine right now. Get your facts straight, buddy.
3) Yeah, it sucks that no 64-bit browser plugins exist yet. I don't know why, though, as 64-bit versions of Java 1.5 exist for both Windows and Linux.

Yeah, I remember there were a few issues here and there about 8 years ago when 64-bit Linux was pretty new on the Alpha. The only issues I've had in the past 3 years have been with one software package (flac) on an Itanium, and with some binary-only packages that I tried to run on an Itanium. 32-bit stuff runs fine on an Itanium; it's getting all of the 32-bit shared libraries and RPM dependencies that was the pain.

I mean, Windows even had a 64-bit release of their OS around 10 years ago. Why a

Well, 64-bit Linux systems have been available for quite a while now. Since the kernel and practically the entire application codebase are available to the public as source code, the transition has been quite painless for end users. 32-bit compatibility libraries have ensured that 32-bit binary programs work almost flawlessly.

OpenBSD has native support for AMD64. It even ships on their CDROM. The ports collection is tested on AMD64. Since OpenBSD ports are installed by compiling, I am inclined to think OpenBSD mostly just works. I have my AMD64 on order, and my main concern is SATA drivers.

Hmm... but this is different. The 16 to 32-bit PC transition didn't require you to go out and buy new hardware.

Huh!? Sure it did. You couldn't run 32 bit code on a 286. In practice, by the time 32 bit became effectively mandatory (Win95), the sheer horsepower requirements pushed an upgrade more strongly than word size. It'll likely be the same this time around.

64-bit mode on AMD abandons the idea of segments. You don't need them to get around the 4GB limit (no need for PAE), and no operating system was using segment protection of memory anyway, relying solely on page protection flags. Everything in 64-bit mode ends up in a known, fixed location of memory (like on old Macs).

How is that different from the current 32-bit mode? If I remember correctly (and Google still serves me right ;) all Linux binaries start at 0x08048000. Windows does the same, just at a different address. What exactly does 64-bit add other than larger word sizes and a few extra registers?

If I remember my history right, it was the 286 that added this mode. Granted, the addressing was only 24-bit, but it tossed out having to split up your memory address across 2 pointer registers (I still curse those damn dat

Actually, segments were in the 8086/8088 chips. With 16-bit registers you could only address 64K. The 8-bit chips did things like register pairs, or for the 6502 family zero page, to create the 16-bit values they needed. The 8088/86 had 16-bit segments, a 16-bit offset, and a 20-bit memory space! What was REALLY NASTY was that you could have several combinations of segment+offset point to the same stinking physical memory location! That is also why a lot of programming languages were limited to 64K.

Eventually some C compilers got around even that. When you compiled, you would tell the compiler what memory model to use. I think your options were usually "tiny" (64K of code and data combined, could be made into a .com file), "small" (64K of code and 64K of data), and "large" (which broke the 64K limits). I think then came "huge", though I can't remember if it broke the data-structure limits or if that ran in protected mode.

Sorry, I misread your post. I thought you were saying the 286 introduced segments. If I remember, the 286 still did not have a flat address space. Can't be sure, since I was programming on the Amiga then. Boy, going back to DOS SUCKED. The 386 was the first Intel chip that was not a freaking nightmare. Even then, the whole protected vs. real mode business was still a minor nightmare.

This was done to be somewhat compatible with 16-bit instructions in 16-bit modes. Later Intel introduced PAE, which extended the paging mechanism to implement a "weird-ass VM mode", so you could have 36 bits of physical address with 32-bit pointers.

...and quite a few userspace apps were broken on Linux/Alpha (I spent quite a bit of time with Linux on EV5).

But not so much because of backwards-compatibility issues as because of bad code, written by bad coders.

The issues you'll run into from genuine differences between the 32-bit and 64-bit development environments will be fairly minor compared to the problems with broken code that just happened to work in 32-bit mode.

This link [yahoo.com] makes it appear that Gates wants the move to be quick. It makes sense, of course, from a business perspective to discontinue 32-bit support as soon as possible and get everyone in the world to upgrade processors. Chances are it won't happen as quickly as he'd like.

amount of system shock we are facing when either Longhorn or 64-bit Linux comes out?

Umm... no offense, but where have you been? 64-bit Linux has been out for a LONG time. Several platforms have a 64-bit kernelspace (sparc64, ppc64, alpha, amd64); some have had a pure 64-bit userspace (alpha), while others have had a mixed 32-bit and 64-bit userspace (sparc, mips, ppc, amd64).

Most open source apps are already ported. Are you really doing things at a low enough level that you have to worry about thunking? You might have bigger problems, then.

All the applications I am using now are 32-bit, in spite of having a 64-bit CPU and a 64-bit OS kernel. However, this is Solaris, so who knows if Microsoft will be as successful.

For people who used the first releases of Solaris 7 (my memory is fuzzy): were there many issues back then? I would think there would be more issues in converting a 32-bit program into a 64-bit one than in running a 32-bit program on a 64-bit kernel.

64-bit Linux has been around for about a decade, since the initial DEC Alpha port. There are at least four 64-bit architectures with Linux support at this point, and it's well tested and debugged.

As for the Windows side, the lessons of the 16->32 conversion were not wasted, abstract types created for that conversion are still in use, and will certainly make the new transition much easier. There will be some bugs that will need to be shaken out, but it's unlikely to be the sort of major effort it was last time.

There was a period of years between 32-bit hitting the market and 32-bit being taken seriously as a development target by the majority of developers.

True, a large part of that was due to MS-DOS being the platform of choice, but the speed with which you need to adapt to the 64-bit environment will be made up for by the relative ease of conversion. We're relatively insulated from the word size of the system, except for the sizes of 'long' and pointers in C, and we won't have to deal with memory managers or extenders -- that's all up to the OS.

Just keep in mind while you program to be flexible and avoid tying yourself to any OS particulars in an unnecessary way. It's a bump in the road, but nowhere near as bad as it used to be.

I expect to see 32-bit support in development tools for years yet. Microsoft's window of support seems to be five or more years for operating systems so you've got at least that much time.

Also keep in mind that the 64K address space limit of 16-bit systems was REALLY tiny. Back then many apps had to play games to get around it. It's one thing for a document to go over 64K, another for it to go over 4 GB.

Linux can be happily 64 bit, and Windows may attempt to be 64 bit, but what are people going to do about Java?

Java is supposed to be platform independent, but the implicit assumption has always been a 32-bit platform of one sort or another. Yes, Java can run on a 64-bit processor, but the int is still 32 bits unless you want to change the behavior of an awful lot of Java code.

So will there be two Javas, or are they going to come up with some kind of clever 64-bit Java extension, or what?

It is the same on x86_64. C ints are 32 bits; just pointers are 64 bits, and your long longs are implemented in hardware so they're faster. There is no need to make the default integer size bigger -- it would just double memory usage for no reason. 64-bit CPUs are designed to do 32-bit math fast too, for this reason.

It's defined that way by the language standard and will always be that way on any platform, past, present, and future. That's why it's platform-neutral: you don't have to deal with ridiculous low-level issues like the size of standard datatypes. All primitive types are fixed by the language standard. These sizes do not change from one machine architecture to another (as they do in most other languages). This is one of the key features that makes Java so portable.

Need more than 2,147,483,647? Try long -- 9,223,372,036,854,775,807. Still not big enough? java.math.BigInteger is arbitrary precision.

Although programming in Java has lost some of its charm for me, I never ever again want to have to program in a language where I don't know from one platform to the next whether or not a particular bit of arithmetic will overflow.

One of the main features of C is interchangeable int and void*. Somehow I think you are better off biting the bullet and making int 64 bits, just like int was promoted when going from 16 to 32 bits. That way you only need to rewrite code where structs require 32-bit fields.

One of the main features of C is interchangeable int and void*.

Huh? Win32 certainly made that assumption, passing pointers around in DWORDs, but there's never been any guarantee of it in the language. I happen to agree that 64-bit platforms should have made int 64 bits, but in terms of "biting the bullet", Alpha, SPARC64 and all of the other platforms mentioned elsewhere here have already gotten most of the job done for the *NIX world. If you're on Windows, there's probably more of a problem, but that ha

1. Yoda said that, not Spock.
2. Dr. Spock was an idiotic moron who wrote books about childcare in the 40s, 50s and 60s.
3. Spock on Star Trek was never a Dr. (PhD or MD).
4. Given that the first 3 points make your sig whacked out, where the heck did the stardate come from?

...forgotten, perhaps, regarding Windows since the Microsoft / DEC Alliance days. But I've been running NetBSD's pkgsrc [netbsd.org] on a fully 64-bit OS [netbsd.org] for many years now (not to mention some [sun.com] others [hp.com]). In the OSS world, at least, 64 bit issues have been addressed for some time now.

There is the occasional badly behaved audio or video application, coded originally on 32-bit x86 Linux, that must be hammered into shape. But it happens quickly enough that my Alpha is, and has been for years, a fully modern 64-bit desktop.

No, it's not new. This of course raises the question: "When are the 128-bit processors going to hit the streets?"

I am only being partially facetious there. With all of the attention on media processing these days, it may make sense to throw 16 bytes around at a time instead of 8. In fact, aren't many vector processors and GPUs structured around 128-bit words already?

I had my asm class in college on the MC68000, and remember that it had 16-bit control and data buses (with 32-bit registers!) and a 24-bit address bus. Since 2^16 can only address 64K, and 4GB would have been way overkill in those days, I guess 24 bits was somehow logical. And I too remember plenty of warnings about getting "32-bit clean".

Should we start a daily poll on the dumbest question to hit Ask Slashdot in a 24-hour period?

This one is sure to win it.

Been running a 64-bit dual-proc AMD Linux box for about a year. Been running a 64-bit AMD Win 2K3 Server for about 5 months. Been running 64-bit Sparc Linux for about 2 years. (Personally, that is; all of these were out long before I got to them.)

Here is the big difference: if you remember the 16-to-32-bit port, most of the problems I saw were due to memory protection and dealing with ring transitions. We have already solved these problems, so the port to 64 bits is pretty painless.

I do not understand 64 bit as much as I would like. Here is where I get lost. I would have thought a 64 bit chip accesses 2^64 words which are 64 bits each. Why do they say it accesses 2^24 bytes only?

That must be in real mode, where the 8086 from 1981(?) is emulated. All x86s boot into this mode even today... and waste our time, money, and hardware/BIOS developers' nerves. In real mode you only have 16-bit opcodes and registers but needed to be able to address more than just 64kB of RAM, so they introduced a way (segment and offset) to address 20 bits (like the whole processor, this was a quick hack to bring a 16-bit version of the 8080 to market as

40 bits physical: the system can actually use 2^40 bytes (a terabyte) of RAM.
48 bits virtual: the system may use 64-bit pointers, but only the low 48 bits are significant; the top 16 bits must be a sign-extension of bit 47.

(Virtual memory addresses are translated into physical addresses by page tables... swapping and memory mapped files and all sorts of fun stuff is done with these tricks that make you look like you have more "memory" than is actually there)

This is important because the page tables that translate virtual addresses into physical addresses are going

Well, it's one of those tradeoff things. Either you make your page tables hold fewer entries per page to cover the additional address bits, or you aim for a reasonable maximum, with the knowledge that eventually you'll release a version 2 with more virtual address space and a new page-table layout.

The pain we experienced going from 16-bit to 32-bit Windows had nothing to do with bitness. Despite having the same name and a big feature overlap, these were actually two different OSs. The 16-bit OS evolved out of DOS, a nasty, buggy and incomplete OS designed by people who didn't even understand what an OS was supposed to do. The Windows layer didn't just provide GUI services, it kludged in basic OS functionality, like pre-emptive multitasking [wikipedia.org]. By contrast, 32-bit Windows was written from scratch by OS geniuses who had previously worked on VMS [wikipedia.org]. They did their best to provide backward compatibility, but there's a limit to what you can do about that without screwing up the new OS.

Porting 16-bit Windows applications to 32-bit Windows is somewhat comparable to running Windows applications under Linux using WINE. In both cases, you're going to a new OS and relying on a compatibility layer.

A 32-bit Windows application running under 64-bit Windows just won't face these issues. There will be some 64-bit features it won't be able to use, that's all.

Well, I know that 99% of 32-bit apps run just fine under 64-bit Solaris, AIX & HP-UX (the remainder are ones which address the kernel directly, like top or lsof). Provided the hardware and the OS are designed correctly (i.e. not like the pain of DOS->Win32), there's no (OK, much less) problem in migrating.

Win95 was a 16/32-bit hybrid/bastardization. It ran both 16- and 32-bit apps natively (and you could thunk calls between them). The NT-based OSs were always 32-bit (or more now), and emulate a 16-bit layer when needed (the Windows on Windows, or WOW, subsystem). I believe WinME was the last of those bastardizations.

Windows 95 was not a 32-bit OS. It was just an incremental revision of Windows 3.1. You're thinking of WIN32S [wikipedia.org], a compatibility layer which kludged some 32-bit functionality onto Windows 3.1 and later. WIN32S came pre-installed on Win95 and later, but it wasn't an integral part of the OS.

I think the questioner is referring to the MS C++ and VB compatibility problems that happened when DLLs were 16-bit on 32-bit Windows.

Well, if you're still using low-level code, like Win32 constructs and other Windows C++-specific data types, you may indeed be faced with work to do. I remember that passing arguments from 16-bit OLE interfaces into 32-bit C++ EXEs was troublesome. In this switch, however, the code should run fairly well.

This is just stupid. We exhausted the 16-bit address space in the era of the Osborne and the Apple Poo. Ten years later we experienced a painful "transition" to 32 bits (after completely exhausting kludge space). The present situation is that high-end machines can make good use of a 64-bit address space in the kernel, but 99.9% of userland processes could remain 32-bit for a long time yet. The rare exceptions, such as database servers, have been 64-bit clean since before the Alpha was first invented.

Sure, let's compare a transition that took place ten years after the pain was universal to one that took place quietly, ten years before most people realized that a 32-bit virtual address space could be exhausted with far less physical memory, thanks to mechanisms such as mmap.

The main problem with the 16-to-32-bit transition was the dreaded segmentation; the code had to be moved to a flat memory model. (The first thing the compiler people did was expand one segment to the full memory size and get rid of segmentation altogether, which Intel had wanted to carry over into the 32-bit world -- speaking of stubborn and shortsighted.) A lot of that code was pure assembler, and the other problem was that lots of programmers used the good ole trick of letting values wrap past their number boundary to zero.

Well, given that segmentation was just a plain and dirty hack to spare pins on the 8088 so that they didn't have to go full 16-bit, it has survived for a long time.
Btw, in terms of instruction set and number of registers, Intel's is still one of the worst processor architectures there is.
As for the other processors: no, the 650x series was 8-bit and the early 68000s to my knowledge 16-bit, but none of them had segments, and with only a handful of special-purpose registers they all went from the beginning with a flat address space.

Who lets these crackheads post stories? Linux has been running native 64-bit on several platforms for years, and years, and years. Hell even in the x86 world, I've got ~9,000 Opteron CPUs chugging under the power of Linux in 64-bit mode at work, and they're just trashy lease boxes.

You may have been running 64-bit Linux for a little while on x86, but you strike me as a guy who has never seen the joys of real mode vs. protected mode. You should Google up some of the angst-filled rants from programmers who had to deal with it back in the day. Some of that old code is just crazy.

Note: my memory may serve me wrong; the following could contain errors.

The difference between real and protected mode that was alien to developers wasn't so much about 16 vs. 32 bits as about the way memory was addressed. In real mode, memory is addressed

... I'm already running 64-bit linux on both server and desktop environments (server for ~5 months and desktop for ~2) and loving it, thank you very much.
They sure compile applications fast. I couldn't be happier.

Does anyone out there have a reliable prediction for the amount of system shock we are facing when either Longhorn or 64-bit Linux comes out?

Now, 64 bit Linux has been here for a long time, and since most drivers are open source, the port is complete. There will be no shock, no pain. It's just that for optimal performance you'll want to recompile more applications than on 32 bit.

Windows is Not Quite There, though I have a 60-day trial on my desk. Here drivers are mostly closed source, which causes problems.

What I worry about in the transition is lazy/bad programmers building ever bigger, more bloated programs because they have all that memory space to work with. Give them a few years, and just installing an IM client will require 4 GB of memory and a 200 GB hard drive.

I worked on 64-bit OSF/1 from DEC back in 1994, when MS was still fannying around with 16-bit, and I suspect other developers worked on 64-bit (or larger) systems long before that. In the big boys' world this is nothing new, and any Unix coder worth their salt should have been taking 64-bit into account for the last decade anyway.

Porting from 32 to 64 bits isn't always smooth, thanks to careless use of int and long int types and other inconsistencies in many applications. Take a look at OpenOffice: the 2.0 release is rumored to be 64-bit friendly, but so far no beta versions have been compiled 64-bit.

Now add to this a processor that can go both ways without rebooting: TWO environments running at the same time (though the OS is strictly 64-bit). Do we have dual libraries or chroot? It's a LOT easier to have only a native 64-bit environment

UNIX, and by logical extension freenix, has always been years - if not decades - ahead of gear that Joe User can buy on his salary. Anybody who thinks freenix has any sort of "catching up" or "adapting" to do to achieve 64-bit performance is obviously highly ignorant of computing history. :|

The big problems transitioning from 16-bit to 32-bit were all the kludges around 16-bit limitations: near vs. far pointers, overlays, and other memory-related tricks. We don't have those kinds of problems any more. Low-level kernel developers and those writing Big Honkin' Databases(tm) will need to worry about the transition, but most of us in userland can sleep easily.

If all of your memory needs can fit within a 32bit address space, you've got nothing to worry about.

I upgraded to an Asus A8V Deluxe with an AMD64 3000+ processor last week. The AMD64/true64 port of Debian is not an official part of Debian (it's waiting until after the release to be added). It took a little googling to find an install ISO and a good amd64 mirror, but it was frankly boring how easily it installed. It's just another Debian install, nothing special, and because it's Debian you only need to install it once.

As for hardware, everything worked out of the box without fooling around including

Platform != architecture. The cost of porting from Windows to Linux is changing the widgets and the libraries, not the bit conversion. Pretty much any open source app works on loads of different architectures. It's not hard to fix code to do that.