Re: I'd like to hear more about

It depends on your perspective.

If you are the person planning the change in a hurry, or the management committing the resources to make the change, then an over-rigorous, time-consuming process is the last thing you want added to your work or costs, so you do your best to short-circuit the process to make the change happen faster and cost less.

If you are the risk manager, who is on the line if changes cause problems, then you want as much process around you as possible to protect your own position (and, to a lesser extent, the organisation you work for), and then a bit more. You would feel most secure if there were no change at all (which is, of course, counter-productive).

If you are a diligent IT professional, then you want the *right* amount of change management to make sure that the change has been carefully considered, and has a good backout plan, but not so much as to make planning the change more difficult than it has to be.

It is this balance that is missing. You see the pendulum swing between cost-reduction and risk management according to the current trends in risk appetite and management style, and the most recent disaster. And nowadays it is always the people who understand it least who dictate the processes.

If you are in a large organisation involved in change management, take a change and estimate how much the change costs in people and financial terms. Look at the time necessary to cross the 't's and dot the 'i's. Count the number of people involved. Look at the number of people who have to read and understand the change. Add up the people-hours spent sitting in the change board meetings.

In places like a bank, you can often find that a change to switch servers from one DNS or time server to another (simple, but with a potentially high risk and impact if it goes wrong), which may take only minutes to perform, ends up costing dozens of person-hours (or even person-days), involves people on quite high salaries, and takes days or weeks to drive through the process. All of these things cost money, one way or another.
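To make the arithmetic concrete, here is a back-of-envelope sketch in Python. Every figure in it is an invented assumption for illustration, not data from any real change:

```python
# Hypothetical cost of a "simple" DNS server change under heavy process.
# All hours, headcounts and the rate are illustrative assumptions.
steps = {
    "write change record":      (4,    1),  # (hours, people involved)
    "peer review":              (2,    2),
    "risk assessment":          (3,    1),
    "change board meeting":     (1,    8),  # eight attendees for an hour
    "implementation":           (0.25, 1),  # the actual minutes of work
    "post-change verification": (1,    1),
}
hourly_rate = 60  # assumed blended rate in GBP

total_hours = sum(hours * people for hours, people in steps.values())
total_cost = total_hours * hourly_rate
print(f"{total_hours} person-hours, about GBP {total_cost:.0f}")
```

Note how the fifteen minutes of real work is dwarfed by everything wrapped around it.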

Re: You've never needed a password to install malware on a Mac

Apart from those rare systems that really do run Java in a sandbox, user files on *ANY* platform will be vulnerable to this type of attack. The OS itself, however, shouldn't be.

What is worrying in this article is the issue of it installing a rootkit on MacOS. I'm not sure whether I am talking about the same thing, but I define a rootkit as something that gains privileged access, and then alters the OS start-up process so that it will have running privileged components that will monitor whether the rootkit is removed from the system disk, at which point it will re-infect it.

The operative word here is "privileged". It implies that there is something that will cross the privilege barrier, which requires an OS security weakness or vulnerability. Of course, I could have the MacOS security model all wrong, but I thought MacOS was relatively robust. If it is a user-mode rootkit (is there such a thing - a process kicked off in user-land during the user's start-up, but not running as a privileged user) then I might be able to understand it.

The very nature of the x86 architecture, with its requirement to remain backward compatible with its 16-bit forebears, and its Complex Instruction Set (CISC), are the significant problems Intel has, and up until about seven years ago, computing power mattered more to them than power consumption.

The ARM was originally designed to be a very simple 32 bit processor from the outset, with a low transistor budget. Even though modern ARM processors are much more complex, the design ethos prevails. Low power consumption was actually a useful side effect.

Intel would probably very much like to discard the legacy components of the x86 design, but it's a problem, because backward compatibility is seen by most of their customers as the main strength of the processor line, as Intel found out with the Itanium, i860 and i960 lines of processors.

Re: Laws, sausages, vinyl records

Do you know it was a CD?

Could it not have been a lossless audio file at some stupidly high bit rate, stored on a writable DVD or Blu-ray disc?

Studio master tapes have been digital for about 30 years (I remember Sky 2 coming out, and being proclaimed as one of the first records to be digitally recorded, mixed and mastered). Once mixed in a digital mixer, it is quite possible that the output may be put on a DVD.

Re: Oh dear, not this again

Nigel11. Well, it is quite true that as long as the cables deliver a clean digital signal, you need nothing better.

But even digital signals are affected by analogue issues.

A digital signal is something approximating a square wave (obviously not a pure square wave). But when you transmit it down a wire, the capacitance and inductance of the cable take their toll on those nice clean leading and trailing edges: the cable acts as a low-pass filter, rounding off the square corners.

When you receive a digital signal, especially an asynchronous one, you rely on it crossing the high/low thresholds within a suitable time. Thus a really bad cable, which causes overall signal loss and excessive rounding of the edges, can produce multiple single-bit errors because the signal does not reach the threshold in time.

No matter, you say: all digital signals are transmitted with error correction. True, but invoking the error-correction algorithm takes time (even when done in hardware), and it may not reconstruct the packet correctly if there are too many errors. What to do then? Well, most systems faced with missing data in real time will repeat the last packet's data, which is clearly unacceptable.
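The edge-rounding argument can be illustrated with a toy model. This is a sketch assuming the cable behaves as a first-order low-pass filter, with arbitrary filter constants; it is not a simulation of any real link:

```python
# Toy model: a cable rounds off the edges of a digital signal. With too low
# a bandwidth (a bad cable), the level no longer crosses the logic
# threshold in time and bits are decoded wrongly.

def transmit(bits, samples_per_bit=10, alpha=0.5):
    """First-order low-pass: y += alpha * (x - y) per sample."""
    y, out = 0.0, []
    for b in bits:
        for _ in range(samples_per_bit):
            y += alpha * (b - y)
        out.append(y)  # level seen at the receiver's sample point
    return out

def receive(levels, threshold=0.5):
    return [1 if v > threshold else 0 for v in levels]

bits = [0, 1, 0, 1, 1, 0]
good = receive(transmit(bits, alpha=0.5))   # fast cable: edges settle in time
bad = receive(transmit(bits, alpha=0.05))   # slow cable: edges arrive late
print(good == bits, bad == bits)            # True False
```

With the slow "cable", the first isolated 1 never makes it above the threshold before the sample point, exactly the single-bit errors described above.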

I'm not saying that this happens frequently, but be aware that it can happen, and cannot be totally ignored.

@Tim99 - my sympathies

Re: Refreshing ...

I think there is a basic problem with the current generation of music listeners.

I'm perfectly aware that music is subjective, and that many people may think they are happy with current heavily mixed, sound-processed 'music' played through systems that are not ideal. This is made worse by the number of people who are unused to hearing music on anything other than earbuds, headphones or computer speakers. They just don't know any better!

But what does the current generation (or in fact anybody who learned to listen to music on anything after a Sony Walkman in the early '80s) have to compare their listening with? As a previous poster has commented, modern music is rarely heard 'raw', even in a concert. It's all processed, mixed and amplified, so that what is heard is what the producer or sound engineer wants to be heard.

There is not enough live acoustic music available for people nowadays to actually have a reference to compare with. My modest audio system has cost me no more than about £600 over the 30 years I've collected and maintained it. I'm aware that the transistors and capacitors are ageing, leading to more background hiss, and that the paper cones of the speakers are probably not as stiff as they used to be, but it is still quite good enough for my children's friends to listen in awe to good music played in a conducive environment on a good budget system (almost all of the components in my system at one time or another got 'best buy' awards in annual roundups of reviews in the HiFi press).

I play acoustic guitar, and they can hear how close to a real guitar John Martyn's Solid Air (on vinyl) can sound. The same goes for orchestral and choral works, where they can hear the individual sections separate out across the soundstage. They may not care for the music, but they can hear a difference from what they are used to. And this also extends to their own music (mainly CDs - none of them have vinyl!) played on my system.

So I am quite prepared to go along with beauty is in the ears of the beholder, but it's a shame so many of those ears are uneducated.

Re: Tape this !

If you think that modern LTO tapes are like the C15s that you used on your Trash80, then you've got no right to comment on this story.

Massive tape libraries with well designed data management systems are fine for backing up or archiving large amounts of data. The only problem is that enterprise grade tape media is still too expensive (but still cheaper TB for TB than disk). Put in a data-management system with recent data stored in modest sized disk pools, and migrated to the massive tape pools as it ages. Index the data so that you know which tape it is on, and you can retrieve it remarkably rapidly, and with relatively little effort.
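As a minimal sketch of the kind of index such a data-management system keeps (names, dates and the 90-day threshold are all invented for illustration, not any real product's scheme):

```python
from datetime import date, timedelta

# object name -> (tier, location): recent data stays in the disk pool,
# older data is migrated to tape, and the index records where it went
# so retrieval stays fast.
index = {}

def store(name, when, today=date(2013, 1, 1), disk_days=90):
    """File an object in the disk pool if recent, else on a tape."""
    if (today - when) <= timedelta(days=disk_days):
        index[name] = ("disk", "diskpool01")
    else:
        index[name] = ("tape", f"LTO-{len(index):04d}")  # invented tape label

store("payroll-2012-12", date(2012, 12, 20))
store("payroll-2010-06", date(2010, 6, 30))
print(index["payroll-2012-12"], index["payroll-2010-06"])
```

A real system would of course also track migration, tape ejection and offsite copies, but the principle - look the object up, go straight to the right tape - is the same.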

And if the data falls in a category where it is no longer required to be accessed quickly, you can actually remove the tapes from the library to make space for more. And you could easily store a replica of your data in an offsite store in case of disaster (try seeing what the cost of having Petabytes of data in 'the cloud' is).

No, tape is still useful. Just be careful that you keep the drives to read it working!

Re: I forgot to mention

I've used that feature for many years. It's not news to me, nor does it alter anything I've said.

When my kids were younger and we shared PCs, I gave them all normal user IDs, kept the admin login to myself for infrequent use (I also used an ordinary account for my own normal work), and created another administrator account to be used with runas, which I then made unable to log in directly through a registry hack. I gave my kids the password for the runas account for applications that were stupid enough to need administrator privilege to run. This worked fine for everything until I came across the game Blockland, which actually needed to be run from a logged-in administrator account.
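For the curious, one well-known hack of this sort on XP-era Windows hides an account from the Welcome screen via the SpecialAccounts\UserList key. This is a sketch of the general technique only - the account name here is hypothetical, and my original hack may have differed:

```
Windows Registry Editor Version 5.00

; Hide the hypothetical "runasadmin" account from the XP Welcome screen,
; leaving it usable only via runas, e.g.:
;   runas /user:MACHINE\runasadmin "C:\Games\SomeGame.exe"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"runasadmin"=dword:00000000
```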

It did not take long for my kids to realise that they could run almost anything under the runas admin account, but what the setup did achieve was to keep browsing and mail, the most likely routes to compromise, running under their unprivileged default accounts.

I've never said that the security model of Windows NT based OSs is weak. In fact, on these forums, I've actually said that it is probably better than the default UNIX model. What I have said, though, is that it is set up on ordinary systems in a generally flawed manner, and this is compounded by application writers creating programs that need administrator rights to access certain parts of the filesystem needed by the application, but this is another story.

Re: I forgot to mention

There is a distinction between an administrator account, an account that can run commands through something like UAC, and one that can log in but cannot even use UAC.

Up to and including XP, most default users on Windows were in the first category. From Windows Vista onwards, the default is the second category, as it is on most Linux distributions. But it is possible to configure Linux users into the third category (i.e. they are not allowed to run anything using sudo or its ilk). Most UNIX systems are configured like this, and ordinary users have no ability to do anything damaging to the OS unless there is an actual defect in the security system (and note I am not saying that there are no defects in any OS).
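As a sketch of the second and third categories on a Linux/UNIX box (usernames invented; a real file would be edited with visudo):

```
# /etc/sudoers fragment - a sketch, usernames are hypothetical.

# Category two: "bob" may run privileged commands after authenticating.
bob    ALL=(ALL) ALL

# Category three: "alice" appears nowhere in this file (and is in no
# group granted sudo), so she can log in but cannot escalate at all -
# any attempt just gets "alice is not in the sudoers file."
```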

I find it funny that UNIX, the oldest of the OSes mentioned, is the one that implements the least-risk model. It just shows that people don't learn from history.

Re: @Mr Torx

Linux, by its very nature, is open to inspection by anybody who wants to. Whether this is done is a moot point, but at least you can do it. Previous Linux exploits (like buffer overruns) have certainly been discovered before being found in the wild (you can tell these because they are normally published as 'potential' buffer overruns). Windows does not have this level of openness, so although there are more systems to attack, there is less chance of spotting an exploit before it is actually used (which is why zero-day exploits are so damaging to Windows).

Autorun is another matter entirely. If the underlying OS were secure, and the default user not privileged, it would be relatively safe (although personal information would still be exposed even without privilege). But Windows has a reputation for being unsafe, and certainly in XP and earlier most systems were configured so that the default user was an administrator. This makes autorun almost suicidal if users put untrusted media into their systems. It does not take a genius to see this.

Users on Linux and other UNIX-like operating systems can still be affected without privilege (I can think of several ways to add key-loggers to sessions on systems running X-Windows, for example), but in general, this is likely to affect the user and only that user, and the underlying OS and other users will be safe (significant, but less so if a Linux system is 'personal', i.e. only one user ever uses it - this is the problem Android has).

Because many users of commodity OSs do not really understand the differences in security models and practice between different OSs, I see many challenges to Linux that are unfounded, and that really should never be voiced if the person doing the challenging knew better. I judge this to be one of them.

Re: @Peter

Whilst I agree with you about the personality differences between The Major in the original film, and the entirety of the rest of the franchise, my personal feeling is that they should not be regarded as different timelines. The difference could be down as much to the different English voice actresses and animators as anything else. I wish I could understand Japanese so that I could judge whether this is the case in the original soundtracks.

I have heard other people talk about this, but I can see nothing in the arc of SAC that would conflict with the original film happening later in the same timeline, and it is quite clear that what happens in the original film is necessary for the two following films, including SSS, which most people think is in the SAC line. I don't want to go into details, because some people here may not have watched them, and I would not wish to colour their experience (except to say you must watch them, especially if you think animation is just for kids - careful of the 1st Gig episode "Jungle Cruise", though).

And I believe that if you watch the retellings of "The Laughing Man" and "The Individual Eleven", there has been some reworking of the SAC animation (new/different scenes) and some re-voicing of the characters (I was quite shocked at the differences in the bits I have seen). Both my daughter and I think these later repackagings are significantly worse than the originals. They again change the feel of the series.

One thing I always find strange is the way anime series are often re-cut to create a film, but the re-cutting, or re-imagining, or whatever you want to call it, results in a completely different telling of what looks like the same story. This started a long time ago; the earliest I remember noticing it was with the Macross series and "Macross - Do You Remember Love". Wherever I can, I try to watch the original series in preference to the films, but that is entirely dependent on what is dubbed into English. This is why I sometimes resort to fansubs on P2P sites, as I don't speak Japanese (did I say that already?).

Re: DAT OST

Yoko Kanno, aka Gabriela Robin. Sometimes, music is credited to Yoko, and words to Gabriela. An elaborate ruse.

BTW, you can see her in videos performing with The Seatbelts and also in some of the Macross Frontier concerts on YouTube. She is generally seen behind a keyboard.

Other anime she has written music for includes Cowboy Bebop, Macross Plus, The Vision of Escaflowne and Wolf's Rain (and more - these are just what come to mind). Some of the best-regarded anime series, and a wide variety of music, from full orchestral film scores to jazz, rock, techno, something akin to new-age, and J-pop.

Re: I found this film a bit disappointing, though ultimately enjoyable.

I found this film a bit disappointing, though ultimately enjoyable.

The problem is that you can't really understand the entire story line unless you watch it multiple times, and spot the various 'bodies' that the Major may be using throughout the film. And when you have worked this out, you are left with the final enigma of whether the superficial conclusion is actually what is meant (this is probably intentional, but it ultimately leaves you with a feeling of a lack of closure).

In many ways, the story expands on the question raised in the original 1995 film about identity and self, and as such is quite thought-provoking. This is the basic theme behind the whole story arc, and is also explored by the ultimately tragic Tachikoma story (in SAC) as well.

BTW. On the timeline business, SAC 1st and 2nd Gig must be set before the original film, bearing in mind what happens at the end of that film. Innocence follows the first film after a gap of many months, and SSS is over a year later than the original film. So it is axiomatic that Togusa is in GITS (and he is).

I don't really believe that the stories in the original film, SAC and SSS are incompatible with each other. I would definitely suggest that you need to be familiar with some (or even all) of the other films and series before embarking on this one.

The various GITS soundtracks are permanently loaded in my 'phone, and are probably listened to more than anything else in there. Brilliant writing, and the choice and quality of the artists is superb. Yoko Kanno (primary music for everything apart from the original film) needs more exposure outside of Japan and the Anime scene.

Re: Boot Loader Locking

Yes, but Microsoft will play up the security side of what this does, pointing out the exposure of all systems without it, how sophisticated exploit writers are becoming, and how little ordinary users understand about managing their systems (lots of stats about people who install firewalls or UAC and then ignore them).

Their view is that the collateral damage to other OSs (which aren't really important anyway, in MS's view) is just unfortunate, and will only affect commodity systems, as specialist systems will be run by specialists who will not be using the type of hardware they are suggesting.

Quite honestly, it's only been a matter of time before this happened. Ross Anderson had it right all along.

It is a problem, but one that we will get around, either by ignoring Windows 8 on tablets or related devices, or by finding some way to break it. I favour making sure that hardware vendors are not penalised for selling systems that do not have Windows 8 on them (by legislation, if necessary), and then letting the market sort itself out. Discounts to vendors for *ONLY* installing Windows on their products should be illegal, and would eliminate this problem immediately.

Re: The Big Lie 2.0

But you don't say the 'first UNIX based phone', you qualify it with 'successful' and 'usable'. Thus it was not the first, so cannot claim patent or copyright. A failed product can still be prior art.

And I could be a pedant over your use of UNIX, and also ask why a smart phone needs to be running a UNIX like OS (think PalmOS, Nokia Communicator or Windows Mobile devices for other devices that were clearly smart before the iPhone). Apple produced a good product, but not one that was especially innovative.

Re: IBM

Re: I don't get it?

If the OP was an exception, then I must be really rare. Not only do I enjoy my job, it actually partly conditions my life as well.

I will often come home from fixing the work computers (with the associated buzz of a job well done) and open the laptop (or my new Android tablet), and spend time using computers to do other things, including reading about tech.

And before you ask, I am married, and have children. They might get annoyed by the amount of time I spend with computers, but as I was doing this before the family came along, they accept it.

But I know that things are changing. I just hope that my skillset remains sufficiently in demand that I can reach retirement before I struggle to find work. Only 14 years to go, unless they raise the retirement age again.

Re: overclocked CPUs are more likely to make a Windows PC crash

Well, not strictly true.

Most CPUs are designed to run at a certain speed. When a particular member of a chip family is first spun, chances are that only a small percentage of the silicon will run reliably at the full design speed, but many more will run at a fraction of that speed. So those are marked with the slower speed and sold as slower chips. But they were still designed to run at the higher speed.

Manufacturers put pretty much every CPU through some testing, starting at lower speeds and increasing until the chip fails to execute something correctly. They then stamp the chip with the last speed that worked successfully, and move on.
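That binning process can be sketched in a few lines of Python; the speed grades and the notion of a single "maximum reliable frequency" per die are simplifying assumptions for illustration:

```python
# Toy model of speed-binning: step the clock up through the grades on
# offer and stamp the chip with the last speed that passed.
SPEED_GRADES = [1600, 1800, 2000, 2200]  # MHz bins on offer (invented)

def bin_chip(max_reliable_mhz):
    """Return the speed grade the die would be sold at, or None if scrapped."""
    rated = None
    for grade in SPEED_GRADES:
        if grade <= max_reliable_mhz:
            rated = grade  # chip still executed correctly at this speed
        else:
            break          # first failure: stop testing, keep the last pass
    return rated

print(bin_chip(2100), bin_chip(2300), bin_chip(1500))  # 2000 2200 None
```

Silicon good to 2100MHz is sold as a 2000MHz part, which is exactly the headroom overclockers go hunting for.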

What overclockers reason is that when a chip runs above its tested and rated speed, the cause of failure is probably heat, so they put a better heatsink on the chip, ramp the speed up above the rated value until it fails, and run it at the highest speed at which it functioned correctly. The better the cooling, the higher the clock speed you can run at (which is why some HPC systems have direct water cooling of the CPUs, and why people like Amari [I believe] used to sell an actively refrigerated PC at one time).

Unfortunately, another aspect of heat damage is that it can be cumulative. This is, I believe, what Microsoft are trying to say. This aspect has a name: 'cooking' the CPU. Once you've cooked it, the chances of it running reliably at the same clock speed (or even at its rated speed) are seriously reduced.

The most obvious case of this I saw was with Thoroughbred AMD Athlon XP 2600s (the highest-speed Thoroughbred cores, with a 133MHz FSB; faster Athlon XPs were Barton cores with a 166MHz FSB). These were actually clocked, via a multiplier, at something like 2.06GHz, but over time, even if you did not overclock them, they stopped performing at their rated speed. You had to gradually step the speed down to keep the PC stable. Replace the CPU, and you were back up to full speed, at least for a few months. I went through three or four before I realised what was going on, and this happened even with over-specced heatsinks and fans.

Damn

Re: Problematic updates are normal?

Don't know about RBS, but I've worked in other places in Banking, Government agencies and the Utility Sector.

Most large organizations will not authorize a change unless there is a fully specified back-out plan, together with evidence that the change to the live system has been tested somewhere safe first.

In some places I've been, the risk managers have wanted a "how to recover the service should the back-out plan fail" plan.

The RBS example is evidence of exactly why you have this level of paranoia, and why you spend more time writing up the change than the change itself takes, and why you sit in Change Boards convincing everybody that the change is safe.

Unfortunately, I'm sure that many of us here have complained about how much the process costs, how much time is wasted, and how quickly we could work if we didn't have this level of change control. I learned my lesson the hard way many years ago, and now follow whatever the processes are without complaining.

Maybe the higher management will learn some lessons from this as well. But I somehow doubt it.

@sugerbear

It has not always been like this. I've been working in Data Processing (remember that?) for over 30 years, and there was a time when best practice, BS5750 and its follow-ups like ISO9001 were actually valued. But this was back in the days when computers were expensive, and it was seen as worthwhile to invest in people and process to get the maximum value from your high outlay.

Of course, everybody bitched about having to write the documentation, but at least the management bought in to the overall need for it, and factored time into the project plans, because these standards said it had to be done. Sometimes the docs were junk, but often they contained useful information. And the more documentation you wrote, the better at writing it you became.

Nowadays it's all about trimming the fat, over and over again, and if the managers complain, they get trimmed themselves and replaced by others who are happy to comply. This means that the barest minimum is done to get a service kicked over the wall to support, the support teams have no way of pushing back against a poor service, and then this happens...

I'm now seen as a boring old fart, locked in the past, so I'll just go and get my Snorkel Parka and go.

It is possible to have TV that is 'good enough'

Once people have replaced all their CRTs and small LCDs, they will stick with what they have until it breaks, and the market will reach saturation.

Once a technology matures (and this applies to any technology) to the point where further improvements no longer enhance the perceived customer experience, sales become driven only by replacing broken instances of that technology. I think we can see this in the dip in computer sales, which will be echoed in laptops and tablets over the coming years. TVs have just had a longer journey, although if you look at LCD TVs, that chapter has been quite short.

I personally can't wait for this time to happen, because we just can't continue making new things with short lifetimes. Will break Capitalism, though!

Re: What's in a name?

Most Polytechnics had a requirement to be more business and industry focused than Universities. Normally, where there was a University and a Polytechnic in the same city, there was a requirement from the syllabus authority that the two establishments put different emphasis on what looked like similar courses. This is why a lot of Polys had courses like 'Business Computing' rather than a pure Computer Science course.

When a lot of these courses were originally designed in the 1980s, business computing was based around COBOL, the most commonly used business language at the time (RPG was also common, but I would not inflict that on any student as their primary language!), although there were several BASIC-orientated business systems (DEC RSTS and Pick spring to mind).

For schools, BBC BASIC was a brilliant choice, because it was structured enough to satisfy most programming purists at a fundamental level (OK, while loops were missing, and complex data structures were a bit difficult), it was fast enough even on modest hardware to do quite impressive looking things to encourage staff and students to try ever more complex tasks, and it was accessible to people with very little previous knowledge.

It also encouraged teachers to learn some programming themselves to help teach their non-computing subjects (because it was relatively easy), rather than as just a support for computing related courses. Currently, teachers have no incentive to learn any programming at all because the initial learning curve is too steep.

I believe there is absolutely nothing that I have seen, then or since, that was better as an introduction to computer fundamentals than the BBC Micro and BBC BASIC. Updated for a modern windowing OS, with hooks into the GUI and OS (as it had in RISC OS), it could still be the best thing around.

Funny

I don't know how many of the readers remember much about the Digital Equipment Corporation (DEC), but they were involved very early on in the definition of many of the fundamentals that cloud computing is based on.

They were one of the companies involved in creating Ethernet and the Internet (although they eschewed TCP/IP in favour of their own DECnet as the preferred network for many years).

They were early adopters of the concept of mobile workloads spread across several machines (DEC-Cluster and VAX-Cluster).

They had network shared storage before almost anybody else (HSC devices) and things like LAVC (Local Area VAX Cluster).

They were one of the early pioneers of clustered desktop machines (DEC ALL-IN-1 and Pathworks), including network-booted diskless PCs.

I'm puzzled by the statement Ken Olsen made about UNIX, because DEC had commercialised UNIX in its software portfolio for years. UNIX V7/11M was a port of Version 7 available through DEC on PDP-11s in the early '80s; they did a System V port onto the VAX for AT&T (and I believe it was available to other companies as well); Ultrix was available from DEC on VAXen in the early '80s; you had DECstation MIPS-based UNIX workstations in the '90s; and OSF/1 was available as a supported OS, later morphing into Tru64 UNIX on Alpha-based systems in the same decade. I can't think of a company that had as long a history of UNIX at the time DEC was subsumed into Compaq.

Maybe Ken thought VMS was the only OS needed, but fortunately other people in DEC did not agree. And others thought they had been daft to drop TOPS-20!

@STB

Just because there are text books does not mean that the way of working is correct or to everybody's liking. Take sociology for example......

I'm sure that there are many things that are completely insane that I can make rational sounding arguments to support. Try reading Douglas Adams' books for rational absurd reasoning (although, yes, I know he was a Mac User, but I'll forgive him that because of his genius)

The Mac way of working is fine if you use single or small numbers of applications. Not for many applications on a screen, like I use all the time.

And the argument about power users using key combinations is crazy. In my world, where I use Windows, CDE on UNIX, KDE, GNOME, and (god forbid) Unity, you just cannot learn every one of the myriad of key sequences. And in case you ask, I am an Emacs user, so am used to quite complex sets of key strokes.

Re: Sir

Barclays used to be very diligent about their failover tests. I was involved in several over the years I worked there. But that was when they actually had people in the UK, and did not rely on it being run from Pune or Singapore.

It used to be a big issue if one of the tests failed, and they generally had a second test pencilled in when the initial one was being planned just for this situation.

Still, it's been over 5 years since I last worked there, so who knows what has happened in that time.

Re: At last

What I want to know is what happened between June last year and early April this year that caused his caps lock key to stick down. There's a gap in his posting history (which actually includes two moderator-deleted posts).

He's been registered since 2010, but up until April this year had only entered a handful of comments. Since then, something has woken him up, and caused him to SHOUT about everything he's commented on.

I actually used one for a University assignment

This was in 1979 or 1980 (I can't remember exactly - getting old).

It was my first experience of 6502 machine code, which became very useful when I got my BEEB a few years later. I had to code a sine-wave generator using an attached D-A converter. It was a real pain putting the opcodes directly in through the keyboard, with no means of storing the program.
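The table of values one would compute for such a generator is easy to sketch today; this Python snippet builds the sort of 8-bit sine table that would feed a D-A converter (the 256-entry size and 0-255 range are assumptions about the converter, and on the 6502 kit the equivalent bytes had to be worked out and keyed in by hand):

```python
import math

# 256 samples of one sine cycle, scaled to an 8-bit D-A's 0-255 range.
TABLE_SIZE = 256

table = [round(127.5 + 127.5 * math.sin(2 * math.pi * i / TABLE_SIZE))
         for i in range(TABLE_SIZE)]

print(min(table), max(table), table[0], table[64])  # 0 255 128 255
```

Stepping through such a table at a fixed rate and writing each byte to the converter produces the sine wave; the step rate sets the pitch.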

I think that this one must be a later one, because the ones that Durham had not only used a calculator keyboard, but also a tiny calculator display as well, mounted in what I remember to be the top half of a Commodore 8 digit calculator. My memory may be playing up though...

Re: MAC Address -AC@11:46 8th May

"..you probably aren't gaining a lot over what a typical "current" ADSL router will provide.."

You think not?

Double NAT; not relying on the ISP's router firmware to leak information; capture of packet headers and GBs of log files; multiple DMZs; an intrusion-detection log; control of inbound connections using SSH to give access to printing and file storage in my home (you can really do a huge amount through SSH tunnels, including CIFS and lpd); configurable DDNS (I've tried the DDNS support in routers, and given up); not needing a syslog server to capture the logs that are too large to be held in the device; traffic logging from individual systems within the home environment (useful for determining who is the traffic hog); a proper user interface (a shell) to diagnose network problems; tcpdump available; and a serial line attached to my RS/6000 to let me power it on and off remotely from the Internet. Do you want me to go on? Because I don't think this list is complete.
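The traffic-hog question, at least, is simple enough to sketch. This assumes an invented two-column log format (client address, bytes transferred), not Smoothwall's actual log layout:

```python
from collections import Counter

# Invented per-connection log lines: "client-ip bytes-transferred".
log_lines = [
    "192.168.0.10 1048576",
    "192.168.0.11 524288",
    "192.168.0.10 2097152",
]

# Sum bytes per internal host and report the biggest consumer.
usage = Counter()
for line in log_lines:
    host, nbytes = line.split()
    usage[host] += int(nbytes)

hog, total = usage.most_common(1)[0]
print(hog, total)  # 192.168.0.10 3145728
```

A real version would read the firewall's own log files and parse its format, but the aggregation is the whole trick.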

I don't use SmoothGuardian, SmoothWarrior or any of the other paid plugins, because I do not run my own SMTP server or multi-site VPNs. I find Smoothwall Express quite capable enough for my needs, and have been using Smoothwall to protect my network for over 10 years, long before ADSL routers were as sophisticated as they are now.

Expense? A ten-year-old 700MHz Pentium 3 laptop, an extra USB Ethernet adapter left lying around from god knows when, and a couple of Ethernet cables. Total outlay: nothing, zero, zilch. It burns about 20 watts of power with the screen off, so it is not very expensive in energy terms either.

And, of course, my time.

Why do you assume I download torrents? There is enough content on iPlayer, Sky Anytime, 4OD, Demand 5 and YouTube, as well as the rest of the Internet, and my kids use Steam, Wii and Xbox games a lot. There's plenty of legal content lying around on the Internet. I'm not specifically blocking the RIAA or MPAA from scanning my systems; I'm trying to keep my home network safe from anybody who might want to do damage to it. I do not want ANYBODY snooping my network, out of principle.

If I download torrents, it's only fan-subbed anime for series that are not available in the UK. There are a lot of non-H series that have never been available in the US or the UK, so it is very difficult to get to see them without some form of copyright infringement. If it were available, I would probably buy it rather than download it.

My library of purchased downloads, DVDs, CDs and videos is quite extensive, and I do buy almost all of the content that I have, although some of it is second hand. I take exception to the implication that I am any more of a copyright infringer (even with my admission about anime) than anybody else, and ask whether you live in a greenhouse. At least I post under my own name!

@me - incomplete sentence.

"But interestingly, it is possible for the MAC addresses of machines connected to a single router device performing both border routing to the ADSL or cable network, and also DHCP and/or Wireless routing"

@Chemist

Re: MAC Address

But interestingly, it is possible for the MAC addresses of machines connected to a single router device, performing both border routing to the ADSL or cable network and also DHCP and/or wireless routing, to be exposed beyond your own network.

What runs on the router is only as good as the firmware, and as we have seen with BT and their powerline Ethernet devices for BT Vision, it would appear that some ISPs modify the firmware to allow some remote discovery. And I'm not sure I fully trust UPnP not to leak service information externally. So we could see internal MAC addresses (which the router has to know in order to function), internal IP addresses (from DHCP), and possibly system types and functions, all available to whatever runs in the router's firmware.

Maybe I'm paranoid, but I have an ADSL router which was not supplied by my ISP (and runs NAT), with a Linux-based firewall (Smoothwall, which also runs NAT), and then a wireless hub inside the firewall. DHCP is run by the firewall, not by any of the appliances. This way, I believe that it is almost impossible for anything on the broadband side to get information from inside my network. Now that I'm not relying on wireless as much (I'm using a mixture of direct Cat 5 and, I'm afraid, powerline Ethernet for most network access now - and yes, I generate my own keys), I'm toying with the idea of putting the wireless on a separate DMZ, just to give most of my network protection from wireless crackers. I just need to get another Ethernet port in the firewall.

My wife thinks I'm mad, having so much kit 'just to provide the internet', but then I believe (and I check!) that we've been completely clear of intrusion-type attacks since I set this up.

Re: MAC Address

@Oliver

I get good DVD playback on my EeePC 701 using a USB DVD drive running Ubuntu 10.10, and that is really an underpowered PC, being a Celeron clocked at less than 700MHz.

Methinks you need to look at the graphics options. It sounds like you've either not installed the Nvidia restricted drivers (which would be strange, as if that adapter was in the system when Ubuntu was installed, it should have been picked up automatically), or something has disabled hardware rendering and the system is falling back to software rendering. Try installing and using the Nvidia settings tool from the Ubuntu repository (no, that's no more difficult than installing drivers from the CD that came with your graphics card).

Re: IBM ROMP vs. ARM

@starsilk. Thanks for the correction. I certainly knew about the multiply-add being missing, but I deliberately avoided talking about the multiply instruction being missing, because I just could not remember.

IBM ROMP vs. ARM

The IBM ROMP chip (a descendant of the 801) was never intended to be a general-purpose RISC processor. It was intended to power an office automation product (think of a hardware word processor like WANG used to sell).

As a result, although it could function as a general-purpose CPU, it was not really that suited to it. It was never a success, because at the time IBM could not see justification for entering the pre-Open Systems UNIX world. The RT PC 6150 and 6151 were intended as niche systems, mainly for education, although they did surface as channel-attached display front ends for CADAM and CATIA running on mainframes (and could actually run at least CATIA themselves). This changed completely with the RIOS RISC System/6000 architecture, where IBM was determined to have a credible product, and invested heavily.

In comparison, the ARM was designed from the ground up as a general-purpose CPU. Roger Wilson (as he was then) greatly admired the simplicity and orthogonality of the 6502 instruction set (it is rather elegant IMHO), and designed the instruction set for the ARM in a similar manner. Because the instruction set was orthogonal (like the 6502, the PDP-11 and the NS320xx family), instruction decoding was almost trivial. It also made modelling the ARM on an Econet of BBC Micros (in BBC BASIC, no less) much easier, which allowed them to debug the instruction set before committing anything to silicon.

They had to make some concessions on what they wanted. There was no multiply-add instruction, which appeared to be a hot item in RISC design at the time; to keep it simple and within the transistor budget, all they could do was a shift-and-add (via the barrel shifter), which, although a barrier to ultimate multiply performance, was great for multi-byte graphics operations.
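For anybody who never had to do it, multiplication without a multiply instruction comes down to exactly that shift-and-add: for each set bit of the multiplier, add in the correspondingly shifted multiplicand. A minimal sketch of the idea in Python (the function name is mine; real ARM code would of course do this in registers, a bit per loop iteration):

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only shifts and adds,
    the way code for a CPU with no multiply instruction had to."""
    result = 0
    while b:
        if b & 1:        # low bit of the multiplier set?
            result += a  # then add in the shifted multiplicand
        a <<= 1          # shift the multiplicand one place left
        b >>= 1          # consume one bit of the multiplier
    return result

print(shift_add_multiply(7, 6))  # 42
```

One pass of the loop per multiplier bit, which is why it was "a barrier to ultimate performance" compared with a hardware multiplier.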

It was also simple enough so that they could design the interface and the support chips (MEMC, VIDC and IOC) themselves, achieving early machines with low chip counts.

This is all from memory of articles in Acorn User, PC World, Byte and other publications. Feel free to correct me if my recollections are wrong.

Re: Am I missing something?

Yes.

What they have said is that you can't copyright something that says (using the example of another recent story) "produce a process that takes sea water as an input, and produces fresh water and brine as outputs" (which is a functional specification).

You can patent the method for doing this (reverse osmosis, for example) but that does not prevent someone from using evaporation or distillation to have the same effect.

I know that this would be a patent rather than copyright in this example, but the concept is the same.

Thus the code you write for your product is protected, but the description of what it does isn't. This has been fundamental in the concept of black-box testing and modular design for many decades, and changing this would break almost all modern industrial processes.

Just imagine not being able to replace Oracle with DB2 because the function of JDBC/ODBC was subject to copyright, or even worse, not being able to port from UNIX to Linux because the interface to the C library was subject to copyright.
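The same separation can be shown in a few lines of code: a black-box test written against the functional specification passes for any conforming implementation, however differently they work inside. The class and function names below are invented purely for illustration:

```python
class DictStore:
    """One implementation of a simple key-value specification."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class ListStore:
    """A completely different implementation of the same specification."""
    def __init__(self):
        self._pairs = []
    def put(self, key, value):
        # replace any existing entry for key, then append the new one
        self._pairs = [(k, v) for (k, v) in self._pairs if k != key]
        self._pairs.append((key, value))
    def get(self, key):
        for (k, v) in self._pairs:
            if k == key:
                return v
        return None

def black_box_test(store):
    """Exercises only the specification, so any conforming backend passes."""
    store.put("a", 1)
    store.put("a", 2)  # a later put overwrites the earlier one
    return store.get("a") == 2 and store.get("missing") is None

assert black_box_test(DictStore())
assert black_box_test(ListStore())
```

Protect either implementation by copyright and the other is untouched; protect the specification itself and you could never write the second one.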

AC@16:28

This is a very defeatist attitude. It assumes that all teachers and all students decide in the same year to do next to nothing.

If this does not happen, then all that those teachers and students achieve is to fall behind the ones who do try. As what I was envisioning was competition, this is unlikely to happen.

Human beings are competitive, especially kids. Watch them play. They race, they throw, they compete in games of skill (marbles, conkers, hopscotch, computer games). It's coded into our make-up. You just need to engage their competitive nature in school to ensure that the best is achieved. You also need to make sure that grades lower than 'A' still have merit.

On a side note, I heard a news item about a boat builder who was complaining about the number of kids who are now sucked into the academic stream who would previously have gone into some form of apprenticeship. He said that we need bright kids to be the skilled artisans of the future, and all he was seeing, after the competent ones had gone to university, were kids who were unable to master his skill. It was a fair point, well made.

Re: Why do we have a set pass mark for grades?

Marking to the curve is a double-edged sword, and I accept that it makes comparing marks year-on-year more difficult, but you have to ask what the point of the exams actually is.

When I was doing my 'A' levels in the late '70s, the primary reason was so that you could be selected for further education. As there were many fewer university places available, the marking was set so that you could tell who was 'the best' from that year's student population. If less than 10% of the students got an A, these people, who would be the most likely to excel in that subject, got streamed to the best universities. The next tier down could select from the remainder, and so on downward through the Polytechnic system, aiming at people who would excel at HND qualifications but might not be up to a full degree.
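Norm-referenced marking of this sort is easy to sketch: the grade boundary is a percentile of that year's raw marks, not a fixed score, so the proportion of As stays constant however hard the paper was. A toy illustration in Python (the 10% figure is from the post above; the function name is mine, and real exam boards handle ties more carefully):

```python
def grade_on_curve(scores, a_fraction=0.10):
    """Award an 'A' to roughly the top a_fraction of candidates,
    whatever the raw marks happen to be that year.
    (Ties at the cutoff can push the count slightly over.)"""
    ranked = sorted(scores, reverse=True)
    n_as = max(1, int(len(ranked) * a_fraction))
    cutoff = ranked[n_as - 1]        # mark of the last candidate inside the quota
    return ["A" if s >= cutoff else "lower" for s in scores]

marks = list(range(100, 0, -1))      # 100 candidates, raw marks 100 down to 1
print(grade_on_curve(marks).count("A"))  # 10
```

Scale every raw mark up or down by the same amount and exactly the same candidates get an A, which is why year-on-year comparison of raw marks becomes meaningless but selection of 'the best' still works.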

It did not matter whether there was grade comparison between years; it was accepted that the best people would always get better marks than the weaker candidates, so the streaming would still work, and the 'right' people would always get to the establishment that best suited them.

Quite often, it was not the grades that determined what type of work someone ended up in, it was how far they went in the education system. Students who had got to University and completed a degree course had demonstrated by that fact that they were worth employing.

It is only now that 'A' levels are intended to give an absolute measure of someone's worth that this problem occurs. Since schools have been measured by results, and the curve has been discarded, it has completely devalued them as a mechanism for selecting the best students. Governments and schools each have an interest in 'improving' the results.

Part of the problem is also political. Educationalists in the '70s and '80s became convinced that non-competitive grading was the only way to avoid the stigmatisation of kids (the abolition of the 11+ and Grammar schools is an example). Schools were not allowed to say to kids, "look, you are never going to succeed in becoming a theoretical physicist, best do some vocational training". All children are given unrealistic expectations by being told that they can achieve anything, and in order to perpetuate this myth, the exams are set so that they think they are good at a subject when in fact they may be only mediocre.

This is just dumb. Life is competitive, and that is never going to change. When you go for a job, the best candidate wins (unless the recruitment process is also dumbed down, but that is another rant!) And people not suited or without an aptitude for a particular job will never get it, regardless of how much they want it.

Setting kids up with realistic expectations, and letting them get some taste of reaching their ceiling by allowing some of them to experience disappointment, is a life lesson that they have to learn at some point, and my view is that it should be part of the school experience rather than a post-University kick in the teeth.

Re: Doubling CPU cores is also doubling transistors

One of the problems that chip designers have is how to use the vast number of transistors that can be fitted onto the large die-sizes at the smallest scale.

They got to the point where more registers, more cache and more instructions units in a single core was not making for faster processors, so they then started using the still increasing transistor budget to put multiple cores on a single die.

There is a lot to be said for a large number of cores on a single die, but this has its own problems with access to memory, cache coherency between cores, and I/O.

Another avenue is putting disparate processors (like GPUs) on the same die, or even System on a Chip (SoC), where all of the functional elements (I/O, graphics, memory, etc.) of a complete system appear on a single piece of silicon (think what is going into 'phones and tablets).

In my view, to make use of the vast scale of integration, it's about time we had a fundamental rethink about how processors work. I don't have any new ideas, but I think that listening to some of the people with outlandish ideas might be worthwhile in coming up with a completely new direction to investigate.

@Kebabbert Re: Hmm...

I was not clear about entitlement in my earlier post. There were Linux only Power 5 systems back in 2005 or so. What I was trying to say was that they were the same systems with the AIX and the IBM i entitlements turned off. They were also significantly cheaper, and also made it easier to use non-IBM branded disks.

My view that proprietary UNIX is on the downward curve has not changed. I have felt this way for most of the last decade. I still see Power having a place for many years to come.

Intel becoming predominant is much more about them having volume and critical mass in the processor market than about speed or technology. PowerPC is still a relatively well-architected processor, but for many companies developing products, it makes sense to use what is fast becoming a commodity product (Intel) rather than something they have to put significant design effort into. A high-end PowerPC SoC would be interesting, but I don't think IBM would be interested in creating one for the server market.

Re: @Z Eden - Games should stay on Windows

In theory, DRM is not against the Linux way of doing things. If you are careful to make sure that you only use LGPL (not GPL) code in your DRM system, then you do not 'pollute' Linux by adding a DRM API above the OS, and you don't have to publish the details of your DRM. The rest of Linux works just swell.

The main reason why this has not been done to date is that the content providers do not trust that the OS cannot be hacked below the DRM API to gain access to their content, whether it is a game, music or a film.