Integrating chips into the motherboard could allow the MacBook Air to offer high performance affordably

With a refresh of the MacBook Air seemingly impending, the rumor mill surrounding Apple Inc. (AAPL) is once again heating up.

According to Macotakara.jp, a Japanese Apple fan site, the latest version of the ultraportable will use cutting-edge flash memory technology. The site cites sources at unnamed Asian chipmakers as claiming that the latest Airs will contain 19nm flash memory chips soldered directly onto the motherboard for blazing speeds of up to 400 megabits per second.

Current-generation MacBook Airs only have their RAM chips soldered on; the SSDs are connected using mSATA connectors.

Apple reportedly calls the new technology Toggle DDR2.0. Aside from the
rather curious title (ostensibly it involves NAND flash, not DDR memory), the
technology is expected to be Apple's proprietary implementation of the Open
NAND Flash Interface (ONFI) Working Group's ONFI 3.0 standard.

ONFI 3.0 promises faster speeds and a reduced pin count on flash memory chips. These factors add up to nearly "instant" boot times and fast file copies.

If the rumor about the new flash is true, it likely comes from Micron
Technology, Inc. (MU),
Intel Corp. (INTC), or Spansion Inc. (CODE) -- all members of the ONFI 3.0 coalition. Apple's current NAND supplier, South Korea's Samsung Electronics (SEO:005930), and the world's second-biggest producer, Toshiba Corp. (TYO:6502), are not currently members.

A split with Samsung would make sense, given Apple's ongoing legal war with the gadget and
component maker.

At times Apple has been on the bleeding edge of introducing new standards. It was the first major manufacturer to push Intel's proprietary LightPeak interface, which it renamed Thunderbolt. The refresh of the MacBook Air is expected to come packing Thunderbolt as well, which requires active, chip-laden cables costing around $50 USD to work.

OS X Lion and the refreshed MacBook Air are
expected to launch on July 14.

Comments


Apple was last to get PCIe, last to get DDR2, etc. Apple pushing new standards is the exception, not the norm. It remains to be seen whether Thunderbolt will become a standard or not. It might just fail, or remain niche like FireWire.

I don't know of a single standard with active cables that has seen widespread consumer support. I think the question of whether Thunderbolt fails or not has already been decided by that decision alone.

Sorry, but Thunderbolt will lose, and lose horribly, to USB 3.0. Cable prices fall, but active cable prices don't fall that much. Thunderbolt also drives up costs for OEMs and the enterprise segment, because it takes you from having one central controller for all your devices, as with USB, to a controller for each one. A single USB controller can handle up to 127 individual devices; with Thunderbolt, each device has a separate controller.
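The controller-count argument above can be put in rough numbers. The sketch below uses made-up per-unit prices (both cost figures are assumptions for illustration, not real BOM costs) to show how per-device controllers scale with the number of peripherals, while a single shared host controller does not:

```python
# Hypothetical cost sketch of the comment's point: USB peripherals share one
# host controller (up to 127 devices per controller), while each Thunderbolt
# peripheral carries its own controller chip. Prices below are invented.
USB_HOST_CONTROLLER = 2.00        # assumed cost of one shared host controller
TB_CONTROLLER_PER_DEVICE = 10.00  # assumed cost of one controller per peripheral

def added_silicon_cost(n_devices, per_device_cost, shared_cost):
    """Total extra controller silicon for n peripherals."""
    return shared_cost + n_devices * per_device_cost

# Ten peripherals: USB needs only the shared controller; Thunderbolt needs ten.
usb_cost = added_silicon_cost(10, 0, USB_HOST_CONTROLLER)
tb_cost = added_silicon_cost(10, TB_CONTROLLER_PER_DEVICE, 0)
print(usb_cost, tb_cost)  # 2.0 100.0
```

Even with these invented prices, the shape of the argument is visible: per-device controllers make total cost grow linearly with the number of peripherals.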

Oh, and "daisy-chaining" is supposed to be a great feature? Yeah, good luck diagnosing a bad device when you have a ton of devices daisy-chained together. No IT professional in the world would rather put up with that than USB 3.0. Ironically, daisy-chaining gives OEMs less incentive to manufacture Thunderbolt hubs, which takes us back to my first point.

Well, daisy chaining SCSI devices did not bury SCSI at all, so excuse me for NOT marking your words.

Besides, who cares if USB is the mass-market cheapo standard and Thunderbolt the more expensive, fast prosumer standard? These two standards can coexist forever pretty well, since they are targeted at different markets.

Judging from the long list of threads here, I think you have plenty of time to "waste". And it isn't a waste if you are informing people of the downfalls of SCSI.

Pirks, despite your uh...controversial views, I would be happy to explain the downfalls of this system.

As explained earlier, each device has to be configured to run Thunderbolt, which means somewhat more complex and expensive hardware has to be in place. This is much like SCSI: yes, it was parallel and allowed simultaneous data transfers, but the format was also confusing and had multiple types of I/O. Plus, at the time it had very poor BIOS support.

Obviously Thunderbolt will not have the same downfalls as SCSI, but Thunderbolt is doomed to fail.

Frankly, in my opinion at least, I was never quite sure why Thunderbolt and USB 3.0 weren't one and the same thing. I don't see why the Light Peak technology could not have been integrated with the USB port design. That way, you could use active and passive cables on the same port, opening the market to multiple formats.

Although I am always for the more superior technology, I don't see Thunderbolt winning against USB 3.0. You can already use the millions (if not billions) of devices that connect via USB on it without having to get a new port and still get more than fast enough speeds for consumer and business markets. Furthermore, the cables are incredibly expensive.

I actually think $50 is relatively cheap for the performance and reliability it gives, and that price will take years to drop. But look at what USB 3.0 already has in the market: right now you can buy a 32GB USB 3.0 drive for $50, and any computer today can add USB 3.0 with a PCIe card for a mere $20, if not less.

Thunderbolt took too long to release and debuted on some of the lowest-selling computers in the market. If it had come out on HP or Dell machines, this would have been a different story. But on laptops that cost over $1500? On desktops that cost over $1000? There's no chance for this to become widespread.

You contradict yourself: first you say "Thunderbolt is doomed to fail," then you say "There's no chance for this to become widespread." Looks like you are just as uncertain about Thunderbolt's future as Reclaimer is.

If a product doesn't become widespread, it fails. "Fail" doesn't mean wiped from existence; it just means it becomes a very small, niche product. Stop splitting hairs over sentences that are essentially saying this will not be successful.

You cannot share the USB connector because the USB consortium doesn't allow it. USB 3 is only a small fraction of Thunderbolt. The $50 cable with a chip isn't needed if you don't mind the performance hit and/or use a short cable. You can use a hub just like USB; I don't know why you are trying to make it sound like rocket science. Hmm, not sure how you missed Apple's growth; it is one of the top 3 PC makers, if memory serves. The total cost of ownership on an Apple is usually lower than on a Windows PC; make of that what you will, but Google the research. If powered eSATA had had better implementation and adoption, then yes, USB 3 would be enough, but you would still be talking about a minimum of 3 connectors for the same use as one Thunderbolt, with lower bandwidth, higher latency, higher CPU load, and reduced battery life. SATA and USB can be real power hogs for laptops in comparison to certain alternatives. Outside of people on forums like this and old-fashioned IT guys, people don't get sentimental about connectors.

Don't be surprised if it gets widely adopted on the high end. If you need the functionality, it will save money and power, though only after redesigning the laptop down to the chipset level.

1) What are you talking about? Both were developed by Intel (which makes it even more confusing).

2) How much of a hit is there? My guess is it's quite a significant hit going passive, enough to be outpaced by 6 Gbps SATA and 4.8 Gbps USB 3.0; otherwise they would have launched it with passive cables and said it beats those two standards.

3) Agreed, it's not complicated for one device, but if you want to make thousands, if not tens of thousands, the money adds up for manufacturers.

4) In America it's number 3, but globally it's number 5. Why wouldn't you at least put it in with the top two competitors? Who pegs a gold standard on 3rd or 5th place? Furthermore, why wasn't it just spread across ALL top manufacturers? Why the limitations?

5) A minimum of three connectors for the same use as one Thunderbolt? For what purpose? And what are these claims about latency? That depends on what's using it. A hard drive will give you latency (sometimes as high as a whopping 4 milliseconds), but that's through no fault of SATA or USB; it is limited by its rotational speed. Solid state negates this latency argument.

And even the fastest solid state drives have yet to push 6 Gbps SATA to its limit (unless it's PCI... but then it's connected via PCI and reaching its potential, so who cares?). And when the day comes that they surpass SATA, SATA will just be upgraded along with them.

6) Agreed, but it has a lot of competition and its introduction was not nearly aggressive enough for this to happen.

The article even mentions that the goal is to include DisplayPort, Ethernet, FireWire, and PCIe. Think of it as an opportunity to get everything down to one cable. Right now it's at 10 Gbps, but it has the promise of scaling to 100 Gbps. USB just can't go there, at least not for a few more revisions.
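To put the quoted link rates in perspective, here is a back-of-the-envelope calculation of raw transfer times at 5, 10, and 100 Gbps. The 50GB file size is an assumption for illustration, and protocol overhead is ignored, so real-world times would be longer:

```python
# Raw (overhead-free) transfer times at the link rates discussed above.
# The 50GB payload is an illustrative assumption, not a figure from the thread.
def transfer_seconds(size_gb, link_gbps):
    """Seconds to move size_gb gigabytes over a link_gbps gigabit-per-second link."""
    return size_gb * 8 / link_gbps  # bytes -> bits, then divide by line rate

size = 50  # GB, e.g. a large video project
for name, rate in [("USB 3.0 (5 Gbps)", 5),
                   ("Thunderbolt today (10 Gbps)", 10),
                   ("Thunderbolt scaled (100 Gbps)", 100)]:
    print(f"{name}: {transfer_seconds(size, rate):.1f} s")
# USB 3.0: 80.0 s, Thunderbolt today: 40.0 s, scaled: 4.0 s
```

The gap only matters once the storage at either end can actually saturate the link, which is the crux of the disagreement in this thread.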

It is now "SAS", which is still very much in use. (As is Fibre Channel, as is FireWire in some circles, both loosely based on SCSI, although not as much as SAS, which stands for, wait for it... Serial Attached SCSI.)

Yes, USB replaced it for the more 'consumer' focused uses, as it should have. But SCSI did not die. Likewise, I imagine Thunderbolt will live on as a more focused connector than Apple intends it to be. It will live on as a high-speed interconnect between systems and peripherals, possibly to replace Fibre Channel. (Have you priced a Fibre Channel cable recently?)

Of course it will never happen 'cause Reclaimer can't even explain all this to himself. I doubt he understands any of it, judging by his clever "I won't waste time explaining" shenanigans. Sounds more like "I won't waste time trying to understand all this complex stuff" actually.

SCSI devices are still used for high-speed data acquisition (100+ data points). Granted, other technologies are catching up in speed, but SCSI is still useful in the right applications and is usually a lot more reliable, hence its use in critical applications and fields.

And even if Light Peak cables eventually become passive (not sure what their plan is... but still...), it'll be the FireWire of this generation of interconnects.

USB 2 won because it was built into all chipsets. USB 3 will be built into all chipsets next year; Light Peak will not. It will require extra chips/money/space/energy on top of the chipset. This is why it will be a niche product.

That said, niche products can still be awesome, but I'm not sure why Apple is pushing it. Who benefits? I suppose it's nice to use one port for display and possible expansion out of a MacBook Air, as opposed to separate Mini DisplayPort and eSATA ports... still...

Next year, when Ivy Bridge chipsets have built-in/free USB 3, it'll be sad that the MacBook Air will require the extra space/chip/energy to power a chip for the Thunderbolt functionality in the Mini DisplayPort.

Then why do they buy Apple? For the blazing last generation video, or the blazing 3rd and 4th tier CPUs? It must be for the blazing speed of the whites from the LCD that get to your eye faster than any other computer's whites...

The decomposing corpse of the Itanic has no place here, and if seamonkey meant "3rd or 4th place in the global speed hierarchy of all CPUs out there," then yeah, I kinda agree with that. I know Macs don't have the fastest CPUs on Earth right now. Just wasn't sure what exactly seamonkey meant.

First, let's talk about the technical side. Currently Thunderbolt has the most bandwidth at 10 Gbit/s, or approximately 1250 MB/s, while USB 3.0's maximum throughput is 5 Gbit/s, or 625 MB/s. So USB 3.0 has about half the bandwidth of Thunderbolt. We also know that Thunderbolt will be used to transmit large video/audio files to external storage. However, the fastest 7200RPM HDDs, which will most likely be used as the storage medium, top out around 100 MB/s. Even the fastest 3TB drive at this time manages 170 MB/s, which consumes less than 30% of the total bandwidth of USB 3.0 and less than 15% of the total bandwidth of Thunderbolt. So why should I go out and get a $50 cable, just to have a bunch of unused bandwidth lying around, when I can get a $0.50 USB cable that gets the same performance?
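The arithmetic in the comment above checks out; a quick sketch using only the figures quoted there:

```python
# Sanity check of the bandwidth figures in the comment above.
def mbps(gbits_per_s):
    """Convert a link rate in Gbit/s to MB/s (decimal units, 8 bits per byte)."""
    return gbits_per_s * 1000 / 8

thunderbolt = mbps(10)  # 1250 MB/s
usb3 = mbps(5)          # 625 MB/s
fast_hdd = 170          # MB/s, the 3TB drive cited in the comment

print(f"Thunderbolt: {thunderbolt:.0f} MB/s, USB 3.0: {usb3:.0f} MB/s")
print(f"HDD uses {fast_hdd / usb3:.0%} of USB 3.0")         # 27%, under 30%
print(f"HDD uses {fast_hdd / thunderbolt:.0%} of Thunderbolt")  # 14%, under 15%
```

With a spinning disk on either end, both links sit mostly idle, which is the commenter's core point.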

Next, the practicality side. For desktop users, if they want to use SSD RAID, they put them in the case via SATA, not externally. For laptop users, they won't use an external SSD to RAID with the internal drive because the moment you break the RAID, you have to rebuild it. You can RAID two SSDs externally, but what's the point of RAIDing two SSDs for performance, just to use as a mass storage device? So both of your scenarios make absolutely no sense at all.

There's very little use for LightPeak on Apple's laptops; it was a stupid decision considering the type of people (idiots) that buy them. But it has many other uses, for professional video editors for example, many of whom do not use HDDs because they're not fast enough. And Sony's new Z laptops use LightPeak (and they call it Light Peak too, not Thunderbolt) to power an external GPU; USB 3 has nowhere near the bandwidth for that. LightPeak is basically PCIe in cable form, and that has many uses, just not so much for average consumers yet.

You can connect external GPUs with LightPeak (I'm never calling it Thunderbolt), as with Sony's new laptops. I don't think USB 3 and LightPeak are meant to compete. Light Peak is considerably superior, though, and will last a lot longer. Already USB 3 barely cuts it for SSDs.

Four of these in RAID 0 will pwn your puny little USB like there's no tomorrow :))) Are you even serious? I see you just have no idea what an SSD RAID is. Go do some basic reading about SSD RAIDs before replying, please.

P.S. Your babbling about HDD RAID is pretty funny too. Babble all ya want, I said NOTHING about HDDs, so I don't care what ya babble about 'em.

Copper is reaching its limits; time to go to fiber optics :) If you've ever played with fiber data connections, it is not too big of a deal. True, you cannot make your own patch cables, and splicing fiber optic strands requires fairly pricey equipment, but fiber patch cables are way cheaper than Thunderbolt cables, and they are already made in very large numbers by many players around the world.

The number of devices or the kind of data being moved does not come into play; that layer is in software, well above the physical link represented by the Thunderbolt hardware.

BTW, SCSI was easy to troubleshoot because it was daisy-chained: you could start with just ONE device and keep adding until you found the issue. Its bus arbitration just works; you can talk to slow devices and fast devices on the same chain with no issues. USB may be more difficult to troubleshoot if you have multiple devices going to a hub, especially when some devices are streaming, as its arbitration model is much weaker.
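The one-device-at-a-time isolation the commenter describes amounts to a simple loop. In this sketch, `probe_chain` is a hypothetical stand-in for whatever hardware check you actually run against the bus:

```python
# Sketch of the daisy-chain troubleshooting procedure described above:
# rebuild the chain one device at a time; the first device whose addition
# breaks the chain is the suspect. `probe_chain` is a hypothetical callback
# representing a real bus check.
def find_bad_device(devices, probe_chain):
    chain = []
    for dev in devices:
        chain.append(dev)
        if not probe_chain(chain):  # the chain broke after adding `dev`
            return dev
    return None  # the whole chain works

# Example with a fake probe where the "scanner" is the culprit.
works = lambda chain: "scanner" not in chain
print(find_bad_device(["disk", "tape", "scanner", "cdrom"], works))  # scanner
```

This is linear isolation; the same idea extends to a binary search over chain prefixes if reconnecting devices is expensive.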

I do agree that USB 3.0 is likely to win, as most new motherboards / computers have a couple of ports built-in, and for most applications, cheap and good enough is THE winning combination (as compared to best and not-so-cheap)....

quote: SCSI was easy to troubleshoot because it was daisy-chained: you could start with just ONE device and keep adding until you found the issue. Its bus arbitration just works; you can talk to slow devices and fast devices on the same chain with no issues. USB may be more difficult to troubleshoot if you have multiple devices going to a hub, especially when some devices are streaming, as its arbitration model is much weaker

Light Peak with copper is just the 1.0 edition. Light Peak was built around a fiber line but reverted to copper a couple of months before Apple made the Thunderbolt announcement. I don't remember if it was here or at Tom's, but there was a decent article that quoted the Intel team: they went with copper in this first release due to manufacturing costs.

quote: BTW, SCSI was easy to troubleshoot because it was daisy-chained: you could start with just ONE device and keep adding until you found the issue. Its bus arbitration just works; you can talk to slow devices and fast devices on the same chain with no issues.

Yeah, but do you remember all the different connectors and adapter cables you'd have to mess with if you had to intermingle different SCSI 1/2/3 systems? Then you'd get into LVD, or external drives with no pass-through (thank you so much for that headache, Iomega!).

And don't forget that LightPeak will soon have to fight with another open standard from the PCI consortium: the 32 Gbps PCI Express external cable. A copper-connect open standard available to anyone, not just the few who cough up licensing fees to Intel.

It really depends. I can't speak to PCI-E since I don't pay attention to Mac Pros, but DDR2 and DDR3 were in their machines at about the same time as everyone else. Apple also got mobile Penryn CPUs in their Macbook Pros first, the custom packages in the original Macbook Air first, and the Harpertown and Nehalem Xeons in the Mac Pros first. They had all of these things weeks or months ahead of other PC builders. I mostly own PCs, but I recognize that as much as they may lag sometimes, they certainly lead at others.

Pushing new standards in particular can be a headache for users when it means that what they own is suddenly getting deprecated (e.g., Thunderbolt, or dropping old connectors for USB), but it also means adoption of better interfaces and standards much faster in the long run.