Not a year goes by that I don’t hear this or that tech startup founder or industry executive refer with awe and admiration to his work of the past 40 years.

Stonebraker refined the relational techniques that today form the heart of billions of dollars in annual software sales by Oracle (ORCL), International Business Machines (IBM), and Microsoft (MSFT).

Friday afternoon, I had the honor of speaking with Stonebraker by telephone, to congratulate him on being this year’s recipient of the Association for Computing Machinery’s “Turing Award” for his groundbreaking contributions to computer science. More on the award can be found on the ACM Web site.

Stonebraker not only continues to do fundamental research into database theory, but also has deep and informed views on the state of the industry that are of value to any tech investor.

Among the provocative views he shared with me is that current database technology from Oracle and others is “obsolete,” and that Facebook (FB) is grappling with “the biggest database problem in the world.”

Breakthroughs

Stonebraker’s chief claim to fame is having pushed forward the relational technology first conceptualized by E.F. ‘Ted’ Codd in a seminal paper in 1970.

Stonebraker’s companies, Ingres, then Postgres, and many more after that, such as Vertica (acquired by Hewlett-Packard (HPQ)), helped bring Codd’s academic notions to fruition in the commercial marketplace.

When asked what he contributed, Stonebraker is remarkably humble: he deftly switches the conversation to a glowing summation of Codd, whom he refers to as “Ted.”

“What Ted proposed at the time was radical,” he said. “It was a complete change from how things were being done in database.”

Ted saw two important things. At the time, there were databases such as IBM’s IMS, which was structured as a hierarchy, and the CODASYL database, which was structured as a network of connections between objects. Ted realized that what people inherently understand are relations, and so he turned the problem of data management into one of relations. That dramatically simplified things. He saw that things must be kept simple. Ted really followed the KISS principle [Keep it Simple, Stupid].

The other big breakthrough by Codd, says Stonebraker, was moving the actual manipulation of data away from assembly language programming of the time to higher levels of abstraction that would later become structured query language, or SQL.

The conventional wisdom at the time was that you should build for the particulars of how the data is stored. He saw that made no sense. He brought principles of encapsulation and abstraction to programming databases, as with a high-level language in programming. The problem with the assembly approach was, your data lives a very long time. Today, your business might be in plumbing supplies. But then you merge your company with another company, and now you’re in plumbing supplies and beauty supplies. Inevitably, your data structures will change as a result. If you write these assembly language programs, you have to throw them away and recode when things change, whereas if you write at a high level, with data independence, your programs will not be dependent on the structure of the data.
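The data-independence idea Stonebraker credits to Codd can be sketched in a few lines. This is a toy illustration using Python’s built-in SQLite; the table, columns, and figures are invented for the example:

```python
import sqlite3

# A toy illustration of "data independence": the same declarative query
# keeps working even after the schema grows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT, category TEXT, price REAL)")
con.executemany("INSERT INTO products VALUES (?, ?, ?)",
                [("pipe wrench", "plumbing", 24.0),
                 ("pvc elbow", "plumbing", 1.5)])

def plumbing_total(con):
    # Declarative: states *what* we want, not how records are laid out.
    return con.execute(
        "SELECT SUM(price) FROM products WHERE category = 'plumbing'"
    ).fetchone()[0]

print(plumbing_total(con))  # 25.5

# The business merges and the schema changes: a new column appears.
con.execute("ALTER TABLE products ADD COLUMN supplier TEXT")
con.execute("INSERT INTO products VALUES (?, ?, ?, ?)",
            ("shampoo", "beauty", 6.0, "AcmeBeauty"))

# The old query still runs unchanged -- no recoding needed.
print(plumbing_total(con))  # 25.5
```

A record-layout-bound ("assembly") program reading fixed byte offsets would have broken the moment the fourth column was added; the declarative query is insulated from that change.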

As for his own contribution, Stonebraker is equally humble. In conversation, he uses “We” instead of “I,” and he notes that the Ingres database “was the contribution of quite a number of people.”

Among them, Jerry Held and Gene Wong joined Stonebraker in receiving the ACM’s “Software System Award” in 1988 for Ingres.

Stonebraker notes that he and his fellow pioneers brought Codd’s lofty relational ideas into the realm of ordinary individuals:

Ted was a mathematician, and he wrote his things in mathematical terms that no mere mortal could handle. We turned it into constructs that could be manipulated by ordinary people. Second, it was argued at the time that RDBMS couldn’t perform, but we showed it could be efficient.

Oracle, Microsoft, IBM have a problem

Turning to the present, Stonebraker tells me Oracle’s database, IBM’s DB2, and Microsoft’s SQL Server are all obsolete, facing a couple of major challenges.

One is that, at the time, they were designed for “business data processing.”

“But now there is also scientific data and social media, and web logs, and you name it! The number of people with database problems is now of a much broader scope.”

Second, “We were writing Ingres and System R for machines with a small main memory, so they were disk-based — they were what we call ‘row stores’.”

“You stored data on disk record by record by record. All major database systems of the last 30 years looked like that — Postgres, Ingres, DB2, Oracle DB, SQL Server — they’re all disk-based row stores.”

But, with the fall in the cost of memory chips, “main memory is now cheap enough that OLTP [online transaction processing] is going to be main-memory databases, increasingly, and they don’t look like disk-based row stores at all,” contends Stonebraker. He cites as examples the newer “in-memory” database Hana from SAP (SAP), and also his own new initiative, VoltDB.

That third of the database market that is legacy Oracle, SQL Server, and DB2 will be replaced by things such as VoltDB, and whether Oracle will adapt or die remains to be seen:

Oracle, SQL Server, and DB2 are legacy code at this point. There’s that great book by Clayton Christensen, The Innovator’s Dilemma. All the system software vendors are up against the innovator’s dilemma. They are selling the old technology, and the question is how they will morph without losing their customer base. There’s no question that with Oracle the customers are dug in pretty deep in the traditional systems, but my point of view is there are two orders of magnitude of performance difference to be had with other technology approaches, and sooner or later that will be significant.

It may take a decade or longer for the legacy stuff to actually die away — there’s still a lot of IMS data in production in the real world! — but sooner or later it will get replaced. My point of view is that if you want to do 50 transactions per second, it doesn’t matter what technology you use; you can use whatever you want. But if you want to run 50,000 transactions per second, your current implementation is simply not going to do it. Sooner or later, you are going to be up against a technology wall that will force you to move to new technology, and it will be completely based on return on investment.

Where NoSQL and Hadoop are going

Another third of the market, focused on “data warehousing,” is moving from row-stores to “column stores,” which can be far more efficient, he says. “All the data warehouse vendors have converted to the column stores or are in the process.”
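The row-store versus column-store distinction Stonebraker keeps returning to comes down to physical layout. A minimal sketch in plain Python, with invented data:

```python
# Row store vs. column store, in miniature. Field names and figures
# are invented for illustration.
rows = [
    {"id": 1, "region": "NE", "sales": 100.0},
    {"id": 2, "region": "MD", "sales": 250.0},
    {"id": 3, "region": "NE", "sales": 175.0},
]

# Row store: records stored one after another ("record by record by
# record"). Good for OLTP, where you read or update whole records.
row_store = rows

# Column store: each attribute stored contiguously. An analytic query
# that aggregates one column touches only that column's data.
column_store = {
    "id":     [r["id"] for r in rows],
    "region": [r["region"] for r in rows],
    "sales":  [r["sales"] for r in rows],
}

# The same aggregate over both layouts:
total_row = sum(r["sales"] for r in row_store)  # scans every record
total_col = sum(column_store["sales"])          # scans one column
assert total_row == total_col == 525.0
```

On disk, the column layout means a data-warehouse scan reads a fraction of the bytes a row store would, which is why the warehouse vendors have been converting.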

The last third is “everything else,” says Stonebraker.

That includes “NoSQL” databases such as MarkLogic, which I profiled recently; and Hadoop, the open-source data-processing framework inspired by Google’s MapReduce, now commercialized by startup Cloudera and by Hortonworks (HDP).

There are 100 or more of these NoSQL companies, and Stonebraker thinks they will all eventually end up looking like SQL databases. “It started out, NoSQL meant ‘Not SQL’; then it became ‘Not only SQL’; and now I think it means ‘Not-yet-SQL’,” he quips.

“NoSQL proposes low-level languages, and they are betting against the compiler, and that’s an incredibly dangerous thing to do,” he says — just like the assembly-language programming back in the day. He thinks VoltDB and other approaches can fix the problems brought about by legacy RDBMSs, and the “NoSQL guys will drift toward looking at SQL,” he contends. “They will move to higher-level languages, and the only game in town is SQL.”

As for Hadoop, it will take on SQL aspects and merge with data warehousing:

If you look at the major vendors there, Cloudera, Facebook and Hortonworks, and if you look at what Cloudera is doing, they released the Impala system a little while ago. If you take a careful look at it, it is a SQL engine. MapReduce is nowhere to be found. The historical Hadoop stack was Hive on top of MapReduce, on top of HDFS. Look at Impala and you see MapReduce is nowhere to be found. I think everyone pretty much agrees the MapReduce interface is not very interesting. None of the data warehouse guys have anything that looks like that. So I think MapReduce will atrophy and be replaced by SQL. Impala is a column store, so it looks like Vertica or Redshift, or any other data warehouse model. So data warehouse and Hadoop are going to completely merge eventually.
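The MapReduce interface Stonebraker expects to atrophy makes the programmer spell out the phases that a SQL engine such as Impala handles internally. A toy word count, with invented input, shows the contrast:

```python
from itertools import groupby

# The MapReduce pattern in miniature: three explicit phases (map,
# shuffle/sort, reduce) to compute what SQL expresses as one GROUP BY.
docs = ["see spot run", "run spot run"]

# Map: emit (word, 1) pairs.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: bring equal keys together.
mapped.sort(key=lambda kv: kv[0])
grouped = groupby(mapped, key=lambda kv: kv[0])

# Reduce: sum the counts for each key.
counts = {word: sum(n for _, n in pairs) for word, pairs in grouped}
print(counts)  # {'run': 3, 'see': 1, 'spot': 2}

# The declarative equivalent, which a SQL engine plans and optimizes
# for you:
#   SELECT word, COUNT(*) FROM words GROUP BY word;
```

The point of the comparison is that the imperative version fixes the execution strategy in user code, while the SQL form leaves the engine free to choose one.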

And so, “Hadoop will look like the data warehouse market, and NoSQL will look like the SQL market.”

Arrays, graphs and data science take over

More interesting to Stonebraker are areas such as the “social graph” of Facebook, and the emerging area of data science.

He predicts a lot of business analysts who run data warehouses will be replaced in years to come by data scientists, who are trained to work with arrays rather than tables, and with techniques such as regression analysis, Bayesian analysis, and other approaches represented by programs such as the statistical package R:

Another incredibly dominant trend right now is that the data warehouse market is about business intelligence. It’s about business analysts using Business Objects, and Cognos, and products like that as a GUI [graphical user interface] in front of a SQL system. They are running SQL analytics. But what I think is guaranteed to happen is that business analysts will be replaced by data scientists. It will take a while, because we don’t have enough trained data scientists, but the market will get much more sophisticated.

Suppose you are the Wal-Mart guy who has to figure out how to provision Wal-Mart products around major snow storms. The query you want to run is, in the week before the storm and the week after, what sold by department in the Northeast, and compare that with, say, Maryland — that’s standard business intelligence work. And what comes out is a big table of numbers. An alternative is to get data scientists to build a predictive model to predict sales by department in the winter. You run that model and out comes a bunch of predictions, which is what the business guy actually wants.

Sooner or later, the business intelligence world will move to the data science world, using things like regression analysis and Bayesian analysis — these are lots of big words, but all of these techniques, if you look at them, are array-based, not table-based, calculations. People who do data science now often code in MATLAB or R. So, as we transition to data science we are going to transition to array-based calculations. The question is, are those going to be done on an RDBMS, or is there room for a new class of array-based data management? I think the jury is completely out, but it’s going to be a sizable market over time and it’s going to happen, maybe not this year, but over time. It’s a possible opportunity for array-based data management. We just built something to do that, SciDB. It is a commercial product that is array-based.
There are certain kinds of data science applications that are getting a lot of traction. The genomics market is one that will be huge as all of us get [genetically] sequenced. The things those guys want to do are completely array-based. SciDB is focused on genomics for the short term, but will eventually move into other areas.

Facebook has the biggest problem of anyone

His other point is that Facebook has a big problem: its problem is a graph problem, a matter of “vertices” and “edges” in the language of graph theory, yet Facebook is built entirely on the relational database MySQL, which means its underlying infrastructure doesn’t fit the task at hand:

Look at Facebook: it is one giant social graph, with the problem of how to find the average distance from anyone to anyone. You can simulate a graph as an edge matrix and a connectivity matrix in an array-based system, you can model graphs in a table system, or you can build a special-purpose engine to implement the graph directly. All three are being prototyped and commercialized, and the jury is out on whether there is room for a new graph engine or whether one of the other technologies will be good enough. I think the answer to graph problems is that they will be done by either an array or a table DBMS. Facebook also has a big transaction processing problem: you “friend” me, and that is an update to the social graph. That’s currently implemented on MySQL, and as of three years ago they had over 4,000 MySQL instances. It’s probably 10,000 now or more. They would love to get rid of MySQL. They are prototyping everything in sight to explore new approaches. The infrastructure is at odds with the nature of their problem, and at such an extreme scale. I would say they have the hardest database problem on the planet. For Facebook, the question is make versus buy, and like Google and Amazon, they are running at such scale that it tilts them toward make rather than buy.
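The representations Stonebraker lists, an edge table, a connectivity matrix, and direct graph traversal, fit in a short sketch. The friendships are invented; the distance query is the “anyone to anyone” measure he mentions:

```python
from collections import deque

# Edge-list ("table") form of a tiny social graph. Names are invented.
edges = [("ann", "bob"), ("bob", "cat"), ("cat", "dan")]

people = sorted({p for e in edges for p in e})
index = {p: i for i, p in enumerate(people)}

# Array form: a symmetric connectivity (adjacency) matrix.
n = len(people)
adj = [[0] * n for _ in range(n)]
for a, b in edges:
    adj[index[a]][index[b]] = adj[index[b]][index[a]] = 1

def distance(src, dst):
    """Graph-engine style: breadth-first search for the shortest
    path length (degrees of separation) between two people."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for j, connected in enumerate(adj[index[node]]):
            if connected and people[j] not in seen:
                seen.add(people[j])
                queue.append((people[j], d + 1))
    return None  # unreachable

print(distance("ann", "dan"))  # 3
```

In a relational system this same traversal becomes a self-join per hop, which is exactly the mismatch between Facebook’s workload and its MySQL substrate.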

The upshot of all that is, “Off in the future, there will be a fair number of graph and array problems, and it will be interesting to see how those will be solved over time — that’s the equivalent of saying, the database world is alive and well, and will continue to flourish for a while.”

GPUs and non-volatile RAM may again change databases

In closing, I asked Stonebraker what hardware innovations, like faster DRAM, would eventually impact databases.

He said “Two things are very significant”: one is GPUs, or graphics processing units, the kind of chips in which Nvidia (NVDA) specializes; the other is non-volatile RAM.

Regarding GPUs and other “co-processors,”

There will be various co-processor approaches. No one will build one [a co-processor] just for the database market, because it’s not big enough, so we will have to piggyback on someone else’s technology, and GPUs are here, and we ask, What can we use them for? That is a very active area of investigation. Take a look at Intel’s “Xeon Phi.” At the Intel Science and Tech Center at MIT, one of the things they are having us look at is what to do with Phi, which has very fast floating point performance. Another thing is what to do with FPGAs, among the things that hardware guys have developed for other reasons.

Regarding NVRAM,

The thing I think will be way more important is non-volatile RAM, NVRAM. The various vendors are betting on various things, and it is probably coming this decade, and it’s going to be way faster than flash. Flash is too slow to be really interesting — some people are using it [flash] now, but it’s not mainstream. It is going to be very significant. Hewlett-Packard’s memristor is one of the technologies; Intel is betting on something, though they won’t tell us what it is.

Correction: a prior version of this post attributed the CODASYL database system to IBM, when in fact it was not from IBM. My apologies for any confusion caused by the error.

Revenue in the three months ended in January rose 9%, year over year, and 2% from Q3’s level, to $1.25 billion, yielding EPS of 43 cents, excluding some costs.

Analysts had been modeling $1.2 billion and 36 cents per share.

Gross profit margin, on a non-GAAP basis, was 56.2% in the quarter, up from 53.8% a year earlier and from 55.5% in the prior quarter. That margin was better than Nvidia expected; the company said the benefit from “PC gaming and accelerated computing was partially offset by the impact of Tegra processor margins and certain inventory provisions for prior Tegra architectures.”

Nvidia said its revenue from “graphics processing units,” or GPUs, rose 13%, year over year, “fueled by continued strength in PC gaming, including high-end Maxwell-based GTX™ GPUs.”

Within this gaming segment, notebooks continued to perform well above year-ago levels. Tesla GPUs for accelerated computing increased strongly, driven by large project wins with cloud service providers. Quadro® GPU revenue remained healthy, delivering industry-leading graphics and rendering performance.

The company’s sales of its “Tegra” processor for tablets and autos and other applications fell 15%, year over year, “driven by the product life cycle of several smartphone and tablet designs. This was partially offset by auto infotainment systems, which more than doubled, and SHIELD device sales.”

On a quarter-to-quarter basis, the company said “The GPU business grew 8 percent due to Maxwell GPUs and the seasonal increase in consumer PCs; Tegra Processor sales decreased amid lower revenue from smartphones and tablets, while automotive and SHIELD devices continued to grow.”

CEO Jen-Hsun Huang remarked that “momentum is accelerating in each of our market-specialized platforms.”

Continued Huang,

GeForce and SHIELD are extending our reach in the rapidly growing global gaming market. Our DRIVE auto-computing platform is at the center of the advance toward self-driving cars. GRID is enabling enterprises to finally virtualize graphics-intensive applications. And our Tesla accelerated computing platform is helping to ignite the deep learning revolution. The success of these platforms highlights the growing importance of visual computing and the opportunities ahead for NVIDIA.

For the current quarter, the company forecast revenue of $1.16 billion, “plus or minus 2%,” better than the average $1.15 billion estimate of analysts.

Gross profit margin is expected to be between 56.2% and 56.5%, Nvidia said.

Nvidia stock is up 99 cents, or almost 5%, at $21.80, in late trading.

Chip maker Advanced Micro Devices (AMD) this afternoon reported Q4 revenue in line with expectations but missed by a penny on the bottom line, and forecast revenue this quarter below consensus.

Revenue in the three months ended in December fell 22%, year over year, to $1.24 billion, coming in at break-even on the profit line.

Analysts had been modeling $1.24 billion in revenue and a penny per share in net income.

Gross margin, on a non-GAAP basis, was down 3 points from a year earlier, at 34%.

CEO Dr. Lisa Su cited the company’s progress despite what she called challenges in the PC business:

We made progress diversifying our business, ramping design wins and improving our balance sheet this past year despite challenges in our PC business. Annual Enterprise, Embedded and Semi-Custom segment revenue increased over 50% as customer demand for products powered by our high-performance compute and rich visualization solutions was strong. We continue to address channel headwinds in the Computing and Graphics segment and are taking steps to return it to a healthy trajectory beginning in the second quarter of 2015.

Revenue from the company’s “Computing and Graphics” division fell 16% from the prior-year period. The company said its average selling price for microprocessors was up from the prior year because of a better mix of notebook computer parts sold.

Sales of “semi-custom” parts rose 51%, the company said. Prices for GPU chips fell from the prior-year period because of “lower channel ASP,” said AMD.

For the current quarter, the company sees revenue declining by 12% to 18%, equivalent to $1.05 billion, at the mid-point. That is below the consensus estimate for $1.2 billion.

AMD shares are down 5 cents, or 2%, at $2.19, in late trading.

Correction: A prior version of this post erroneously stated AMD’s Q1 revenue view was in line with consensus, when in fact it was below. My apologies for any confusion caused by the error.

Chip maker Advanced Micro Devices (AMD) this evening announced via an 8-K filing with the Securities & Exchange Commission that the head of its microprocessor and graphics chips, or GPU, business, John Byrne, is leaving the company, “to pursue other opportunities.”

The company also said, in an emailed statement, that Colette LaForce, the company’s chief marketing officer, and Raj Naik, its chief strategy officer, have decided to leave.

The company said in the same emailed statement,

These changes, including the additions of Forrest Norrod and James Clifford to our management team last quarter, collectively are part of implementing an optimal organization design and leadership team to further sharpen our execution and position AMD for growth.

CEO Lisa Su will take over Byrne’s role running the computing and graphics business until his replacement can be found.

AMD has come under increasing pressure from Intel (INTC) in the PC chip market, and from Nvidia (NVDA) in the graphics chip business, as it seeks to cultivate a new business in custom chips for new kinds of applications.

In the same 8-K, the company said it granted restricted stock units, or RSUs, to CFO Devinder Kumar (384,467 units) and to CTO Mark Papermaster (576,701).

At the Consumer Electronics Show in Las Vegas Sunday night, Nvidia (NVDA) CEO Jen-Hsun Huang took the stage at the Four Seasons Spa in his characteristic black leather jacket and black jeans.

Huang went right into the main announcements. A year ago, the company introduced its “Tegra K1.”

The company is announcing its “Maxwell” GPU will come to mobile devices as the “Tegra X1,” what Huang billed as “the world's first mobile super chip.”

The chip has 256 GPU processing cores and eight 64-bit CPU cores, and supports 16-bit floating-point math. The chip is the first mobile processor to perform 1 trillion floating-point operations per second, which was state of the art for supercomputers in 2000, Huang said.

Huang went into a demo comparing the X1 to the Microsoft (MSFT) Xbox processor performance on games.

Huang said the processing power of X1 would be used for a new automotive system called “Nvidia Drive CX,” which will turn one's car into a supercomputer.

In a demo of the CX kit, Huang showed some digital dashboards that had wild-looking gauges, and maps programs that had rich shading effects.

The kit can take a real-world material and make it a “skin” for the dash, such as a bamboo finish, porcelain or carbon fiber. An automaker, said Huang, could re-skin the dash for each driver's taste. Huang called it the “most advanced digital cockpit.”

A new capability would be “surround vision.” The greater horsepower of the X1 would make possible “deep neural-net learning” for the car to see and avoid other vehicles. It makes use of SVMs, or support vector machines, he said.

Huang said academic research showed that neural-network learning on GPU chips had reached a point where it could recognize objects in an image better than a person can. The next stage would be using the X1 in a computer system called “Drive PX” that could monitor 12 separate cameras.

Huang showed a video of a driver's point of view while driving down the road. The computer system, powered by Drive PX, recognized and tagged each individual object along the road, such as pedestrians, including partly obscured pedestrians, stop signs, and traffic cameras.

It’s a busy morning for Apple (AAPL) in the court room. A short while ago it was announced the anti-trust trial of the company that has been proceeding in Oakland, California, regarding the iPod, was decided in Apple’s favor by an eight-member jury, as reported by Bloomberg‘s Robert Burnson and Karen Gullo.

This morning, reports crossing the wire said patent holding company VirnetX Holding (VHC), which had won a judgment against Apple for infringement, lost a ruling from a lower court that had been charged with reviewing the details of a $368 million award to VirnetX against Apple. VirnetX shares are rebounding from a brief slide, rising 11 cents, or 2.8%, to $4.25.

And shares of GT Advanced Technologies (GTAT) are up 7 cents, or 16%, at 48 cents, after a bankruptcy court yesterday approved a settlement with Apple that would see GT sell off equipment from its failed sapphire-making effort with Apple.

Meanwhile…

Chip maker Cirrus Logic (CRUS) gets another upgrade today following one yesterday from Barclays, this time from Oppenheimer & Co.’s Rick Schafer, who raised the stock to “Perform” from “Underperform,” writing that the company may see some improvement in sales to Apple from a forthcoming chip, and that the issue of a higher tax rate is now baked into the stock.

Shares of storage technology vendor Western Digital (WDC) are up $1.78, or 1.7%, at $107.18, as the Street digests yesterday’s announcement the company bought flash-storage startup Skyera. The response from the bulls has been very positive, calling the purchase of top storage talent and intellectual property a very strategic move for Western.

Shares of services firm Cognizant Technology Solutions (CTSH) are up $1.11, or 2.2%, at $51.62, after Deutsche Bank’s Bryan Keane called it his “top pick” in IT services for 2015, writing that he sees “steady acceleration over the next several quarters driven by a pickup in IT spending, the ramp of 3 large deals (Health Net starts in 2H15), and the TriZetto Acquisition ($1.5bn of synergies over next five years).”

Shares of chip maker Advanced Micro Devices (AMD) are up 2 cents, or 0.8%, at $2.49, after MKM Partners chip analyst Ian Ing wrote that he sees a couple of positive developments in the clearing of inventory in the GPU market, and also more competitive prices for Microsoft’s (MSFT) Xbox game console, which uses AMD’s custom processor.

Shares of BlackBerry (BBRY) are up 13 cents, or 1.3%, at $9.56, in advance of tomorrow’s unveiling of the “BlackBerry Classic,” a new smartphone design under CEO John Chen, and also Friday’s fiscal Q3 earnings report. UBS’s Amitabh Passi, who’s Neutral on the stock, advised the shares will remain volatile through all this.

Facebook leads, Twitter and Google suffer

Facebook’s (FB) Instagram photo-sharing service is the top social networking property, according to data gathered by Evercore ISI, and cited by analyst Ken Sena today in his bullish note on Facebook. Sena writes that Facebook’s own traffic growth remains healthy both on desktop and mobile.

Sena also cut his Twitter (TWTR) target to $45 from $55, writing “Unfortunately, we find TWTR to be ceding some ground to faster growing competitors, such as Instagram, who also now rival TWTR’s scale.”

Also in Facebook’s camp is J.P. Morgan’s Doug Anmuth, who reiterates an Overweight rating on the name, writing that the comScore stats for social media usage in November showed “Facebook witnessed strong mobile metrics […] its share of mobile minutes (smartphones only, excludes tablets) in the U.S. improved M/M to 21%, compared to other social services combined (Facebook-owned Instagram, Twitter, Whatsapp, Snapchat) which have declined to ~3% from ~5% in November 2013.”

Anmuth also cut his price target on Google (GOOGL) to $600 from $670, while reiterating an Overweight rating, writing that his estimates are going lower because of multiple factors, including “the transition from desktop to mobile search, continued margin compression, and increasing competition from Facebook.”

T-Mobile’s stash

T-Mobile US (TMUS) CEO John Legere took to CNBC to talk with the channel’s Jon Fortt about the company’s unveiling of “Uncarrier 8” today, which involved a new offer, “data stash.” That lets customers roll forward data allotments that go unused in any given period.

“We’re attacking one of the most infuriating aspects,” says Legere. “If you don’t use it, you don’t lose it. This is a $50 billion issue.” Legere noted the company is starting customers off with 10 gigabytes of free data in their “stash.”

“It’s a big move,” said Legere.

T-Mobile shares are down 52 cents, or 2%, at $24.73.

What if Amazon spun off AWS?

Amazon.com (AMZN) may spin off its Amazon Web Services cloud computing operations, according to a prediction by boutique consulting firm The Edge, cited by CNBC’s John Jannarone. Writes Jannarone, “A recent pickup in spinoff activity could mean that even Amazon will get involved, according to The Edge CEO Jim Osman.”

MKM Partners chip analyst Ian Ing today weighs in on Advanced Micro Devices (AMD), reiterating a Neutral rating and a $3.75 price target, writing that there have been some positive signs of late, including a burn-off of inventory of personal computer graphics processing unit (GPU) cards and improvement in sales of Xbox game consoles, which carry AMD chips.

“Despite these positive signs, we still think substantial execution stands between AMD and a potential 2016 recovery,” writes Ing, “as computing remains near-term challenged and new initiatives are adopted gradually.”

Detailing the good tidings, Ing writes of the inventory burn-off in the GPU:

In the past 3 weeks, we saw a return of the refurbish market for AMD mid- and high-tier cards at retail websites. Recall in a recent note, the return of AIB refurbished cards likely signals abating excess inventory concerns (post cryptocurrency mining collapse). We believe AMD has several levers to discourage AIB makers from rapidly refurbishing and reselling returned cards, and further oversupplying the market (given AIB price protections). Our defined “refurb ratio” (# refurbished cards/total cards [for a particular model]) has risen from 0% to 13.3% at the high-end (Radeon R9 290X), and 6.7% to 15.5% at the mid-end (Radeon R9 290). As a result, we think inventory digestion is progressing better than AMD’s “little bit in Q4” commentary.

The Xbox is being helped by new bundles that make the price of Microsoft’s (MSFT) game machine more competitive with Sony’s (SNE) PlayStation 4. AMD sells the custom processor that runs both machines:

Up to now, Microsoft Xbox One sales have likely been slightly weaker than suppliers expected as PS4 outsells 3 to 2 (as per MRVL’s (MRVL) console commentary, where it has WiFi exposure). However, unbundling of the $100 Kinect sensor and a recent $50 holiday price cut (likely more permanent than not) results in a competitive $349 retail price vs. $399 for the PS4. Ongoing title activity continues to make the MSFT console upgrade more attractive. With that said, the end of September China launch has yet to result in a bump-up in MSFT Xbox One sales for suppliers like MRVL, in part due to the unattractive $600 retail price. Unit sell-through improvements will also be partly offset by negotiated ASPs declines for AMD, resulting in flattish semi-custom console revenue in coming years.

Shares of chip maker Nvidia (NVDA) are up 32 cents, or 1.6%, at $19.86, after Pacific Crest’s Michael McConnell this morning raised his rating on the shares to Sector Perform from Underperform, writing that delays by competitor Advanced Micro Devices (AMD) will allow the company to take share in graphics processing units (GPU) for desktop computers, boosting results.

McConnell writes that price cuts of a third by AMD are not helping that company’s sales:

In line with commentary from AMD on its Q3 earnings call that management planned to competitively reposition its graphics card lineup in Q4, the company implemented 30% price cuts on its RADEON R9 290X and 290 graphics cards, with their comparable R9 290X card now priced $150 lower than NVIDIA’s GeForce GTX 980. Despite the comparable price discount in addition to game bundles, our supply chain conversations indicate minimal share shift back to AMD. Channel partners indicate that NVIDIA’s GTX 980 low-power offering (sub-200W board), which allows for high overclocking by gamers, as well as an ability to market the GTX 980’s support of Microsoft’s new DX12 API, are competitive advantages for NVIDIA despite premium pricing.

McConnell’s conversations with those in the supply chain suggest the company is already taking share from AMD, he writes.

As I wrote on Friday, Intel (INTC) shares are up 27 cents, or 0.8%, at $35.27, in advance of the kick-off tomorrow at Moscone Center in San Francisco for its Intel Developer Forum, the annual technology show featuring keynotes and technical sessions.

The company already teased a bit about its entry into Internet of Things products, hosting an event in New York last night that included a mention, but little else, of the “MICA” smart band.

JoAnne Feeney of ABR Investment Strategy reiterates a “Trading Long position” rating on the stock, arguing there’s about 10% to 15% upside to the stock. She expects PC and server chip strength “to carry through 2014.”

For the show, she has questions about the new “Core M” chip for power-efficient notebook computers, unveiled formally on Friday:

The question remains as to how much quantity INTC can deliver and by when. Which designs were lost by the delay? And how have the modifications to design and manufacturing impacted unit costs and the potential gross margin of the product line? Could we end up seeing Haswell with a longer run? Recall that this generation of new PC CPU is confined to notebooks only; there will be no desktop versions of Broadwell, so we will still see Haswell in desktops for some time to come.

She is looking to find out if Intel intends to cut out discrete graphics cards, or GPUs, of the sort made possible by Nvidia (NVDA), from the high-end notebook market with forthcoming chips:

Broadwell to eliminate discrete GPUs from notebooks? […] If INTC is indeed only building in support for PCIe 2.0, this would effectively remove discrete GPUs from Broadwell-based notebooks—that older technology simply would not offer the connectivity needed for fast GPU processing and rendering. One reason why INTC would make this move would be to capture more component dollar content for itself by driving PC OEMs to rely on INTC graphics […] We would be surprised, however, if INTC would lock itself out of the premium CPU market on the Broadwell generation: without support for a discrete GPU, INTC would be giving up wins in notebook gaming systems.
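Feeney's PCIe point is at bottom a bandwidth argument. As a rough, back-of-the-envelope illustration (figures taken from the public PCIe specifications, not from her note), a x16 link's usable per-direction throughput under each generation's encoding can be computed like so:

```python
# Rough per-direction bandwidth of a x16 PCIe link.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding (80% efficiency).
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (~98.5% efficiency).

def pcie_bandwidth_gbps(transfer_rate_gt, encoded_bits, payload_bits, lanes=16):
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    efficiency = payload_bits / encoded_bits
    return transfer_rate_gt * efficiency * lanes / 8  # bits -> bytes

gen2 = pcie_bandwidth_gbps(5.0, 10, 8)     # ~8.0 GB/s
gen3 = pcie_bandwidth_gbps(8.0, 130, 128)  # ~15.75 GB/s
print(f"PCIe 2.0 x16: {gen2:.2f} GB/s; PCIe 3.0 x16: {gen3:.2f} GB/s")
```

PCIe 2.0's 8b/10b encoding caps a x16 link at roughly 8 GB/s per direction, about half of what PCIe 3.0's 128b/130b encoding delivers, which is the shortfall that would squeeze out discrete GPUs in Feeney's scenario.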

Regarding special interconnects for servers, “INTC’s work to develop silicon optics is likely to be nearing market readiness,” she writes, and “We expect a progress report this week” on that.

As for the foundry business of making chips for other companies, Feeney wonders, “One question that remains, however, is whether INTC’s difficulties with Broadwell signal ongoing yield problems, or yield problems for different products at 14nm?”

She’s looking for improvement in the mobile business, though whether the company can turn a profit there remains to be seen:

Depends on a major ramp of INTC’s Bay Trail SoC for tablets. We expect an update on design wins and ramps. Thus far in 2014, INTC appears to be running slightly ahead of schedule on these ramps, and we expect the company to top its 40M shipment target. Last quarter, INTC delivered just $51M in sales and the division generated over $1.1B in operating losses; next year, we expect the gap to narrow, but we should not expect a return to profitability before 2016. This will depend critically on progress in LTE baseband development, so look for news here.

Credit Suisse‘s John Pitzer, who has an Outperform rating on the shares and a $40 price target, writes that he expects “limited news” this week, and that the analyst day meeting in November will be more meaningful from an investment standpoint.

Instead, Pitzer offers an update on how the company’s earnings potential is rising, based on his own assessment:

Our ongoing analysis continues to suggest that investors are under-modeling INTC’s EPS potential – and we are more confident that the operating model is nearing an inflection, which could drive better-than-expected OpM leverage. Specifically, we are raising our $3.00 EPS potential to $4.00 (Exhibit 1). Upside to our EPS potential is driven by: (1) AT LEAST flat operating profits/EPS in PCCG even in a flat to down PC unit market; (2) UPSIDE to DCG based on improving macro, new products and structural growth drivers (Big Data); (3) new found confidence in AT LEAST break-even in MCG with POSITIVE EPS from tablet APs and discrete basebands; (4) larger-than-expected SAM in IOTG and SSD (largely ignored by investors); and (5) EPS accretion from share repurchase.

Another individual with a brighter view on the company’s earnings potential this week is Romit Shah of Nomura Equity Research, who had dinner last week with Intel’s Navin Shenoy, the executive who is handling the company’s Core M business.

Shah, who has a Neutral rating on the stock, today raised his Q3 earnings estimate to 67 cents a share from 65 cents, raised his full-year view to $2.25 per share from $2.17, and raised his 2015 estimate to $2.50 from $2.15. Shah thinks “Consensus estimates may continue to be revised higher.”

Shenoy’s “tone” during the dinner was “very upbeat,” notes Shah, and the executive suggested reasons the recovery in PCs has legs:

The company also highlighted several reasons for momentum to continue: Only 25% of SMB have migrated from Windows XP; price points for Consumer PCs continue to improve (sub $300); new consumer form factors demonstrate significant innovation relative to the installed base (600m PCs are 4 years or older). The company showcased a Broadwell-based notebook reference design that we found very attractive and innovative.

He also likes things he heard about gross profit margin. Although Intel will have “higher startup costs” in the move to chips with smaller feature sizes, it’s also possible that this will be offset by lower unit costs, he thinks, as the company moves to 14-nanometer chips like Core M from today’s 22-nanometer parts.

Craig Berger of boutique firm Hedgeye writes that the stock is “likely to act well this week” and “could run higher on buyback and near-term PC stability.”

“Given likely well received product updates, given the market’s view that PC unit shipments are now trending flattish on a go-forward basis, and given Intel’s still sizable share repurchase remaining into 4Q14, INTC shares could run higher this week with IDF good news dripping out,” Berger writes.

Berger is skeptical of Intel’s ultimate success in the wireless baseband chip market versus competitors such as Qualcomm (QCOM):

Intel will likely update us on its 3G SoFia product (3G baseband plus Atom-based applications processor), which is expected to ship early next year. Its 4G follow-on is scheduled for late in 2015. Intel’s success in mobile is critical if it is to keep selling its traditional processors into various types of compute form factors. That said, our confidence level around Intel’s success here is low given its previous track record in mobile (Xscale) and with acquisitions (numerous). We note that so much of cellular baseband is detailed software work, country by country, carrier by carrier. Further, Intel still does not have roadmap parts like WiFi and Bluetooth to integrate into future cellular SoC solutions.

Moore’s Law is not dead, and while the definition of Moore’s Law that may come into sharper focus again is that of cutting transistor cost by 50% every 18-24 months, we think Intel could give us some update about 10nm and possibly 7nm process nodes. Beyond the 7nm node, we think exotic materials or fundamental transistor changes may be necessary to keep Moore’s Law alive further into the future.
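The cost-halving definition of Moore's Law cited above lends itself to a one-line model. A minimal sketch of the arithmetic (my own illustration, not Hedgeye's model):

```python
def transistor_cost(initial_cost, months_elapsed, halving_period_months=24):
    """Per-transistor cost after a given time, assuming cost halves every
    `halving_period_months` -- the 18-24 month cadence cited above."""
    return initial_cost * 0.5 ** (months_elapsed / halving_period_months)

# On a 24-month cadence, six years of scaling cuts cost by 8x.
print(transistor_cost(1.0, 72))  # 0.125
```

On the faster 18-month cadence the same six years would yield a 16x reduction, which is why the cadence assumption matters so much for node economics.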

Another individual with questions about the process roadmap is Wells Fargo‘s David Wong, who reiterates an Outperform rating on the shares:

We believe that Intel will not use EUV on its initial 10nm products and think that the company probably needs and plans to use EUV for the node following 10nm (which we expect to be 7nm). The transition to 450mm wafers could still happen by the end of this decade, however, the primary motivation for moving to 450mm is cost and Intel does not plan to drive (and fund) the 450mm transition on its own. As such, Intel will likely only move to 450mm if an industry consortium of several companies drives the transition.

Shares of chip maker Nvidia (NVDA) are down 10 cents, or half a percent, at $19.93, after the company late yesterday said it filed complaints with the International Trade Commission and U.S. district court against Qualcomm (QCOM) and Samsung Electronics (005930KS), alleging they infringed on patents held by Nvidia regarding graphics processing units, or GPUs, the parts that display graphics on computer screens.

Qualcomm said through a spokeswoman “We are aware of the complaints and are evaluating them,” in response to my inquiry placed with the company.

Nvidia is targeting chips from Qualcomm across various makes and models of the latter’s “Snapdragon” line. It is targeting Samsung devices that make use of the chips, such as the “Galaxy Note Edge” smartphone unveiled this week, and the “Galaxy Tab” line of tablet computers.

Said Nvidia CEO Jen-Hsun Huang:

Our patented GPU inventions provide significant value to mobile devices. Samsung and Qualcomm have chosen to use these in their products without a license from us. We are asking the courts to determine infringement of NVIDIA’s GPU patents by all graphics architectures used in Samsung’s mobile products and to establish their licensing value.

As one analyst noted, the claims are broad enough that they seem to imply the entire mobile computing field is ultimately Nvidia’s target, and multiple analysts today argue the action suggests a whole new avenue of royalty revenue for Nvidia.

Needham & Co.’s Rajvindra Gill, who has a Buy rating on Nvidia, sees this as an “extremely significant” move by Nvidia, one that signals “the beginning of a new aggressive licensing strategy”:

Similar to ARM’s process, Nvidia has said they see their future architectures being open to licensees at the time they are announced, with customers being able to choose whether or not to wait for potential revisions, or to license products as designed. We believe this business model is particularly compelling for Tegra due to the more embedded nature of mobile GPUs. We do not believe it’s a coincidence that this lawsuit involves mobile devices as it is in line with their previously discussed push to license mobile GPUs.

Gill notes that “Discussions had been going on for years” with Samsung but “little progress had been made.”

Stifel Nicolaus‘s Kevin Cassidy, who has a Buy rating on Nvidia stock, and a $22 price target, writes “we believe Nvidia’s action could stimulate GPU IP licenses from other companies.”

“While predicting a court’s ruling amounts or timing is extremely difficult, we believe today’s action at a minimum, may result in license agreements with other companies.”

“We applaud Nvidia’s move.”

Similarly, Canaccord Genuity‘s Matthew Ramsay, who has a Hold rating on Nvidia shares, today writes that the company is seeking to bolster its licensing royalty stream by force, having failed to secure deals:

While fundamentals and margins have improved steadily in NVIDIA’s core businesses in recent quarters, a significant portion of earnings are still generated from an IP settlement payment stream from Intel expiring in 2017. Today’s suit of Samsung and Qualcomm is a first step in monetizing NVIDIA’s strong graphics IP portfolio as a supplement, but is still a long way from a revenue stream. With the size, timing, and structure of any related revenue quite uncertain, we believe shares could potentially retreat near-term as we believe the suit largely removes the possibility of any near-term IP license deals while NVIDIA’s claims are reviewed and potentially tried.

Tavis McCourt of Raymond James, who has a Market Perform rating on Qualcomm stock, writes that the “irony mounts,” meaning that Qualcomm, one of the dominant holders of IP in mobile devices, which has in the past pursued Nokia (NOK) and others to honor their obligations, is now on the receiving end.

McCourt lays out what could be a very drawn-out process:

In terms of timing, the ITC will likely decide within 30 days as to whether to start an investigation in this matter. It would then start a discovery period, and a trial would be likely in mid-2015 with a decision sometime in 2H15. We note the ITC does have the ability to issue injunctions on imports of infringing devices, but it cannot award damages. A trial in the Delaware action would likely be 2-3 years into the future, if it were to come to that.

The action won’t be material to Qualcomm “in the near term,” and will probably end up being “a small tax” on the mobile industry if Nvidia succeeds in any way.

About Tech Trader Daily

Tech Trader Daily is a blog on technology investing written by Barron’s veteran Tiernan Ray. The blog provides news, analysis and original reporting on events important to investors in software, hardware, the Internet, telecommunications and related fields. Comments and tips can be sent to: techtraderdaily@barrons.com.