Information about AI from the News, Publications, and Conferences

If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."

However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …

Should we be scared of artificial intelligence (AI)? Some notable individuals, such as legendary physicist Stephen Hawking and Tesla and SpaceX leader and innovator Elon Musk, suggest AI could potentially be very dangerous; Musk at one point compared the threat of AI to that posed by North Korea's dictator. Microsoft co-founder Bill Gates also believes there's reason to be cautious, but that the good can outweigh the bad if managed properly. Since recent developments could make super-intelligent machines possible much sooner than initially thought, now is the time to determine what dangers artificial intelligence poses. And what is the difference between applied and general artificial intelligence?

It has taken 10 years, but Elon Musk has finally got to the punchline. The Tesla CEO has revealed the company's new car: the Model Y, the last part of one of Mr Musk's many long-term plans. It means that the company now makes the Model S, Model 3, Model X and Model Y. Parked next to each other, the model numbers spell out S3XY.

OpenAI, an artificial intelligence research group created by Silicon Valley investors as a non-profit, will now seek "capped" profit, according to a blog post published Monday on the OpenAI website. SpaceX and Tesla CEO Elon Musk, startup accelerator Y Combinator president Sam Altman, and several other Silicon Valley figures launched OpenAI in late 2015 with $1 billion in seed funding and the stated goal of ensuring that AI "benefits all of humanity." Musk stepped down from OpenAI in February 2018. Since its founding, the group has conducted research in reinforcement learning, robotics, and language. According to OpenAI, the original nonprofit entity will own a limited partnership called OpenAI LP that's designed to give a "capped return" to investors and employees and funnel excess funds back to the nonprofit.
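The "capped return" mechanism described above amounts to a simple waterfall: investors are paid back up to a fixed multiple of what they put in, and anything beyond the cap flows to the nonprofit. The sketch below is illustrative only; the 100x multiple and the function name are assumptions, not terms confirmed in the post.

```python
def split_proceeds(invested, proceeds, cap_multiple=100):
    """Split proceeds between a capped-return investor and the nonprofit.

    The investor receives at most cap_multiple * invested; any excess
    is funneled back to the nonprofit, per OpenAI LP's stated design.
    (cap_multiple=100 is an assumed figure for illustration.)
    """
    cap = invested * cap_multiple
    investor_share = min(proceeds, cap)
    nonprofit_share = proceeds - investor_share
    return investor_share, nonprofit_share

# A $1M investment under a 100x cap: if the venture ever returned $10B,
# the investor would be capped at $100M and $9.9B would go to the nonprofit.
inv, npo = split_proceeds(1_000_000, 10_000_000_000)
```

Below the cap, the split behaves like ordinary equity; the cap only binds on outsized outcomes, which is exactly the AGI-windfall scenario the structure is aimed at.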

The Bay Area is famed for nurturing speculative investments like flying cars, floating cities, and the notion that a ride-hailing service can turn a profit. A new utopian investment opportunity arrived Monday: shovel dollars into a San Francisco artificial intelligence lab cofounded by Elon Musk and you'll receive a share of the profits when (or if) it figures out how to create machines smarter than humans. That pitch comes from OpenAI, an independent AI research lab cofounded as a nonprofit in 2015 by Musk and Sam Altman, the president of startup incubator Y Combinator. Its stated mission was to safely create software as capable as people, which it terms artificial general intelligence, or AGI, and share the benefits with the world. The founders argued society shouldn't have to hope that profit-seeking tech giants would do that.

Tesla has partially reversed course on the series of store closures flagged at the start of the month, but the reversal will come at a cost to consumers. The only specific figure in Tesla's latest missive is a 3 percent increase to the price of Model S and X vehicles and the "more expensive variants" of its Model 3; the price of the recently launched $35,000 Model 3 will stay as is. At the start of March, Tesla said it would be closing almost all of its showrooms and shifting sales online only. The company also introduced a new returns policy that would give new owners a seven-day or 1,000-mile window to return a car. However, Tesla has now begun to back away from the full implementation of its scheme, with "significantly more stores" to remain open.

U.S. defense spending on AI shows no signs of slowing -- if anything, it's accelerating. The Defense Advanced Research Projects Agency (DARPA) expects to spend $2 billion over the next five years on military AI projects. The Pentagon's controversial Project Maven, which taps machine learning to detect and classify objects of interest in drone footage, recently received a 580 percent funding increase in this year's $717 billion National Defense Authorization Act. And this week, the U.S. Army announced it would invest $72 million in AI research to "increase [the] readiness" of soldiers off and on the battlefield. "Tackling difficult science and technology challenges is rarely done alone and there is no greater challenge or opportunity facing the Army than Artificial Intelligence," said Dr. Philip Perconti, director of the Army's corporate laboratory, in a statement today.

Refrigerator doors in stores are being replaced with LCD screens that scan a shopper's face to show personalised pop-up adverts. Chicago-based Cooler Screens sells stores screens that display the food or drink available inside; they are also used to show specific adverts based on the physical characteristics of the shopper. The cameras in the screens are not designed to prevent theft but to give a personalised shopping experience through targeted adverts. Pharmacy chain Walgreens is believed to be testing the screens in half a dozen of its stores around the US.
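At its core, the "specific adverts based on physical characteristics" flow described above is a lookup: the camera estimates coarse attributes, and the screen picks an ad keyed to them. The sketch below is purely illustrative; the attribute names, ad inventory, and function are invented, and a real system would use a trained classifier rather than a hand-written table.

```python
# Hypothetical ad inventory keyed to coarse shopper segments.
AD_INVENTORY = {
    ("adult", "morning"): "cold-brew coffee",
    ("adult", "evening"): "craft beer",
    ("senior", "morning"): "herbal tea",
}
DEFAULT_AD = "sparkling water"

def pick_advert(estimated_age_band, time_of_day):
    """Select a personalised advert from the coarse attributes the
    camera system is assumed to estimate (an age band, here), falling
    back to a generic advert when no segment matches."""
    return AD_INVENTORY.get((estimated_age_band, time_of_day), DEFAULT_AD)
```

The fallback matters: when the estimate is missing or unmatched, the screen can simply behave like an ordinary digital sign.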

Brain-to-computer interfaces (BCIs) are a tough subject to write about. The most current technology is likely still in the research stage and not yet publicly reported, so let's take a look at what has been reported over the last couple of years, with the understanding that scientists are likely years ahead. In other words, the technology is here. The first publicly reported successful non-invasive BCI, dubbed the "Brainternet," was announced in a press release by the University of the Witwatersrand, Johannesburg.

We shouldn't be worried only about whether robots will take our jobs, but about who is programming them--and with what values. Genevieve Bell is a cultural anthropologist who's spent the last two decades pondering the intersection of technology and culture, specifically focusing on the ethics of AI in her work at Intel. "I think the gravest dangers are we take the world we live in now and make it the world in perpetuity moving forward," Bell told me. "All the things about the current world that don't feel right is what the data reflects, where women aren't paid as much as men, where certain populations are subject to more violence, where we know that certain decisions get made in manners that are profoundly unfair. If you take all the data about the way the world has been and that's what you build the machinery on top of, then you get this world as our total future. I don't know about you, but I'd like something slightly different."
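Bell's point, that machinery built on data about the way the world has been simply returns that world as its prediction, can be made concrete with a toy example. The salary numbers and the group-mean "model" below are invented purely for illustration: a predictor fitted to a historically skewed dataset learns the skew back.

```python
from statistics import mean

# Hypothetical historical salary data (group, salary) with a built-in pay gap.
history = [("A", 100), ("A", 110), ("A", 105),
           ("B", 80), ("B", 85), ("B", 90)]

def fit_group_means(data):
    """A minimal 'model': predict each group's historical mean salary."""
    groups = {}
    for group, salary in data:
        groups.setdefault(group, []).append(salary)
    return {g: mean(salaries) for g, salaries in groups.items()}

model = fit_group_means(history)

# The fitted model reproduces the historical gap exactly: it predicts
# less for group B solely because group B was paid less in the past.
gap = model["A"] - model["B"]
```

Nothing in the fitting step is malicious; the bias is inherited entirely from the data, which is precisely Bell's warning about building the future on top of the past.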

Welcome to California Inc., the weekly newsletter of the L.A. Times Business Section. Investors are still digesting news that the nation's economic growth declined at the end of last year for the second straight quarter, denying President Trump the 3% annual increase he had promised his tax cuts would create. And growth this year and next won't hit that level either, economists say. Total economic output rose 2.9% in 2018. Beige Book: The latest Beige Book from the Federal Reserve comes out Wednesday.