Zuckerberg and Musk have different positions on AI, and advised Musk to abandon the artificial intelligence threat theory

[Introduction]: Zuckerberg and Musk are two different families. The position of AI is very different. Zuckerberg wants to convince Musk to stand in the same camp and ask Musk to come to eat at home and advise Musk. Abandon the artificial intelligence threat theory.

Social media Facebook founder Mark Zuckerberg believes that his friend, Silicon Valley billionaire Elon Musk, behaves like an alarmist.

Musk created SpaceX and is now the CEO of Tesla. He warned the world in television interviews and social media that artificial intelligence (AI) "may be more dangerous than nuclear weapons."

On November 19, 2014, Zuckerberg invited Musk to have dinner at his home in Palo Alto, California. In addition, two top researchers from Facebook's new artificial intelligence lab and two other Facebook executives were also invited to the dinner.

At dinner, the Facebook team tried to make Musk think he was wrong. However, Musk did not give in. According to Yann LeCun, a researcher at the Facebook Artificial Intelligence Lab, one of the dinner participants, Musk said to the same table, "I really think this is dangerous."

Musk's fear of artificial intelligence is very well understood: if we create machines that are smarter than humans, they may turn against us. Take a look at the science fiction films The Terminator, The Matrix, and 2001 Space Awakening (2001: A Space Odyssey) to see how artificial intelligence controls humans. He told the technology industry that before we release it to the world, think about the unintended consequences of what we are creating.

For this dinner, neither Musk nor Zuckerberg talked about the details. There was no media coverage of this dinner and the debate on artificial intelligence at the time.

The birth of “super-smart” takes artificial intelligence to a new level, creating machines that not only perform special tasks that require human intelligence (such as self-driving cars), but actually transcend humans. It still feels like science fiction. However, the debate about the future of artificial intelligence has spread to the entire technology industry.

More than 4,000 employees at Google recently signed a petition to protest the company’s $9 million artificial intelligence contract with the Pentagon. This is an agreement to make the Internet company profitable, but for many of the company's artificial intelligence researchers, the deal is deeply disturbing. Last week, Google executives tried to stop employees from rebelling. They said that Google would not renew the contract after the contract expires next year.

Whether as an economic engine and a source of military superiority, artificial intelligence research has enormous potential and huge impact. The Chinese government has said that it will be willing to invest billions of dollars to build a world leading position in artificial intelligence in the next few years, and the Pentagon is actively seeking the help of the technology industry. The birth of a new type of automatic weapon will not be far away.

From the gathering of philosophers and scientists on the Central Coast of California to the annual meeting of Amazon CEO Jeff Bezos in Palm Springs, California, all kinds of far-sighted thinkers have joined this Field debate.

Alan Daboe, director of the Artificial Intelligence Management Program at the Future of Humanity InsTItute, said, “We are talking about the risks of artificial intelligence, not lost in science fiction.” The Institute is Oxford University. A research center that studies the risks and opportunities of advanced technology.

In the past few months, public hype about Facebook and other technology companies has spurred the problem that the technology created by Silicon Valley has had unintended consequences.

In April of this year, Zuckerberg spent two days in Congress to answer questions about the role of data privacy and Facebook's role in spreading misinformation before the 2016 election. Last month, he faced similar challenges in Europe.

It's hard for Facebook to understand what's going on, which has led to an industry entering a rare moment of self-reflection. For a long time, people have always believed that Facebook is making the world a better place, no matter if the world likes it or not.

Even influential people like Microsoft founder Bill Gates and the late Stephen Hawking expressed concern about creating machines that are smarter than us. Although super intelligence seems to be a few decades away from us, these influential people and others say that we shouldn't consider the consequences before it is too late.

Bart Selman, a professor of computer science at Cornell University and a researcher at Bell Labs, said, "The systems we are creating are very powerful and we don't understand their impact."

Imperfect technology

Pacific Grove is a small town on the Central Coast of California. In the winter of 1975, a group of geneticists gathered here to discuss whether their work, genetic editing, ultimately endangered the world. In January 2017, the artificial intelligence community held a similar discussion in this seaside town.

The private gathering at the Asilomar Hotel was organized by the Future of Life InsTItute. The Institute is a think tank and research interests are related to artificial intelligence and other technologies.

Among the people who participated in the gathering, Jane Laiken, the head of Facebook's artificial intelligence lab, was the brain of artificial intelligence. He helped develop a neural network, one of the most important tools in today's artificial intelligence. Participants also have Nick Bostrom, his book Superintelligence: Paths, Dangers, Strategies, which has had a huge impact on artificial intelligence discussions. However, some people think that it is man-made to create terror; and Oren Etzioni, a former professor of computer science at the University of Washington, is now at the Allen InsTItute Leadership Intelligence Research; There is also Demis Hassabis, currently the leader of DeepMind, an influential AI research lab based in London.

In 2015, Musk donated $10 million to the Institute for Future Life in Cambridge, Massachusetts. In the same year, he also helped create an independent artificial intelligence lab called OpenAI with the clear goal of creating super intelligence that does not go out of control and has safeguards. This information clearly shows that he supports Nick Bostrom's point of view.

On the second day of the private party, Musk participated in a group of nine people dedicated to super-smart issues. Everyone knows that Musk thinks that super intelligence is not only possible but also very dangerous.

At the end of this panel discussion, Musk was asked how society can best coexist with super intelligence. He said that what we need is a direct link between our brain and the machine. A few months later, he launched a new project called "Neuralink." The project received $100 million in support to create a so-called neural interface by merging computers with the human brain.

Of course, warnings about artificial intelligence risks have been around for years. But there are hardly any unbelievable prophets who have the technical reputation of Musk. Almost no one spends so much time and money on artificial intelligence, if any. Perhaps there is no such unbelievable prophetic prophet, with Musk's so complicated technological entrepreneurial experience.

In the weeks before Zuckerberg’s dinner at Zuckerberg’s home, he talked to Facebook’s artificial intelligence laboratory researcher, Yann Laiken, asking about being able to work on Tesla Motors Auto. The name of the top artificial intelligence researcher driving a car project.

The company’s financial losses and car quality issues have been plaguing Musk. In a recent Tesla earnings conference call, he accused the news media of not paying attention to death—a clear stance that was repeatedly warned that artificial intelligence is a danger to humanity.

Battle of Palm Springs

There is a saying in Silicon Valley: people tend to overestimate what they have done in three years, and underestimate what they can do in 10 years.

On January 27, 2016, Google's DeepMind Lab launched a robot that beats professional players in the Go game.

Even top-level artificial intelligence researchers have thought that it takes another decade for robots to play such a game. Go is very complicated, and the best players are not calculating by air, but by intuition. Just two weeks before the launch of AlphaGo, researcher Jane Laiken of Facebook's Artificial Intelligence Lab once said that such a robot is unlikely to exist.

A few months later, AlphaGo defeated Li Shishi. The robot's movements puzzled the human expert and eventually won.

Many researchers, including the leaders of DeepMind and OpenAI, believe that the automatic learning technology used by AlphaGo provides a path to "super intelligence." They believe that progress in this area will be greatly accelerated in the next few years.

OpenAI recently "developed" a system to play a boating game video game, encouraging the system to earn as many points as possible. However, when it continues to win the game points, it will rotate, collide with the stone wall and hit other ships.

It is this unpredictability that has caused serious concerns about artificial intelligence, including super intelligence.

But in March of this year, a special meeting between Amazon and Bezos in Palm Springs expressed strong opposition to these concerns.

One night, MIT robot expert Rodney Brooks discussed the potential dangers of artificial intelligence with neuroscientists, philosophers, and podcast Sam Harris. Harris has potential for artificial intelligence. The danger raised a special warning. According to a recording obtained by The TImes, the debate evolved into a personal attack against individuals.

Harris warned that because the world is in an arms race using artificial intelligence, researchers may not have enough time to ensure that super intelligence is built in a secure manner.

Brooks replied, "This is what you made up." He hinted that Harris's argument is based on unscientific reasoning. This cannot be proven to be right or wrong. Harris said, "If this really makes sense, I will take it seriously."

The host eventually terminated the battle and raised questions for the audience. Oren Etzoni, head of the Allen Institute, said that he came out of the audience. He said that today's artificial intelligence system is so limited, it doesn't make sense to spend so much time worrying about super intelligence.

He said that the people standing on the side of Musk are philosophers, sociologists, writers, and they are not researchers engaged in artificial intelligence. Among artificial intelligence scientists, the idea that we should start to worry about super intelligence is "a largely marginal argument."

Zuckerberg goes to Washington

Since they had dinner three years ago, the debate between Zuckerberg and Musk has become bad. Last summer, in a live Facebook video of Zuckerberg and his wife grilling in their backyard, Zuckerberg said that Musk’s view of artificial intelligence was “very irresponsible”.

He said that now there is a panic about artificial intelligence that is still in its early stages of development, which may harm many of the benefits of things like autonomous vehicles and artificial intelligence healthcare.

"Especially with artificial intelligence, I am really optimistic," Zuckerberg said. "Those who disagree, trying to fabricate these apocalyptic scenes - I just, I don't understand."

In response to Zuckerberg’s remarks, Musk also responded. “I talked to Mark (Zuckerberg) about this,” Musk wrote. “He has limited understanding of this issue.”

In April of this year, Zuckerberg testified in the US Congress, he explained how Facebook can correct the current problems.

One way to solve the problem is to rely on artificial intelligence. However, Zuckerberg admitted in his testimony that scientists have not fully understood how certain types of artificial intelligence are learned.

“This will be a very core issue, and it’s about how we look at artificial intelligence systems for the next ten years or even longer,” he said. “Now, many of our artificial intelligence systems are made in ways that people don’t understand. Decide."

Technology giants and scientists may laugh at Musk's timidity in artificial intelligence, but they seem to be moving toward his point of view.

Inside Google, a team is exploring flaws in artificial intelligence methods that fool the computer system into seeing things that don't exist. Researchers have warned that artificial intelligence systems that automatically generate real images and videos will soon make it harder for us to believe what we see online. Now, both DeepMind and OpenAI are working on a research team dedicated to "artificial intelligence security."

DeepMind founder Demis Hassabis still believes that Musk's view is extreme. He said that the threat does not exist, not yet.

Sensors

Proximity Switch For Automobile,Proximity Switch For Industrial Control,Sensing Probe For Industry And Horticulture,Temperature Sensing Probe

Foshan City Jiulong Machine Co., Ltd , https://www.jlthermostat.com