Time Travel: 2014

Chapter 343: Unfounded Worries

Eve Carly shared almost all of her recent thoughts with Lin Hui.

Including, but not limited to, her strong interest in the supplementary content of Lin Hui's paper, and her concern that expectations for the future of artificial intelligence would spark controversy at the social level.

She even told Lin Hui some of her guesses about the uses of the patent she had previously acquired from him.

For some reason, ever since arriving in China, Eve Carly had felt more assertive than before.

Her approach to things seemed to have shifted; by now she even held certain judgments of her own.

She hoped to confirm those guesses with Lin Hui.

Listening to Eve Carly's explanation, Lin Hui had not expected that the additions to his earlier paper, content he considered fairly common knowledge, would be loaded with so many expectations by Eve Carly.

The expectant look on Eve Carly's face somehow reminded Lin Hui of a little fox eyeing a piece of meat.

However, this time Lin Hui might have to disappoint her.

Although some of the content added to the paper was indeed ahead of this time and space.

But one step ahead makes a pioneer, and two steps ahead makes a martyr.

To avoid that fate, Lin Hui had been very restrained in what he actually transplanted.

Take the pre-training mechanism that Eve Carly rated so highly.

Introducing a pre-training mechanism into machine learning for natural language processing was indeed quite pioneering in this time and space.

But Lin Hui knew perfectly well that the pre-training mechanism he had introduced could only be called entry-level.

The "pre-training" of forest ash handling is based on pre-training of ordinary neural network language models.

The application efficiency of the truly reliable Transformer-based pre-trained model is much worse.
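The core of the regime Lin Hui transplanted can be sketched roughly as follows: learn representations from a generic next-word prediction task first, then reuse them downstream instead of starting from random weights. This is a minimal illustrative toy, not Lin Hui's actual system; the vocabulary, dimensions, and training loop are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V, D = len(vocab), 8              # vocabulary size, embedding dimension
corpus = [0, 1, 2, 3, 4, 0, 1]    # token ids of a toy "large corpus"

# Pre-training task: a simple bigram language model,
# p(next | current) = softmax(E[current] @ W)
E = rng.normal(0, 0.1, (V, D))    # embedding table (kept after pre-training)
W = rng.normal(0, 0.1, (D, V))    # output projection (discarded later)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.1
for _ in range(500):
    for cur, nxt in zip(corpus[:-1], corpus[1:]):
        h = E[cur]
        p = softmax(h @ W)
        # cross-entropy gradient for a softmax output layer
        dlogits = p.copy()
        dlogits[nxt] -= 1.0
        E[cur] -= lr * (W @ dlogits)          # gradient w.r.t. the embedding
        W -= lr * np.outer(h, dlogits)        # gradient w.r.t. the projection

# A downstream task would initialize from these pre-trained embeddings
# instead of from random weights -- the essence of the pre-training regime.
pretrained_features = E
print(pretrained_features.shape)  # (5, 8)
```

The point of the sketch is only the division of labor: the language-modeling objective shapes `E`, and `E` is what gets handed to a task-specific model.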

As for why Lin Hui didn't directly transplant the more mature Transformer-based pre-training mechanism?

The reason was simple: there was no Transformer yet in this time and space. Rolling out a model built on the Transformer before the Transformer itself existed would be absurd.

As for "deep learning", Eve Carly also has great expectations.

Although Lin Hui could indeed piece together real deep learning.

For now it seemed unnecessary; Lin Hui did not plan to roll it out in the direction of natural language processing first.

Then why, if Lin Hui had no intention of introducing real deep learning into natural language processing, did the current paper still mention deep learning?

Because in this time and space, almost every researcher working on neural network learning confidently called their work deep learning.

In that climate, even though Lin Hui's neural network applications were not actually that deep, wouldn't they look inferior if they were not called deep learning too?

As for the idea of transfer that so interested Eve Carly.

On a long enough timeline, transfer learning could indeed break out of the small circle of natural language processing and, as Eve Carly expected, migrate to every field of machine learning.

In the short term, though, that would be quite difficult.
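The mechanism behind the transfer Eve Carly is excited about can be sketched in miniature: features learned on one task are frozen and reused on another, with only a small new "head" trained for the target task. Everything here is invented for illustration; the random matrix simply stands in for features that would, in reality, come out of pre-training on a source task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for features produced by a source task (100 samples, 16 features).
frozen_features = rng.normal(0, 1, (100, 16))
labels = (frozen_features[:, 0] > 0).astype(float)  # toy target task

# Transfer step: keep the features fixed, fit only a logistic head.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    z = frozen_features @ w + b
    p = 1.0 / (1.0 + np.exp(-z))            # sigmoid predictions
    grad = p - labels                        # logistic-loss gradient
    w -= lr * (frozen_features.T @ grad) / len(labels)
    b -= lr * grad.mean()

acc = ((p > 0.5) == (labels > 0.5)).mean()
print(f"head-only accuracy: {acc:.2f}")
```

Only `w` and `b` are trained; the feature extractor never changes. That asymmetry, cheap adaptation on top of expensive shared representations, is why transfer across whole fields of machine learning is plausible in the long run yet hard to deliver quickly.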

Despite these difficulties, Lin Hui did not dampen Eve Carly's enthusiasm.

Instead, he painted an even grander picture for her.

Painting pies like that even reminded Lin Hui of his department leader in his previous life.

Not that Lin Hui felt the least bit guilty about it; the pies his former leaders had painted were pure illusion.

The blueprint Lin Hui sketched, however, would definitely be realized; after all, it had already been verified in his previous life.

No matter how long the road, one day Lin Hui would realize everything he described.

And he was already moving toward that blueprint.

Although the content Lin Hui added to the paper was not as strong as Eve Carly expected, it was still progress.

Some of it, measured against the current state of research in this time and space, was even progress from 0 to 1.

As for Eve Carly's concerns about the social impact of artificial intelligence.

Lin Hui did know a little about that. Plenty of big names in his previous life had voiced such worries.

In his previous life, Stephen Hawking, Bill Gates, and Musk had all expressed concern that artificial intelligence would develop self-awareness and consciousness.

Hawking in particular had gone so far as to claim that artificial intelligence might be mankind's greatest disaster: mismanaged, thinking machines could end human civilization.

Whether these people had made the same remarks in this life, Lin Hui had not specifically checked.

Either way, from Lin Hui's point of view, the concern might hold some water in theory, but on closer inspection it was rather overblown.

What could truly threaten human civilization would have to be strong artificial intelligence, not weak artificial intelligence.

That is not to say weak artificial intelligence poses no threat at all.

But unlike strong artificial intelligence, where a single system can handle all kinds of intelligent behavior.

Weak artificial intelligence needs a separate, independent system for each intelligent behavior.

So even if some weak artificial intelligence posed a threat through one of its behaviors.

Humans would only need to build safety measures into that one independent system, and that would be that.

Rather than discussing the threat weak artificial intelligence poses to humanity, it makes more sense to worry about people with ulterior motives abusing it.

Absent abuse, weak artificial intelligence would never be a real threat to humanity.

Ultimately, only strong artificial intelligence could truly threaten humanity.

In form, strong artificial intelligence might resemble humans (sharing the same rules of existence) or differ from them entirely (forming its own rules of existence).

In mind, it might share humanity's patterns of thought and moral principles, or reason in ways unique to its own system, becoming a kind of "machine with a soul."

Generally speaking, strong artificial intelligence would possess autonomous consciousness, autonomous learning, and autonomous decision-making.

And that would undoubtedly carry great risk.

But risky as it sounds, think about it from another angle.

Would God create a stone so heavy that He Himself could not lift it?

By the same token, why would humans create a strong artificial intelligence they could not control?

After all, the underlying value orientation of strong artificial intelligence need only be constrained and monitored through appropriate rules and memory.

A so-called rebellion by strong artificial intelligence would be impossible.

Nor should designing rules for strong artificial intelligence be thought of as troublesome.

In fact, rule-governed design is everywhere in human life.

Take the charger: even an unremarkable charger is likely governed by several charging protocols.

By designing explicit rules, limiting the scope of an AI's actions is entirely feasible.

All things considered, people's current worries about strong artificial intelligence were somewhat unfounded.

