Time Travel: 2014

Chapter 346 The Turing Award Boss's Affirmation

But this value alone was not enough to justify the great lengths Lin Hui went to in order to acquire a cross-border patent.

The most fundamental reason Lin Hui took such pains to obtain the patent "A New Method for Text Judgment, Screening and Comparison" developed by Eve Carly was that he attached great importance to the model Eve Carly applied in it.

In the patent "A New Method for Text Judgment, Screening and Comparison", Eve Carly very creatively constructed a model for text judgment and screening.

Viewed purely within machine learning for natural language processing, this is just a mediocre model for text discrimination.

But looking beyond the narrow field of natural language processing, this model could not be taken lightly.

While looking through some academic materials of this time and space, Lin Hui keenly sensed the value contained in this patent.

Although the technical routes disclosed in patents are often only schematic, latecomers who follow these routes tend to grope at the technology like blind men feeling an elephant.

With the knowledge of his past life, Lin Hui was effectively standing on the shoulders of giants.

Although it could occasionally feel cold at such heights, when it came to technology, Lin Hui often had a much stronger sense of the overall system.

Often, Lin Hui needed only a glimpse of a publicly disclosed technical route to understand the value behind it.

And such judgments were basically accurate.

When he first came into contact with Eve Carly's patent, Lin Hui found that some information had already been disclosed along with it, especially the technical route mentioned in the patent.

From that alone, Lin Hui quickly grasped the value of this patent.

Lin Hui was certain that, with only a slight modification, this model could be turned into a rather efficient discriminative model.

And indeed, after the acquisition gave him a fuller understanding of the patent's details, Lin Hui's earlier speculation was further confirmed.

A discriminative model alone, however efficient, might not mean much.

But if you make some small changes, things will be different.

When an efficient discriminative model meets an efficient generative model, and the two are organically combined with a certain specialized structure built on top, a new and highly efficient deep learning model can be created.

In his previous life, this deep learning model had a famous name:

——Generative Adversarial Network (GAN)

A generative adversarial network consists of a generator network and a discriminator network.

The generator takes random samples from a latent space as input, and its outputs must imitate the real samples in the training set as closely as possible.

The discriminator takes as input either a real sample or the generator's output, and its goal is to distinguish the generator's outputs from the real samples as reliably as possible.

The generator, for its part, must deceive the discriminator as much as possible.

The two networks compete against each other and constantly adjust their parameters.

The ultimate goal is to make the discriminator unable to tell whether the generator's output is real.
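The adversarial game described above can be sketched in miniature. The following is an illustrative toy, not the model from the story: a one-dimensional "generator" (an affine map of noise) and a logistic-regression "discriminator," trained against each other with hand-written gradient steps. All hyperparameters (target distribution, learning rate, step count) are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): real data ~ N(4, 1.25);
# generator g(z) = a*z + b with z ~ N(0, 1);
# discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(2000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    real = 4.0 + 1.25 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradients of -log d(real) - log(1 - d(fake)) w.r.t. (w, c).
    gw = np.mean(-(1 - d_real) * real + d_fake * fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # Generator update (non-saturating loss): push d(fake) toward 1.
    z = rng.standard_normal(batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Gradient of -log d(fake) w.r.t. fake, chained into (a, b).
    g_fake = -(1 - d_fake) * w
    a -= lr * np.mean(g_fake * z)
    b -= lr * np.mean(g_fake)

samples = a * rng.standard_normal(1000) + b
print(float(np.mean(samples)))  # the generated mean should drift toward the real mean
```

In a real GAN, both players are deep networks and the gradients come from backpropagation rather than hand-derived formulas, but the alternating two-player structure is the same.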

In his previous life, the Turing Award winner and father of convolutional neural networks Yann LeCun even called the generative adversarial network model the coolest idea in machine learning in the past twenty years at an academic forum.

To be so highly recognized by a Turing Award-level figure, the value of the generative adversarial network model can well be imagined.

In his previous life, the generative adversarial network, as a method for unsupervised learning, was proposed by Ian Goodfellow and others in 2014.

However, machine learning research in this time and space lagged behind overall.

This deep learning model, so well known in his previous life, seemed unlikely to arrive on schedule in this time and space.

This actually gave Lin Hui room to make his move.

In his previous life, ever since generative adversarial networks emerged, many variants had appeared for different application fields.

These variants have made certain improvements over the original generative adversarial network.

Some of these improvements were purely structural.

Others, driven by theoretical advances, refined certain functions or parameters involved in the generative adversarial model.

Still others simply made innovative adjustments on the application side.

The fact that a technology is modified frequently does not mean the technology has failed.

On the contrary, it shows that the technology is very successful, because frequent modification indirectly reflects how much room for growth it has.

This was indeed the case: in his previous life, generative adversarial networks were quite successful and widely used.

Generative adversarial networks can be seen in many fields of machine learning.

The reason is probably that the original generative adversarial network was built with relatively few a priori assumptions.

It is precisely because there are almost no assumptions about the data that generative adversarial networks have almost unlimited modeling capabilities.

A variety of distributions can be fitted with the help of generative adversarial networks.

In addition, because the generative adversarial network model is not very complex, in many cases there is no need to design elaborate function models in advance when applying it.

In many application scenarios, engineers need only apply the backpropagation algorithm to train the corresponding networks.

That alone lets the generator and discriminator in a generative adversarial network work as intended.

The ease of use of generative adversarial networks also has much to do with the fact that they were originally designed for unsupervised learning.

However, everything has two sides. Precisely because the original generative adversarial network is so unconstrained, training can easily diverge.

Worse still, generative adversarial networks also suffer from problems such as vanishing gradients.

Because of these problems, it is difficult for generative adversarial networks to learn certain discrete distributions.

For example, the original generative adversarial network is not very good at processing pure text.

Apart from a few scenarios where they are used for text segmentation, generative adversarial networks are rarely applied to text, especially plain text.
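One way to see why plain text is hard for this setup (an illustrative sketch, not from the story): choosing a token is a hard, discrete step such as an argmax, and a small nudge to the underlying scores usually leaves the chosen token unchanged, so the gradient flowing back to the generator is zero almost everywhere. The vocabulary and logit values below are made up for the example.

```python
import numpy as np

# Hypothetical generator output: unnormalized scores (logits) over a tiny vocabulary.
vocab = ["the", "cat", "sat"]
logits = np.array([2.0, 1.0, 0.5])

def pick_token(lg):
    # Discrete decoding step: a hard argmax over the vocabulary.
    return int(np.argmax(lg))

eps = 1e-3
base = pick_token(logits)

perturbed = logits.copy()
perturbed[1] += eps            # nudge a competing logit slightly
nudged = pick_token(perturbed)

# Finite-difference "gradient" of the token choice w.r.t. that logit:
# the chosen index does not change, so the difference quotient is exactly 0.
fd_grad = (nudged - base) / eps
print(vocab[base], fd_grad)
```

Images, by contrast, are continuous: nudging a pixel value nudges the discriminator's score smoothly, which is exactly what backpropagation needs.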

Still, every technique has its strengths and weaknesses. Although generative adversarial networks are not well suited to plain text, there are many other areas where they can show their talents.

Generative adversarial networks are especially useful in areas such as face recognition and super-resolution reconstruction.

They can even show their strengths in semantic image inpainting.

Beyond that, generative adversarial networks have many other application directions.

In summary, the application prospects of generative adversarial networks are quite broad.

Seeing that machine learning research in this time and space lagged behind, Lin Hui would not actually need to take many risks to port the generative adversarial network model over.

Still, before he had fully wrapped up generative text summarization, Lin Hui was in no hurry to roll out research results related to generative adversarial networks.

