Time Travel: 2014

Chapter 337 Clever Idea

After all, Eve Carly and her previous team had followed up on generative text summarization technology as soon as it came out.

Then Eve Carly came to China and received face-to-face guidance from Lin Hui.

Given that, Eve Carly was still quite confident that she ranked second in the field.

Even though the gap with Lin Hui is still not small.

But Eve Carly felt that even if she didn't understand the full implementation of generative text summarization, she should still be able to glimpse its general outline.

And with that much, writing a review shouldn't have been a problem.

After writing the review of text summarization, Eve Carly sent the paper to Lin Hui, only to have it come back revised.

One can imagine Eve Carly's mood at this time.

However, after seeing the additions Lin Hui had made to her paper on generative text summarization, Eve Carly's previous complacency was gone.

Even the faint complaint that had lingered deep in her heart disappeared without a trace.

Only after seeing what Lin Hui had modified did Eve Carly realize, more keenly than ever, that the gap between her and Lin Hui was far greater than she had imagined.

Even Eve Carly was a little desperate for a while.

She originally thought that the distance between her and Lin Hui would gradually shorten after being with Lin Hui for a few months.

But after seeing Lin Hui's additions to the paper, Eve Carly suddenly had the distinct impression that the gap between her and Lin Hui amounted to three or four years.

Three or four years may not sound like much, but given how rapidly algorithms iterate, a gap of three or four years in some algorithm positions is enormous.

It's almost the difference captured in the saying: "those who got in early are rolling in money, while the latecomers can't even find jobs."

It can be said to be quite cold and cruel.

To be honest, Eve Carly is an extremely proud person deep down.

But ever since the things Lin Hui built appeared, the pride in her heart had been shattered.

As she came into contact with Lin Hui day by day, her remaining pride disappeared.

But this did not mean she was willing to resign herself to obscurity, because in the course of her contact with Lin Hui, Eve Carly seemed to have absorbed some other ideas.

There was plenty to be proud of in what Lin Hui had created, yet in all her contact with him, Eve Carly rarely sensed any pride coming from Lin Hui.

On the contrary, what Eve Carly usually sensed in Lin Hui was something closer to peace...

This feeling of peace allows Eve Carly to feel a touch of warmth in a foreign country.

Eve Carly also seemed to understand better than before: inner peace and tranquility were far more important than so-called pride.

As for the specific changes Lin Hui made to the paper Eve Carly had written: objectively speaking, in terms of generative text summarization, his changes were not actually many.

Lin Hui just added some content.

But what Lin Hui added was almost the essence.

Through Lin Hui's supplementary content, Eve Carly gained a clearer picture of how Lin Hui had built the text summarization technology in the Nanfeng APP.

Lin Hui had taken many clever approaches to building his generative text summarization algorithm.

Whether it was designing an appropriate model architecture and training strategy on top of deep learning, using the idea of transfer learning to propose a generative automatic text summarization algorithm based on a pre-trained model, or completing content representation and weight calculation through unsupervised processing.

These were all things Eve Carly had either never thought of before or never understood deeply.
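The chapter never specifies which unsupervised scheme Lin Hui actually used for content representation and weight calculation. As a hedged illustration only, TF-IDF is one classic way to compute word weights from raw text with no labels at all; the toy corpus and function name below are assumptions for the sketch, not anything from the story:

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Compute TF-IDF weights for each document in a small corpus.

    Purely unsupervised: only the texts themselves are needed, no labels.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each word appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        total = len(tokens)
        weights.append({
            w: (tf[w] / total) * math.log(n_docs / df[w])
            for w in tf
        })
    return weights

docs = [
    "the model summarizes the text",
    "the model generates a summary",
    "pretraining improves the model",
]
w = tfidf_weights(docs)
# Words shared by every document (like "the" or "model") get weight 0,
# while document-specific words carry positive weight.
```

A sentence's importance can then be scored by summing the weights of its words, which is one common way such unsupervised weights feed into summarization.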

A PhD in a related field, and there were things she had never realized before?

It may sound weird, but it's true.

As the saying goes, people come to knowledge sooner or later, and each has their own specialty.

There is nothing unacceptable about falling behind others for a while.

And Eve Carly is sure that her situation is definitely not an isolated case.

Eve Carly felt that what Lin Hui had added was probably not something she alone had failed to think of; many other researchers likely hadn't thought of it either.

The insights Lin Hui proposed were new not only relative to traditional text summarization research; what Lin Hui had put together could be called a brand-new idea for the entire NLP field.

In any case, Eve Carly found these ideas brilliant; they could even strike a person like a sudden enlightenment.

This effect arose largely because most text summarization researchers had previously worked on extractive text summarization.

Although extractive and generative text summarization are both text summarization, moving from the former to the latter involves a genuine shift in thinking.

In many cases, researchers in traditional text summarization (that is, extractive text summarization) are swayed by preconceptions and never fully come to grips with generative text summarization.

Take, for example, the pre-training Lin Hui proposed while working on generative text summarization.

Ordinarily, this thing is not a profound concept.

So-called pre-training is not hard to understand; it amounts to a preliminary pass of training over the data before the task-specific training.

What is hard is thinking of it in the first place.

In the past, when tuning extractive text summarization, Eve Carly had never used pre-training.

In most cases, training was done directly, with no pre-training step at all.

According to Lin Hui's additions in the paper, the common practice of pre-training is to pool a large amount of cheaply collected training data, then use some specific pre-training method to learn the common features of that data.

Those common features are then transplanted into a task-specific model, which is refined further with a small amount of annotated data from the relevant domain.

Once this process is complete, a model built for a practical application only needs to start from the common features and learn the parts peculiar to its specific task.

It is roughly similar to first finding the general solution of an equation and then finding a particular solution.
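The pretrain-then-fine-tune flow described above can be sketched as a deliberately tiny toy. Everything here is an illustrative assumption: the "model" is just a table of per-word weights standing in for a real neural network, and the corpus, function names and update rule are invented for the sketch:

```python
import math
from collections import Counter

def pretrain(unlabeled_corpus):
    """Stage 1: learn 'common features' from cheap unlabeled text.

    Here the shared feature is each word's informativeness, estimated
    as inverse log-frequency over the whole corpus.
    """
    counts = Counter(w for doc in unlabeled_corpus for w in doc.lower().split())
    total = sum(counts.values())
    return {w: math.log(total / c) for w, c in counts.items()}

def fine_tune(weights, labeled_pairs, lr=0.5):
    """Stage 2: adjust the shared weights with a little labeled data.

    Words that appear in a reference summary are nudged upward; the
    bulk of the weights keep their pretrained values.
    """
    tuned = dict(weights)
    for document, summary in labeled_pairs:
        for w in summary.lower().split():
            if w in tuned:
                tuned[w] += lr
    return tuned

def score(sentence, weights):
    """Score a candidate sentence by its total word weight."""
    return sum(weights.get(w, 0.0) for w in sentence.lower().split())

# Large, cheaply collected unlabeled data.
corpus = ["the cat sat on the mat", "the dog chased the cat",
          "a neural model reads the text"]
base = pretrain(corpus)
# A small amount of annotated data from the target domain.
tuned = fine_tune(base, [("the dog chased the cat", "dog chased cat")])
```

The division of labor mirrors the analogy in the text: pre-training supplies the "general solution" shared across tasks, and fine-tuning supplies the "particular solution" for one task.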

It sounds quite abstract.

It's actually not that profound.

When it comes to machine learning, no matter how advanced, it is in essence imitating people.

That being so, we often only need to understand how people handle a problem to understand how machine learning handles it.

Usually, when we set out to learn something, our original intention may be to learn everything we want in one go.

But limited study time, heavy coursework and various other practical constraints make it hard to learn everything in a single pass.

In that situation, how do the people who are good at learning actually study?

What these people tend to do is first grasp the content common to everything they want to learn, then spend their remaining time on the "stubborn cases."

Although this approach seems a bit "lazy".

But more than half of humanity's ingenuity came about precisely because of laziness.

It is undeniable that this seemingly lazy way of learning is full of wisdom.

At least from an efficiency perspective, this approach is commendable.

After all, apart from extremely specialized subjects like medicine, in most fields some 80% of the knowledge shares recognizable commonalities.

Find those commonalities first, then tackle the remaining 20% of difficult material.

This is undoubtedly a more labor-saving way of thinking.

Introducing pre-training into natural language processing, a typical branch of machine learning, is essentially "transplanting" a study technique that some outstanding students use.

The idea is undoubtedly very clever.

But like the bitter plums hanging unpicked by the roadside, it raised a question: why had no one tried this clever idea before?

Eve Carly suspected it was not that no one had thought in this direction, but that everyone who tried had failed without exception.

When it comes to acquiring knowledge, most people probably also know that grasping the common 80% first and then tackling the remaining 20% saves effort.

But judging from her own studies, Eve Carly felt that very few people around her could actually first distill the commonalities of that 80% and then overcome the hard parts.

In fact, apart from the true top students in her eyes, no one could.

And how many true top students were there in Eve Carly's eyes? Very few indeed.

In other words, this very wise approach, first the common 80% and then the remaining 20%, was in practice rarely used.

It is obviously the easier path; why do so few people take it?

Eve Carly thinks the main reason is:

——Most people are not good at finding commonalities in knowledge.

Not being good at it does not stop some people from trying to find commonalities anyway.

But in practice, distilling the commonality of a full 80% of the knowledge turns out to be completely unrealistic; they may only manage 30%, 20% or even less.

As a result, these people not only fail to find the commonalities of the subject; worse, during the search, content that was originally ordinary is unknowingly estranged and turns into "non-common knowledge" in their eyes.

And that non-common knowledge then looms all the more troublesome in the minds of those trying to find commonalities.

None of it was especially difficult to begin with, but under the debuff of that psychological suggestion, their efficiency ends up even lower than if they had never looked for commonalities at all.

In this way, the knowledge that was never reduced to a commonality may become exactly the content that those searching for commonalities must spend enormous time to conquer.

In that case, looking for commonalities in knowledge did not help them at all.

Instead, it became a drag on their studies: a great deal of effort for nothing.

Rather than let that happen, these people simply gave up looking for commonalities altogether.

Treat every piece of knowledge the same, and at least you will not be undone by your own cleverness.

Machine learning scholars perhaps gave up searching for commonalities in training data because they ran into the same dilemma these learners faced.

At least in Eve Carly's own case, that was the reason.

Even now, knowing that Lin Hui had introduced a pre-training method into model training, Eve Carly still did not know exactly how Lin Hui had done it.

