Wu Dao 2.0 – Bigger, stronger, faster AI from China

It is no secret that China has contained COVID-19. You will need a two-week hotel quarantine to travel there, but once you are in the country you are safe. Because masking has become part of everyday behavior, the country may be even safer than it was before COVID, and many other viral respiratory infections may be on the decline. So I was quick to respond when I was invited to speak about health care in AI at the BAAI Annual Conference.

BAAI, the Beijing Academy of Artificial Intelligence, is a great platform for demonstrating technology and talent in a wide range of categories. The non-profit organization encourages scientists to tackle hard problems and promote discoveries through AI concepts, tools, systems and applications. BAAI also specializes in long-term research on AI technology.

AI is big in China. Just how big: more than 70,000 people signed up for the event, and many more tuned in to watch the BAAI presentations after the event, which offered the most up-to-date talks, algorithms, systems and applications. However, the real star of BAAI was Wu Dao 2.0, in many ways a bigger system than OpenAI's GPT-3.

The Encyclopædia Britannica defines language as "a system of conventional spoken or written symbols by means of which human beings, as members of a social group and participants in its culture, express themselves." From this definition we can conclude that language is an essential part of human communication. Not only does it allow us to share ideas, thoughts and feelings with each other; language also allows us to create and build communities. In simple words, language makes us human.

According to Gareth Gaskell, a professor of psychology at the University of York, the average 20-year-old knows between 27,000 and 52,000 words. By the age of 60, that number averages between 35,000 and 56,000. When we use words in conversation, the brain therefore has to make rapid decisions about which words to use and in what order. In this sense, the brain acts as a processor that can do many things at once.

Psycholinguists suggest that each word we know is represented by a separate processing unit with a single job: to assess the likelihood that incoming speech matches that particular word. In the brain, such a unit likely corresponds to a pattern of activity across a group of neurons. So when we begin to hear a word, thousands of such units become active, because many words could still match the input.

Most people can understand up to about eight syllables per second. But the goal is not simply to recognize the word; it is to reach the intended meaning. Before a word has been fully heard, the brain weighs many possible meanings. Studies show that when a listener hears a word fragment such as "cap", it begins to register many possible candidates, such as "captain" or "capital".
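A rough computational analogy makes this narrowing-of-candidates idea concrete. The sketch below is my own illustration, not a model from the cited research, and the tiny lexicon is an invented stand-in:

```python
# Toy illustration of cohort-style word recognition: candidates that match
# the speech heard so far stay active, and the set narrows with more input.
# The lexicon here is a made-up stand-in, not data from the cited studies.
lexicon = ["cap", "captain", "capital", "capsule", "cat", "dog"]

def active_cohort(heard_so_far: str) -> list[str]:
    """Return the candidate words still consistent with the partial input."""
    return [w for w in lexicon if w.startswith(heard_so_far)]

for fragment in ("c", "ca", "cap", "capt"):
    print(fragment, "->", active_cohort(fragment))
# "cap" still leaves cap/captain/capital/capsule active; only further
# sounds (or context) settle the intended meaning.
```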

Like much else touched by artificial intelligence in the 21st century, language is also evolving to take on different forms and meanings. Recently, the concept of "language models" has taken center stage in AI. Basically, a language model analyzes bodies of text and determines the probability of a word sequence. That is, language models use statistical and probabilistic techniques to estimate how likely a given sequence of words is. Language models are commonly used in natural language processing applications that generate text as output, including machine translation and question answering.
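To make "estimating the probability of a word sequence" concrete, here is a minimal sketch using a toy bigram model. The corpus and the add-one smoothing are assumptions for illustration only; models like GPT-3 do this with neural networks at vastly larger scale:

```python
# A minimal sketch of what a language model does: score word sequences by
# probability. This toy bigram model is illustrative only; the corpus and
# smoothing choice are assumptions, not anything from Wu Dao or GPT-3.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count single words and adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def sequence_probability(words):
    """P(w1..wn) ~ product of P(w_i | w_{i-1}) under a bigram model."""
    prob = 1.0
    for prev, curr in zip(words, words[1:]):
        # Add-one smoothing so unseen pairs get a small, nonzero probability.
        prob *= (bigrams[(prev, curr)] + 1) / (unigrams[prev] + len(unigrams))
    return prob

print(sequence_probability("the cat sat".split()))  # relatively likely
print(sequence_probability("sat the cat".split()))  # relatively unlikely
```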

In February 2020, Microsoft described its Turing-NLG language model, which it said outperformed the largest previously published models on many language modeling benchmarks. At its release, Turing-NLG had 17 billion parameters and could generate words to complete open-ended text. The model could also provide direct answers to questions and summaries of input documents.

In May of that year, OpenAI launched its own language model, Generative Pre-trained Transformer 3 (GPT-3), a generator that uses deep learning to create human-like text. This third-generation language model in the GPT-n series has 175 billion machine learning parameters. OpenAI researchers released a paper showing that GPT-3 can generate news articles that human reviewers find difficult to distinguish from articles written by humans. The researchers also note that generating 100 pages of content with the model costs only a few cents.
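GPT-3 itself is reachable only through OpenAI's API, so the hedged sketch below uses its openly released predecessor, GPT-2, via the Hugging Face transformers library to show the same kind of autoregressive text generation in miniature; the prompt is invented for illustration:

```python
# GPT-3 is accessed via OpenAI's paid API, so this sketch uses the openly
# released GPT-2 (through the Hugging Face "transformers" library) to show
# the same autoregressive generation idea in miniature.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Scientists in Beijing have unveiled a new language model that",
    max_length=60,            # total length in tokens, prompt included
    num_return_sequences=1,   # how many alternative continuations to sample
)
print(result[0]["generated_text"])
```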

GPT-3 was considered so powerful that OpenAI granted Microsoft alone a license to the language model and its underlying code.

A year later, another language model has surpassed both GPT-3 and Turing-NLG in scale and inventiveness.

This model, called Wu Dao 2.0, was unveiled at the BAAI conference. The work behind Wu Dao 2.0, China's first home-grown super-scale intelligent model, was led by BAAI's vice director of research, Professor Tang Jie of Tsinghua University. More than 100 AI scientists contributed, with support from Peking University, Tsinghua University, Renmin University of China, the Chinese Academy of Sciences and other institutions.

Wu Dao 2.0 is the successor to Wu Dao 1.0, which BAAI unveiled earlier this year. Wu Dao 2.0 is China's biggest and best answer to GPT-3.

First, unlike GPT-3, Wu Dao 2.0 was trained on 4.9 terabytes of data in Chinese and English and can analyze both images and text. Wu Dao 2.0 also has partnership agreements with 22 companies, including smartphone maker Xiaomi and short-video app Kuaishou. The Chinese model was trained with 1.75 trillion parameters, ten times GPT-3's 175 billion.

Wu Dao 2.0 can also write poems in the traditional Chinese style, answer questions, write essays and generate captions for images. In addition, BAAI reports that the language model has reached or surpassed SOTA (state-of-the-art) levels on nine benchmarks. These include:

1- ImageNet (zero-shot): SOTA, surpassing OpenAI CLIP (a zero-shot setup of this kind is sketched just after this list).

2- LAMA (factual and commonsense knowledge): surpassing AutoPrompt.

3- LAMBADA (cloze tasks): surpassing Microsoft Turing-NLG.

4- SuperGLUE (few-shot): SOTA, surpassing OpenAI GPT-3.

5- UC Merced Land Use (zero-shot): SOTA, surpassing OpenAI CLIP.

6- MS COCO (text-to-image generation): surpassing OpenAI DALL·E.

7- MS COCO (English image retrieval): surpassing OpenAI CLIP and Google ALIGN.

8- MS COCO (multilingual image retrieval): surpassing UC² (the best multilingual and multimodal pre-trained model).

9- Multi30K (multilingual image retrieval): surpassing UC².
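Wu Dao 2.0's weights are not public, so as a stand-in, the sketch below shows the kind of zero-shot image classification that benchmarks 1 and 5 measure, using OpenAI's openly released CLIP model through the Hugging Face transformers library; the image path and labels are placeholders:

```python
# Zero-shot classification in the CLIP style: no task-specific training,
# just scoring how well an image matches each candidate text label.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path; any local image works
labels = ["a photo of a cat", "a photo of a dog", "a photo of a building"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # similarity of the image to each label
probs = logits.softmax(dim=1)              # zero-shot class probabilities
print(dict(zip(labels, probs[0].tolist())))
```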

Finally, Wu Dao 2.0 powers Hua Zhibing, China's first virtual student. Hua can learn continuously, draw pictures and compose poems, and may learn coding in the future. This learning ability sets Wu Dao 2.0 well apart from GPT-3.

Exact details of how and on what data Wu Dao 2.0 was actually trained have not yet been released, making it difficult to compare it directly with GPT-3. However, the new language model reflects China's ambitions and advanced research programs. There is no doubt that AI innovation will accelerate in the coming years, and many of these innovations will help move many other industries forward.

Dr. Kai-Fu Lee, one of the most renowned AI experts and investors, who helped build at least 7 AI-enabled unicorns, recently spoke at Hong Kong Science and Technology Park and explained the power of transformers and of fine-tuning pre-trained models such as Wu Dao 2.0. These models are well suited to many industries and many applications such as education, finance, law, entertainment and, most importantly, health care and biomedical research.
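Since Wu Dao 2.0 itself is not downloadable, the sketch below illustrates the fine-tuning workflow Dr. Lee describes on stand-ins: a small open model (DistilBERT) and a public sentiment dataset. All names and hyperparameters here are illustrative assumptions, not anything from the article:

```python
# Generic fine-tuning workflow: take a general pre-trained transformer and
# adapt it to one narrow downstream task (here, movie-review sentiment).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    # Convert raw review text into fixed-length token IDs the model expects.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # A small subsample keeps the demonstration quick.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()  # adapts the general pre-trained model to the narrow task
```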

Applications of transformers in biomedical research can lead to new discoveries that benefit people everywhere. And we sincerely hope that despite trade wars, governments will continue to cooperate in biomedical research.
