Post by account_disabled on Mar 11, 2024 9:38:19 GMT
Wu Dao 2.0 is an artificial intelligence (AI) model with roughly ten times the parameters of GPT-3. Its performance is so impressive that it has people talking about AGI (Artificial General Intelligence). Let's discover it together!

Alessio Pomaro • July 9, 2021 • 6 min read

Wu Dao 2.0: the Chinese AI more powerful than GPT-3

We are living through an incredible period in the history of Artificial Intelligence: from the remarkable results achieved by OpenAI with GPT-3, to the more recent LaMDA and MUM, presented on stage at Google I/O, which promise to revolutionize virtual assistants and search respectively.
During the Beijing Academy of Artificial Intelligence (BAAI) conference, Wu Dao 2.0 was presented: the largest neural network ever created and probably the most powerful. Its potential and limitations have yet to be fully revealed, but expectations are very high.

Wu Dao 2.0: the differences compared to GPT-3

Parameters and data

Wu Dao, meaning "enlightenment", is a language model similar to GPT-3. Jack Clark (OpenAI) calls the trend of replicating GPT-3 "model diffusion"; among all these replicas, Wu Dao 2.0 is the most powerful, with 1.75 trillion parameters (10 times more than GPT-3).
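For context, here is a back-of-the-envelope comparison in Python (not from the original article), using the widely reported figure of 175 billion parameters for GPT-3:

```python
# Rough scale comparison based on the figures quoted above.
gpt3_params = 175e9       # GPT-3: 175 billion parameters (publicly reported)
wudao2_params = 1.75e12   # Wu Dao 2.0: 1.75 trillion parameters
print(f"Wu Dao 2.0 is {wudao2_params / gpt3_params:.0f}x larger")  # -> 10x larger
```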
Wu Dao 2.0 appears to have been trained on 4.9 TB of data (high-quality text and images), which dwarfs GPT-3's training dataset (570 GB). However, it's worth noting that OpenAI researchers filtered through 45 TB of data to extract those 570 GB. The training data is divided into 1.2 TB of Chinese text data, 2.5 TB of Chinese graphics data, and 1.2 TB of English text data.

Multimodality

Wu Dao 2.0 is multimodal: it can learn from text and images, and tackle tasks that involve both types of data (which GPT-3 cannot do). This is the direction the field has been moving in recent years. Computer vision and natural language processing, traditionally the two large "branches" of deep learning, are expected to eventually be combined in every AI system. The world is multimodal and humans are multisensory, so it is reasonable to build AI that imitates this.
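To make the idea of multimodality concrete, here is a minimal, purely illustrative PyTorch sketch (not Wu Dao's actual architecture; all module names and sizes are invented for the example) of a model whose forward pass consumes both token ids and pixels, fusing a text encoder and an image encoder into one shared representation:

```python
# Hypothetical sketch of a multimodal model: text and image inputs are
# encoded separately, then fused into a single joint representation.
import torch
import torch.nn as nn

class TinyMultimodalModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.text_encoder = nn.Embedding(vocab_size, d_model)   # token ids -> vectors
        self.image_encoder = nn.Linear(3 * 32 * 32, d_model)    # flattened 32x32 RGB -> vector
        self.fusion = nn.Linear(2 * d_model, d_model)           # joint text+image representation

    def forward(self, token_ids, image):
        text_repr = self.text_encoder(token_ids).mean(dim=1)    # average-pool token embeddings
        image_repr = self.image_encoder(image.flatten(start_dim=1))
        return self.fusion(torch.cat([text_repr, image_repr], dim=-1))

model = TinyMultimodalModel()
tokens = torch.randint(0, 1000, (1, 8))   # a batch with 8 token ids
image = torch.rand(1, 3, 32, 32)          # a batch with one 32x32 RGB image
print(model(tokens, image).shape)         # torch.Size([1, 64])
```

The point is simply that a single forward pass here accepts both tokens and pixels, whereas a text-only model like GPT-3 accepts only tokens.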