Big Model Daily on October 20



[Big Model Daily on October 20] OpenAI finally opens up: the DALL-E 3 paper is released and the model launches in ChatGPT, with half of the authors Chinese; AI PC: a once-in-a-generation industry savior, with Intel expecting tens of millions of units next year.


OpenAI finally opens up: DALL-E 3 paper released and model launched in ChatGPT, with half of the authors Chinese

 

Link: https://news.miracleplus.com/share_link/11055

The release of the DALL-E 3 model caused quite a stir and once again cemented OpenAI’s image as a technology leader. Everyone wondered how such striking results were achieved, but, as with the earlier GPT-4 release, OpenAI disclosed no technical details at the time. A month later, however, OpenAI delivered a surprise: a 22-page paper describing the improvements behind DALL-E 3. Key points include: the gains in model capability come mainly from detailed image captioning; the team trained an image-captioning model to generate both short and detailed descriptions; they used the T5 text encoder; they used GPT-4 to expand the short prompts written by users; they trained a U-Net decoder and distilled it down to 2 denoising steps; and text rendering remains unreliable, which they attribute to the model’s difficulty in mapping word tokens onto images of letters. Alongside the paper, OpenAI announced important news: DALL-E 3 is now officially live in ChatGPT, available to both Plus and Enterprise users.
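The captioning improvement can be pictured as a data-blending step: the paper reports training on a mixture that heavily favors the synthetic detailed captions over the originals. A minimal sketch, assuming a list of (short, detailed) caption pairs; the function name and the 95% ratio shown as a default are illustrative, not OpenAI’s code:

```python
import random

def mix_captions(samples, detailed_ratio=0.95, seed=0):
    """For each (short_caption, detailed_caption) pair, pick the synthetic
    detailed caption with probability `detailed_ratio`, otherwise keep the
    original short caption, so the model still sees short prompts."""
    rng = random.Random(seed)  # seeded for reproducible mixing
    chosen = []
    for short, detailed in samples:
        chosen.append(detailed if rng.random() < detailed_ratio else short)
    return chosen
```

At inference time the matching trick is the GPT-4 prompt expansion mentioned above: short user prompts are rewritten into the same detailed style the model was trained on.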


Vivo “leaks” large model capabilities in advance

 

Link: https://news.miracleplus.com/share_link/11056

Is the most popular Android phone line getting a large model? And not vaporware: the new version of the phone OS ships with it built in, usable the moment it is released. Although domestic large models are proliferating, the “battle of a hundred models” on phones has only just entered its warm-up stage: many phone makers have built their own large models, but few have actually shipped them inside a phone OS. Rumors circulated recently that vivo had quietly built a large AI model for phones; now the other shoe has dropped, with an official unveiling set for November 1 at the vivo Developer Conference, where the new OriginOS 4 will debut. From what vivo’s lead has revealed, three points stand out: five large models will be released at once; the models will be embedded in phones and usable as assistants for tasks such as drawing; and the 7-billion-parameter version will be opened to the industry.


Thrive is buying OpenAI shares at an $80 billion valuation; OpenAI internally believes it could become an operating system, and the team has grown to 700 people

 

Link: https://news.miracleplus.com/share_link/11057

Venture capital firm Thrive Capital is leading a tender offer to buy shares from OpenAI employees at a valuation of at least $80 billion, The Information reported today. That is roughly three times the valuation of a similar deal six months ago: in April, OpenAI sold employee shares to Thrive and other investors at a $27 billion valuation. The new valuation exceeds 60 times OpenAI’s annual revenue, making it one of the most highly valued private companies today. Investors involved expected OpenAI to use the new valuation to set a floor price for new financing; Sam Altman has said privately that he hopes the company will raise funds, but the new valuation has displeased some potential investors. After generating $28 million in revenue last year, Altman told employees the company is now at $1.3 billion in annualized revenue from paid ChatGPT subscriptions, from selling access to GPT-4, the large language model that drives ChatGPT, and from the computing power that supports it. The extent of OpenAI’s losses is unknown, but its workforce has more than doubled since the end of last year to at least 700 people.


AI PC: a once-in-a-generation industry savior! Intel expects tens of millions of units next year

 

Link: https://news.miracleplus.com/share_link/11058

NVIDIA CEO Jensen Huang has been hailed as the “Godfather of AI”, and generative AI (AIGC) has upended how countless people work and live. But Huang’s ambition goes beyond selling more GPUs: he has a far-sighted view of industry shifts, especially of what AI means for the PC industry. Many may not have noticed that in his May commencement speech at National Taiwan University, Huang said: “The PC industry is ushering in an opportunity for rebirth. In the next 10 years, new AI PCs will replace traditional PCs, a market worth trillions of dollars.” As the saying goes, great minds think alike: Intel and AMD are both racing to build AI PCs. Intel CEO Pat Gelsinger put it bluntly: “We believe AI PCs are the future. This is a key inflection point for the PC market in years, and every one of our products will integrate AI.” He predicts that by 2024 there will be tens of millions of new AI-enabled PCs on the market.


In an era of restricted RTX 4090s, a more efficient way to run RLHF on large models has arrived

 

Link: https://news.miracleplus.com/share_link/11059

This year, large language models (LLMs) led by ChatGPT have shone everywhere, sharply increasing demand in academia and industry for computing resources such as GPUs. For example, supervised fine-tuning (SFT) of a Llama2-7B model requires more than 80 GB of memory, and even that is often not enough: to align with humans, large language models must also undergo RLHF (reinforcement learning from human feedback) training, whose GPU consumption is often more than 2x that of SFT and whose training time can be more than 6x. Recently, the U.S. government announced restrictions on NVIDIA GPUs such as the H100 and H800 entering the Chinese market, adding considerable friction to China’s development of LLMs and AI, so reducing the training cost of RLHF (GPU consumption and training time) matters greatly for LLM development. The paper introduces a new algorithm called ReMax, designed for RLHF. ReMax surpasses the most commonly used algorithm, PPO, in computational efficiency (roughly 50% less GPU memory and 2x faster training) and implementation simplicity (about 6 lines of code), with no loss in performance.
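The claimed simplicity comes from ReMax being, at its core, a REINFORCE-style update that uses the reward of the greedy decode as a per-prompt baseline, removing PPO’s learned value model and its memory cost. A minimal numeric sketch of that loss, assuming per-response summed log-probabilities and scalar rewards (list-based for clarity; a real implementation operates on tensors):

```python
def remax_loss(logprobs_sampled, reward_sampled, reward_greedy):
    """ReMax-style policy-gradient loss (sketch).
    logprobs_sampled: summed token log-probs of each sampled response
    reward_sampled:   reward-model score of each sampled response
    reward_greedy:    reward of the greedy decode, used as the baseline
    """
    # advantage = sampled reward minus the greedy baseline (no value network)
    advantages = [rs - rg for rs, rg in zip(reward_sampled, reward_greedy)]
    # REINFORCE: maximize advantage-weighted log-prob => minimize the negative
    losses = [-a * lp for a, lp in zip(advantages, logprobs_sampled)]
    return sum(losses) / len(losses)
```

The baseline costs one extra greedy generation per prompt instead of a full value model, which is where the reported memory savings over PPO come from.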


Fudan University and Huawei Noah’s Ark propose the VidRD framework for iterative high-quality video generation

 

Link: https://news.miracleplus.com/share_link/11060

Researchers from Fudan University and Huawei’s Noah’s Ark Lab have proposed VidRD (Reuse and Diffuse), an iterative solution for generating high-quality videos built on a latent diffusion model (LDM). The approach aims to break through on both the quality and the length of generated videos, achieving high-quality, controllable long-sequence video generation. It effectively reduces jitter between generated frames, has strong research and practical value, and contributes to the currently hot AIGC community. The “Reuse and Diffuse” framework generates additional video frames after the small number of frames the LDM has already produced, iteratively producing longer, higher-quality, and more diverse video content.
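The iterative "reuse" loop can be sketched as follows. All names here are hypothetical stand-ins, and the real framework reuses latents and noise from the diffusion process rather than raw frames, but the control flow is the same: condition each new clip on the tail of the video so far.

```python
def iterative_extend(generate_clip, num_iters, context_frames=8):
    """Sketch of a Reuse-and-Diffuse-style loop.
    generate_clip(context) -> list of new frames conditioned on `context`
    (an empty context means the initial text-only generation)."""
    video = generate_clip([])                 # initial short clip
    for _ in range(num_iters):
        context = video[-context_frames:]     # reuse the tail as conditioning
        video.extend(generate_clip(context))  # diffuse the next chunk
    return video
```

Conditioning each chunk on the previous frames is what keeps the extended sequence temporally coherent, addressing the inter-frame jitter mentioned above.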


Is prompt engineering dead? MIT and Stanford let large models proactively ask questions to figure out what you want

 

Link: https://news.miracleplus.com/share_link/11061

Specifically, the study proposes a new learning framework, GATE (Generative Active Task Elicitation), which uses the large model’s own capabilities to elicit and reason about human users’ preferences. The research team calls this a more proactive approach: the large model asks the user questions so that human preferences can be expressed more clearly. By contrast, supervised learning and prompt engineering are both passive methods, and supervised learning and small amounts of active learning are example-based. Why flip the roles and have the model question the human? Because human-written prompts have limitations and may not accurately or completely express preferences: many people do not understand how to prompt, or provide misleading information while prompting, all of which degrade large-model performance. The paper gives an example: suppose a user says he likes reading tennis articles and is interested in tennis tours and serving technique. From the article references he provided alone, it is impossible to tell whether he is interested in other tennis-related topics.
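The question-asking loop described above can be sketched as follows. All names are hypothetical stand-ins for illustration; in the paper the question generator is the LLM itself, and the stopping rule can be adaptive rather than a fixed count:

```python
def elicit_preferences(ask_model, user_answer, num_questions=3):
    """GATE-style elicitation sketch: the model proposes a clarifying
    question given the transcript so far, the user answers in free text,
    and the growing transcript becomes the preference specification."""
    transcript = []
    for _ in range(num_questions):
        question = ask_model(transcript)   # model generates the next question
        answer = user_answer(question)     # human replies
        transcript.append((question, answer))
    return transcript
```

The resulting transcript then conditions downstream predictions, in place of a single user-written prompt.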


A Tsinghua-affiliated large model company valued in the tens of billions of yuan reveals it has raised 2.5 billion yuan this year!

 

Link: https://news.miracleplus.com/share_link/11062

Large model startup Zhipu AI has revealed that it has completed more than 2.5 billion yuan in financing, a figure already higher than the current valuation of many AI startups. More importantly, that 2.5 billion yuan is just what was “accumulated this year”: before this year, Zhipu AI had already raised (at least) three rounds. Little wonder there was market chatter in September that, as one of the leading domestic large model companies, Zhipu AI was already valued at around 12 billion yuan. According to Zhipu’s official statement, this year’s financing will fund further R&D on its large base model. The long list of investors disclosed this time includes established internet companies such as Meituan, Ant, Alibaba, Tencent, Xiaomi, and TAL, alongside first-tier VCs such as Legend Capital, Shunwei, Sequoia China, and Hillhouse. Qubit previously learned that Meituan, Alibaba, Ant, and others were betting on Zhipu, all following a “bring business into the deal” route, which boosted Zhipu’s net worth in one stroke.
