Big Model Daily on November 3


[Big Model Daily on November 3] Gen-2 upends AI-generated video: a one-sentence prompt yields a 4K, high-definition blockbuster, and netizens say it completely changes the rules of the game; Midjourney's big update opens a new custom-style feature; the first products from Musk's 𝕏AI are revealed, with a "Prompt Word Workstation" on the way and Musk himself showing up in the comments; 600 million workers get new gear overnight as DingTalk AI opens fully for testing.


Gen-2 upends AI-generated video! A one-sentence prompt yields a 4K, high-definition blockbuster. Netizens: it completely changes the rules of the game.

https://news.miracleplus.com/share_link/11404

This is a milestone for generative AI. Late last night, Runway's flagship AI video generation tool Gen-2 received an epic, "iPhone moment"-level update: the input is still a simple sentence, but the output video now reaches 4K-level, ultra-realistic quality. The results visibly overcome the telltale flaws of earlier "obviously AI" videos, namely incoherence, flickering distortion, and low definition. And that is exactly the focus of this update: it brings significant improvements to the fidelity and consistency of results in both text-to-video and image-to-video generation.


Midjourney big update: new custom style features available!

https://news.miracleplus.com/share_link/11405

Midjourney's major update is once again being hailed as "game-changing": the new Style Tuner feature lets users customize image styles. A customized style is compressed into a single short code; paste that code at the end of a prompt and the generated images keep the customized style. Custom style codes can also be shared, and some netizens believe style sharing could eventually replace prompt sharing.


The first products from Musk's 𝕏AI revealed! A "Prompt Word Workstation" is coming, and Musk himself shows up in the comments

https://news.miracleplus.com/share_link/11406

Information about the first products from Musk's 𝕏AI has surfaced: Grok, related to AI information retrieval, and PromptIDE, literally a "prompt workstation / integrated development environment". The news came from the Twitter user @Asuna Gilfoyle – e/acc, who discovered that 𝕏AI had filed trademark applications for the two names. The report looks credible, as Musk himself appeared in the comments without denying it. Nothing more is known about the two products; we can only guess from the trademark descriptions in the filings. Grok and PromptIDE share the same trademark number, so their descriptions are identical. They will be used for:

– Providing non-downloadable online software for use in processing, generating, understanding, and analyzing information;

– R&D services in the field of AI;

– Research, design and development of computer programs and software;

– Providing a website featuring information in the field of AI;

– Using global computer networks to extract and retrieve information and conduct data mining;

– Creating indexes of information related to global computer networks.


600 million workers get new gear overnight! DingTalk AI is fully open for testing

https://news.miracleplus.com/share_link/11407

Since DingTalk's "Magic Wand" entered beta, more than 500,000 companies have used it to bring AI into their work. Today, DingTalk AI Magic Wand officially launches: 17 products and more than 60 scenarios, including DingTalk chat, documents, knowledge bases, mind maps, flash notes, and Teambition, are fully open for testing. All users can access DingTalk AI through the "Magic Wand" entry in the upper-right corner of the homepage, or by clicking the magic-wand button on each product page.


More than 370 people, including LeCun and Andrew Ng, sign a joint letter: strict control of AI is dangerous, and openness is the antidote.

https://news.miracleplus.com/share_link/11408

In recent days, the debate over how to regulate AI has grown increasingly heated, and leading figures disagree sharply. The three Turing Award winners Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, for instance, are split into two camps. Hinton and Bengio are on one side: they strongly call for tighter regulation of AI, warning that otherwise it could lead to the risk of "AI wiping out humanity." LeCun disagrees. He argues that heavy-handed regulation of AI will inevitably produce a monopoly of the giants, leaving AI research and development controlled by only a few companies. To make their positions known, many have turned to open letters. In the past few days, Bengio, Hinton, and others issued the joint letter "Managing AI Risks in an Era of Rapid Progress," calling for urgent governance measures before more powerful AI systems are developed. Meanwhile, an open letter titled "Joint Statement on AI Safety and Openness" is gaining momentum on social media.


Coding ability surpasses GPT-4: this model tops the Big Code leaderboard and wins praise from YC's founder.

https://news.miracleplus.com/share_link/11409

A model claiming coding ability beyond GPT-4 has caught the attention of many netizens: its accuracy is reportedly more than 10% higher than GPT-4's, its speed is close to GPT-3.5's, and its context window is longer. According to the developers, the model achieves a 74.7% Pass@1 rate, exceeding the original GPT-4's 67%, and tops the Big Code leaderboard. The model is called Phind, the same name as the developer-focused AI search tool built on it, and was fine-tuned by the team from CodeLlama-34B. Running with TensorRT-LLM on an H100, Phind generates about 100 tokens per second, roughly 5 times faster than GPT-4. Its context length is 16k tokens, of which 12k is available for user input and the remaining 4k is reserved for text from search results.
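For readers who want to try a CodeLlama-based fine-tune like this, here is a minimal sketch using Hugging Face transformers. It is not the Phind team's own serving stack (the article says they use TensorRT-LLM); the model identifier below is an assumption, so substitute whatever checkpoint the team actually publishes, and note that a 34B model needs substantial GPU memory.

```python
# Minimal sketch (not Phind's production setup): querying a CodeLlama-style
# fine-tune with Hugging Face transformers. MODEL_ID is an assumed identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Phind/Phind-CodeLlama-34B-v2"  # assumption; replace with the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```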


A Peking University team cracks algorithm optimizations that give ChatGPT a headache, and it runs on an ordinary laptop

https://news.miracleplus.com/share_link/11410

Algorithm optimizations that even ChatGPT shakes its head at have been solved by a Peking University team. Tests show the new work can solve 90% of the problems in its validation set, including divide-and-conquer and dynamic-programming problems from NOIP, Codeforces, LeetCode, and other competitions, problems that many large models struggle with. And it runs on an ordinary laptop. Algorithm optimization remains a blind spot for large models and for AI as a whole: even DeepMind's AlphaTensor, published in Nature, shook the field of program synthesis, yet practitioners found its practical impact "still not enough." So how can algorithm optimization be made faster and better in a field AI has yet to conquer? The Peking University team combined program calculation with program enumeration to build two algorithm-optimization tools: one handles optimizations such as divide and conquer, parallelization, incremental computation, and segment trees, while the other supports the optimization of dynamic-programming algorithms.
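To make "incremental computation" concrete, here is a toy illustration, not taken from the Peking University tools, of the kind of rewrite such an optimizer targets: replacing a window sum that is recomputed from scratch with one that is maintained incrementally.

```python
# Toy illustration (not from the paper's tools): an incremental-computation
# rewrite that turns an O(n*k) sliding-window sum into an O(n) one.

def window_sums_naive(xs, k):
    # Recomputes every window sum from scratch: O(n * k)
    return [sum(xs[i:i + k]) for i in range(len(xs) - k + 1)]

def window_sums_incremental(xs, k):
    # Updates the previous sum with one subtraction and one addition: O(n)
    s = sum(xs[:k])
    out = [s]
    for i in range(k, len(xs)):
        s += xs[i] - xs[i - k]
        out.append(s)
    return out

xs = [3, 1, 4, 1, 5, 9, 2, 6]
assert window_sums_naive(xs, 3) == window_sums_incremental(xs, 3)
```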


The Beatles have just released their "last" new song, with AI behind its production

https://news.miracleplus.com/share_link/11411

Anyone familiar with music will know The Beatles, widely regarded as the greatest and most influential rock band in history and the starting point of modern rock. The band formed in 1960 with John Lennon, Ringo Starr, Paul McCartney, and George Harrison. In 1963 they released their debut album Please Please Me, and 1969's Abbey Road is considered their finest work. In 1970 the band announced its breakup, and two members, John Lennon and George Harrison, passed away in 1980 and 2001 respectively. Now "Now and Then" is finally available to Beatles fans around the world, and AI played an important role in its creation: Ringo Starr and Paul McCartney used machine learning and other AI techniques to isolate John Lennon's vocals from his original demo recordings and piece together the final track.


Letting LLMs learn from their "wrong answers" significantly improves reasoning ability

https://news.miracleplus.com/share_link/11412

Recently, large language models have made significant progress on various NLP tasks, especially on mathematical problems requiring complex chain-of-thought (CoT) reasoning. On challenging math benchmarks such as GSM8K and MATH, proprietary models including GPT-4 and PaLM-2 have achieved remarkable results, while open-source large models still have considerable room for improvement. To further improve the CoT reasoning of open-source models on mathematical tasks, a common approach is to fine-tune them on annotated or generated question-rationale pairs (CoT data) that directly teach the model how to perform CoT reasoning on these tasks. In a recent paper, researchers from Xi'an Jiaotong University, Microsoft, and Peking University explore a complementary idea: can reasoning ability be further improved through the reverse process, that is, by learning from the mistakes the LLM itself makes? Like a student who has just begun studying mathematics, a model first learns from the knowledge points and worked examples in the textbook, but practice matters too: after failing to solve a problem, the student works out what went wrong and how to correct it, building a "mistake notebook," and it is through learning from errors that reasoning skills improve further. Inspired by this process, the work explores how an LLM's reasoning can benefit from understanding and correcting its errors.
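As a rough sketch of what "learning from mistakes" data might look like, the snippet below assembles a single mistake-correction fine-tuning record. This is not the paper's actual pipeline, and all field names and the helper function are hypothetical.

```python
# Minimal sketch (not the paper's pipeline): packing a model's wrong answer,
# an error explanation, and the corrected solution into one fine-tuning record.
# All names here are hypothetical.

def build_correction_example(question, wrong_solution, error_explanation, correct_solution):
    """Assemble one mistake-correction pair as an instruction-tuning record."""
    prompt = (
        f"Question: {question}\n"
        f"A previous attempt answered:\n{wrong_solution}\n"
        "Explain what went wrong and give a corrected solution."
    )
    target = f"Error analysis: {error_explanation}\nCorrected solution: {correct_solution}"
    return {"prompt": prompt, "completion": target}

example = build_correction_example(
    question="What is 17 * 24?",
    wrong_solution="17 * 24 = 17 * 20 + 4 = 344",
    error_explanation="The second term should be 17 * 4 = 68, not 4.",
    correct_solution="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408",
)
print(example["prompt"])
```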


28 countries including China issue the "Bletchley Declaration," encouraging the safe development of AI

https://news.miracleplus.com/share_link/11413

On November 1, UK time, 28 countries including China, the United States, the United Kingdom, France, and Germany, together with the European Union, signed the first global statement on artificial intelligence (AI), the "Bletchley Declaration," at Bletchley Park in the United Kingdom. The declaration highlights the enormous opportunities AI offers human society, but stresses that AI must be designed and used in a human-centered, trustworthy, and responsible manner if it is to benefit all mankind. It points in particular to the risks posed by "frontier" AI, such as the large language models behind ChatGPT, Bard, and Midjourney, as well as narrow AI with "super" capabilities; the capabilities of such systems are hard to predict and may be misused or get out of control. The declaration therefore calls on the international community to work together within existing international forums to formulate policies and regulations, improve transparency and accountability, and strengthen scientific research and risk assessment on this type of frontier AI, so that AI can be developed and applied in a safe, sound, and trustworthy manner.


Large models land on mobile phones, raising the curtain on AI changing the world

https://news.miracleplus.com/share_link/11414

Opening the camera app, a demonstrator takes a photo of the scenery in front of him, then finds the picture in the photo album and selects the "Expand" function: the parts of the scene that were outside the original frame are magically generated around its edges. Nearby, another phone appears to be taking selfies of visitors; step into its front camera's view and you will find the background behind the subject being replaced in real time, and even as the person keeps moving, the virtual background never breaks, with barely any perceptible lag. Look a little closer and you will notice that every demo phone is in airplane mode, meaning all of these complex functions are running on the phone's local chip. This is the demo hall of the 2023 Qualcomm Snapdragon Summit, and the AI features shown on these phones all rely on the on-device computing power of the Snapdragon 8 Gen 3 (third-generation Snapdragon 8) processor just unveiled at the conference. Riding the wave of large AI models, Qualcomm released two new products at this year's event, the Snapdragon X Elite and the Snapdragon 8 Gen 3, which enable large models with tens of billions of parameters to run locally on PCs and smartphones respectively, turning the seemingly magical capabilities of generative AI into a "built-in feature" of mobile devices. Alongside "cloud AI," rapidly advancing chips have made "on-device AI" a reality, and the era of "hybrid AI," in which the two work together, may have arrived.
