Big Model Daily, November 20

News8months agorelease AIWindVane
33 0

Big Model Daily, November 20

[Big Model Daily, November 20] Former Open AI CEO Sam and Greg Brockman joined Microsoft and will lead a new advanced artificial intelligence research team; Musk: The risk of developing advanced AI is very high. OpneAI should announce the reason for firing Ultraman; What are the application prospects of GPT-4V in autonomous driving? A comprehensive evaluation of real-life scenarios is coming


Former Open AI CEOs Sam and Greg Brockman join Microsoft to lead a new advanced artificial intelligence research team

https://news.miracleplus.com/share_link/11844

Microsoft reaffirmed its commitment to working with OpenAI and expressed full confidence in the organization’s product roadmap, innovation capabilities and support for customers and partners. In announcing the news, Microsoft highlighted the innovations announced at the Microsoft Ignite conference. In addition, Microsoft welcomes OpenAI’s new leadership team, and is particularly looking forward to working with Emmett Shear and his team. Most notably, Sam Altman and Greg Brockman will join Microsoft to lead a new advanced artificial intelligence research team. This development is regarded as Microsoft’s major investment and strategic layout in the field of artificial intelligence. Microsoft said in a statement that they will move quickly to provide Altman, Brockman and their teams with the resources they need to succeed. This demonstrates Microsoft’s commitment to advanced AI research beyond financial support to providing necessary technology and talent resources.


Musk: The risk of developing advanced AI is very high. OpneAI should announce the reason for firing Ultraman

https://news.miracleplus.com/share_link/11845

Tesla CEO Elon Musk said OpenAI, currently the world’s most powerful AI company, should explain its dismissal of its CEO because the potential dangers of developing advanced artificial intelligence (AI) technology are so high. CEO Sam Altman’s reasons.


A long article by the person in charge of the OpenAI security system: Adversarial attacks and defenses of large models

https://news.miracleplus.com/share_link/11846

LLM has powerful capabilities. If someone with ulterior motives uses it to do bad things, it may cause unpredictable serious consequences. Although most commercial and open source LLMs have certain built-in security mechanisms, they may not be able to defend against various forms of adversarial attacks. Recently, Lilian Weng, head of the OpenAI Safety Systems team, published a blog article “Adversarial Attacks on LLMs”, sorting out the types of adversarial attacks against LLMs and briefly introducing some defense methods.


What are the application prospects of GPT-4V in autonomous driving? A comprehensive evaluation based on real-life scenarios is here

https://news.miracleplus.com/share_link/11847

The release of GPT-4V has opened up new possibilities for many computer vision (CV) applications. Some researchers are beginning to explore the potential of GPT-4V for practical applications. Recently, a paper titled “On the Road with GPT-4V (ision): Early Explorations of Visual-Language Model on Autonomous Driving” conducted an increasing difficulty test of GPT-4V’s capabilities in autonomous driving scenarios. From understanding to reasoning, to continuous judgment and decision-making as a driver in real scenarios.


Google Bard “breaks defense” and uses natural language to crack, prompting the risk of data leakage caused by injection

https://news.miracleplus.com/share_link/11848

Large language models rely heavily on cue words when generating text. This attack technique can be described as “using the enemy’s spear to attack one’s own shield” for the model learned through prompt words. It is the strongest strength, but it is also a weakness that is difficult to prevent. Prompt words are divided into system instructions and instructions given by users. In natural language, the two are difficult to distinguish. If the user intentionally imitates system commands when entering prompt words, the model may reveal some “secrets” in the conversation that only it knows. There are many forms of prompt injection attacks, mainly direct prompt injection and indirect prompt injection. Direct prompt injection refers to the user inputting malicious instructions directly into the model in an attempt to induce unexpected or harmful behavior. Indirect hint injection refers to an attacker injecting malicious instructions into documents that may be retrieved or ingested by the model, thereby indirectly controlling or guiding the model.


Really realize one-step image creation, Google UFOGen extremely fast sampling, generate high-quality images

https://news.miracleplus.com/share_link/11849

In the past year, a series of Vincentian diffusion models represented by Stable Diffusion have completely changed the field of visual creation. Countless users have improved their productivity with images produced by diffusion models. However, the speed of generation of diffusion models is a common problem. Because the denoising model relies on multi-step denoising to gradually turn the initial Gaussian noise into an image, it requires multiple calculations of the network, resulting in a very slow generation speed. This makes the large-scale Vincentian graph diffusion model very unfriendly to some applications that focus on real-time and interactivity. With the introduction of a series of technologies, the number of steps required to sample from a diffusion model has increased from the initial few hundred steps, to dozens of steps, or even only 4-8 steps. Recently, a research team from Google proposed the UFOGen model, a variant of the diffusion model that can sample extremely quickly. By fine-tuning Stable Diffusion with the method proposed in the paper, UFOGen can generate high-quality images in just one step. At the same time, Stable Diffusion’s downstream applications, such as graph generation and ControlNet, can also be retained.


Germany, France and Italy reach an agreement on AI regulation to oppose excessive restrictions on technological development

https://news.miracleplus.com/share_link/11850

Germany, France and Italy have reached an agreement on how to regulate artificial intelligence, which may become a blueprint for AI regulatory guidelines at the European level; the agreement reached by Germany, France and Italy claims that it will only regulate the application of artificial intelligence rather than the technology itself. Sanctions will be imposed on apps only after they have committed misconduct; Germany, France and Italy both favor binding voluntary commitments on EU AI providers.

© Copyright notes

Related posts

No comments

No comments...