Big Model Daily, February 20


[Big Model Daily, February 20] While Sora sets off an explosion in video generation, Meta starts using agents to automatically edit videos, in work led by Chinese authors; Iambic, NVIDIA, and Caltech develop a multi-scale deep generative model for state-specific protein-ligand complex structure prediction; 10x faster than NVIDIA GPUs: a chip built for large models becomes famous overnight, from a startup founded by Google's TPU team; Samsung Electronics is reported to have established a new team in Silicon Valley to develop general artificial intelligence chips


While Sora sets off an explosion in video generation, Meta starts using agents to automatically edit videos, in work led by Chinese authors

https://news.miracleplus.com/share_link/18786

The field of AI video has been extremely lively in recent days, and Sora, the video generation model launched by OpenAI, has drawn most of the attention. In video editing, AI, and especially agents powered by large models, has also begun to show its strengths. When natural language is used to handle video editing tasks, users can express their intent directly, without tedious manual intervention. For now, however, most video editing tools still rely heavily on manual operation and rarely offer contextual help, leaving users to work through complex editing problems on their own. The key question is: how do we design a video editing tool that acts as a collaborator and continuously assists users throughout the editing process? In this paper, researchers from the University of Toronto, Meta (Reality Labs Research), and the University of California, San Diego propose using the versatile language capabilities of large language models (LLMs) for video editing and explore a future video editing paradigm that reduces the frustration of the manual editing process.


Large multi-view Gaussian model LGM: high-quality 3D objects in 5 seconds, with a demo available to try

https://news.miracleplus.com/share_link/18787

In this article, researchers from Peking University, Nanyang Technological University's S-Lab, and the Shanghai Artificial Intelligence Laboratory propose a new framework, LGM (Large Gaussian Model), which can generate high-resolution, high-quality three-dimensional objects from a single-view image or a text prompt in just 5 seconds. Both the code and the model weights are open source, and the researchers also provide an online demo for anyone to try.


What is speculative decoding, which GPT-4 may also be using? A survey of its past, present, and applications

https://news.miracleplus.com/share_link/18788

Speculative decoding is a large-model inference acceleration method proposed by Google and other organizations in 2022. It can deliver a speedup of more than 3x with no loss in generation quality, and the leaked GPT-4 report mentioned that OpenAI also uses it for online model inference. To explain this remarkable method, the Hong Kong Polytechnic University, Peking University, MSRA, and Alibaba jointly published a survey on speculative decoding to help readers understand its past, present, and applications. It is worth reading.
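
To make the draft-then-verify idea concrete, below is a minimal, self-contained sketch of speculative decoding's core loop, using the standard acceptance rule from the original speculative sampling papers. The `draft_dist` and `target_dist` functions are hypothetical toy stand-ins for a small draft model and the large target model; they are not taken from the survey.

```python
import numpy as np

VOCAB = 8   # toy vocabulary size
GAMMA = 4   # number of tokens the draft model proposes per verification step

def _softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def draft_dist(prefix):
    # Hypothetical cheap "draft model": a deterministic toy distribution q(. | prefix).
    seed = hash(("draft", tuple(prefix))) % (2**32)
    return _softmax(np.random.default_rng(seed).normal(size=VOCAB))

def target_dist(prefix):
    # Hypothetical expensive "target model": a deterministic toy distribution p(. | prefix).
    seed = hash(("target", tuple(prefix))) % (2**32)
    return _softmax(np.random.default_rng(seed).normal(size=VOCAB))

def speculative_step(prefix, rng):
    """One draft-then-verify step; returns the new tokens to append to the prefix."""
    # 1) Draft model proposes GAMMA tokens autoregressively (cheap).
    drafted, q_probs, ctx = [], [], list(prefix)
    for _ in range(GAMMA):
        q = draft_dist(ctx)
        tok = rng.choice(VOCAB, p=q)
        drafted.append(tok)
        q_probs.append(q)
        ctx.append(tok)

    # 2) Target model verifies each proposal (a single batched pass in practice).
    out, ctx = [], list(prefix)
    for tok, q in zip(drafted, q_probs):
        p = target_dist(ctx)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            out.append(tok)                      # accepted: provably a sample from p
            ctx.append(tok)
        else:
            residual = np.maximum(p - q, 0.0)    # rejected: resample from max(0, p - q)
            out.append(rng.choice(VOCAB, p=residual / residual.sum()))
            return out
    # 3) All proposals accepted: take one bonus token from the target model.
    out.append(rng.choice(VOCAB, p=target_dist(ctx)))
    return out

rng = np.random.default_rng(0)
tokens = [0]
while len(tokens) < 20:
    tokens += speculative_step(tokens, rng)
print(tokens)
```

Because every accepted (or resampled) token is distributed exactly according to the target model, the output quality matches ordinary decoding, while the expensive model is only consulted in batched verification passes, which is where the reported speedups come from.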


Iambic, NVIDIA, and Caltech develop multi-scale deep generative model for state-specific protein-ligand complex structure prediction

https://news.miracleplus.com/share_link/18789

Binding complexes formed by proteins and small-molecule ligands are ubiquitous and critical to life. Although scientists have recently made great progress in protein structure prediction, existing algorithms cannot systematically predict the structures of bound ligands or their regulatory effects on protein folding. To close this gap, researchers from AI drug-discovery company Iambic Therapeutics, NVIDIA, and the California Institute of Technology proposed NeuralPLexer, a computational method that directly predicts protein-ligand complex structures using only protein sequences and ligand molecular graphs as input. NeuralPLexer uses a deep generative model to sample the three-dimensional structures of bound complexes and their conformational changes at atomic resolution. The model is built on a diffusion process that incorporates fundamental biophysical constraints and a multi-scale geometric deep learning system, iteratively sampling residue-level contact maps and all heavy-atom coordinates in a hierarchical manner. NeuralPLexer's predictions are consistent with structure-determination experiments on important targets in enzyme engineering and drug discovery, and the method has great potential to accelerate the design of functional proteins and small molecules at proteome scale.
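
For readers unfamiliar with diffusion-based structure sampling, the sketch below illustrates the general coarse-to-fine idea described above (sample a residue-level contact map first, then sample atom coordinates conditioned on it) with a plain DDPM ancestral-sampling loop. The denoisers, tensor shapes, and noise schedule are hypothetical placeholders; this is not NeuralPLexer's actual architecture or code.

```python
import torch

T = 50                                   # number of diffusion steps (toy setting)
betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def ddpm_sample(denoiser, shape, cond=None):
    """Generic DDPM ancestral sampling loop (not the paper's actual sampler)."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps = denoiser(x, t, cond)       # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
    return x

# Hypothetical stand-in denoisers: the real model uses a constrained geometric
# deep-learning network; here they simply predict zero noise for illustration.
contact_denoiser = lambda x, t, cond: torch.zeros_like(x)
coord_denoiser = lambda x, t, cond: torch.zeros_like(x)

n_res, n_atoms = 128, 900                # toy protein/ligand sizes
# Stage 1 (coarse): sample a residue-level contact map.
contact_map = ddpm_sample(contact_denoiser, (n_res, n_res))
# Stage 2 (fine): sample all heavy-atom coordinates conditioned on the contact map.
coords = ddpm_sample(coord_denoiser, (n_atoms, 3), cond=contact_map)
print(contact_map.shape, coords.shape)
```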


10x faster than NVIDIA GPUs: a chip built for large models becomes famous overnight, from a startup founded by Google's TPU team

https://news.miracleplus.com/share_link/18790

We used to assume that once a large model reaches the hundreds-of-billions-of-parameters scale of GPT-3.5, the compute needed for training and inference is no longer affordable for ordinary startups, and inference for users is painfully slow. As of this week, that notion is history. A startup called Groq has developed a machine-learning processor that is claimed to thoroughly beat GPUs on large language model tasks: 10 times faster than NVIDIA's GPUs, at only a tenth of the cost and a tenth of the electricity.


Musk: Neuralink's first human subject has recovered and can control a computer mouse with his thoughts

https://news.miracleplus.com/share_link/18791

According to media reports, Tesla CEO Elon Musk revealed on the social media platform X that Neuralink's first human implant recipient has fully recovered and can now control a computer mouse with his thoughts. Neuralink previously conducted chip-implantation experiments on monkeys and received approval from the U.S. Food and Drug Administration to officially begin the first clinical trial of its brain implant device.


Sources say social platform X (formerly Twitter) is in talks with Midjourney about a potential partnership

https://news.miracleplus.com/share_link/18792

Twitter, which recently changed its name to X, is reportedly discussing a potential partnership with the artificial intelligence image generation platform Midjourney. The news was reported by the account DogeDesigner on X. Midjourney's AI art generation platform allows users to create unique images from text prompts.


Latest interview with Figma CEO: Figma has never been just a design tool; from the beginning it has been about bridging the gap between imagination and reality

https://news.miracleplus.com/share_link/18793

Figma co-founder and CEO Dylan Field recently gave an interview to The Verge. Dylan discussed the possibility of expanding Figma into the broader field of productivity software. He does not think Figma will enter the note-taking space, but the company hopes to explore more of the value chain around designing, coding, publishing, and measuring software, likely by extending these capabilities through partnerships rather than building them all independently. Dylan also discussed how AI will affect design work. He believes the emergence of AI has lowered the barrier to design, allowing more people to participate, and that AI can improve efficiency and let designers finish more work in less time. AI will not completely replace human designers, because it is still limited in aspects of design work such as emotion, brand experience, and user flows.


AI companies that will stop at nothing to train large models have broken this decades-old Internet protocol

https://news.miracleplus.com/share_link/18794

The emergence of large models has broken a rule the Internet has run on for 30 years: robots.txt, the "mini-constitution of the Internet" written in code, is starting to fail. robots.txt is a text file that every website can use to state whether it wants to be crawled, and for 30 years it is what has kept the Internet from descending into chaos. The rule has worked for so long purely on human logic: you let search engines crawl your site, and in return they send you traffic. It was a handshake agreement made by a few Internet pioneers for the benefit of everyone online. After 30 years, this somewhat naive rule, never written into law and backed by no enforcement authority, is finally running into trouble: more and more AI companies use crawlers to scrape website data, build it into training sets, and train large models and related products, but unlike search engines they give nothing back in traffic and may not even acknowledge the site's existence. Like a meat bun thrown to a dog, the data is gone for good. Many data owners are angry; news publishers and other rights holders keep speaking out, blocking AI crawlers and resisting the free use of their digital assets, while AI players such as Google and OpenAI are also trying to find better rules, since sustainable development is only possible if all parties benefit.
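
As a concrete illustration of how the protocol works, here is a small sketch using Python's standard-library robots.txt parser. The example rules and URL are illustrative rather than taken from the article; GPTBot is OpenAI's published crawler user agent.

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt of the kind many publishers now serve: search engine
# crawlers are allowed, while AI training crawlers such as GPTBot are blocked.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("Googlebot", "GPTBot"):
    allowed = parser.can_fetch(agent, "https://example.com/articles/some-story")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

The catch the article describes is that compliance is entirely voluntary: nothing in the protocol technically stops a crawler from ignoring these rules.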


Samsung Electronics is reported to have established a new team in Silicon Valley to develop general artificial intelligence chips

https://news.miracleplus.com/share_link/18795

According to people familiar with the matter, Samsung Electronics has established a new team in Silicon Valley to develop general artificial intelligence chips. It is reported that former Google developer Woo Dong-hyuk will lead the team.
