Collection of Big Model Daily on January 26


[Collection of Big Model Daily on January 26] He Kaiming and Xie Saining release new work dissecting the diffusion model; Musk’s AI startup xAI seeks US$6 billion from global investors to fund its challenge to OpenAI; OpenAI officially fixes GPT-4’s “laziness,” releases multiple new models, and sharply cuts prices; Alibaba’s Tongyi Qianwen multimodal large model can compete with GPT-4V; Oracle rides the generative AI wave with comprehensive integration and innovation.


Large model × text watermark: Tsinghua University, The Chinese University of Hong Kong, The Hong Kong University of Science and Technology, UIC, and Beijing University of Posts and Telecommunications jointly release the first survey of text watermarking in the era of large models

 

Link: https://news.miracleplus.com/share_link/16853

This article introduces the first survey of text watermarking in the large model era, jointly released by Tsinghua University, The Chinese University of Hong Kong, The Hong Kong University of Science and Technology, UIC, and Beijing University of Posts and Telecommunications. It comprehensively covers the algorithm categories and designs, evaluation perspectives and metrics, and practical application scenarios of text watermarking technology in the large model era, while also discussing in depth the challenges facing current research, future development directions, and cutting-edge trends in the field of text watermarking.
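
To make the kind of algorithm the survey categorizes more concrete, here is a minimal sketch of one well-known family of text watermarks: the “green/red list” approach, in which generation is biased toward a pseudo-randomly chosen “green” subset of the vocabulary and detection tests whether green tokens are over-represented. This is an illustrative assumption, not the survey’s own method; the toy tokenization, hash seeding, and threshold below are all hypothetical choices.

```python
# Minimal sketch of green-list watermark *detection* (in the spirit of
# green/red-list LLM watermarking). Toy values; not the survey's method.
import hashlib
import math


VOCAB_FRACTION_GREEN = 0.5  # assumed fraction of the vocabulary marked "green" per step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return (digest[0] / 255.0) < VOCAB_FRACTION_GREEN


def detect_watermark(tokens: list[str], z_threshold: float = 4.0) -> tuple[float, bool]:
    """One-proportion z-test: are green tokens over-represented vs. the base rate?"""
    if len(tokens) < 2:
        return 0.0, False
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = VOCAB_FRACTION_GREEN * n
    variance = n * VOCAB_FRACTION_GREEN * (1 - VOCAB_FRACTION_GREEN)
    z = (green - expected) / math.sqrt(variance)
    return z, z > z_threshold


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    z, flagged = detect_watermark(sample)
    print(f"z-score = {z:.2f}, watermark detected: {flagged}")
```

Unwatermarked text should score near zero, while text generated with a sampling bias toward the same green lists produces a large z-score; the survey discusses many variants of this idea along with their robustness and quality trade-offs.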


He Kaiming and Xie Saining dissect the diffusion model in newly released work

 

Link: https://news.miracleplus.com/share_link/16854

CV master He Kaiming has also turned his attention to diffusion models. His latest paper, just posted on arXiv, deconstructs the diffusion model and proposes a highly simplified new architecture, l-DAE (with a lowercase L). By comparing it with He Kaiming’s representative work in visual self-supervised learning, MAE (Masked Autoencoder), the paper offers a clearer view of how diffusion models work internally.
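
The core idea the paper distills diffusion models down to is classical denoising autoencoding: corrupt an input with noise and train a network to recover the clean signal. The sketch below is only a toy illustration of that general recipe, using an assumed tiny MLP on random vectors; it is not the paper’s l-DAE architecture or training setup.

```python
# Toy denoising-autoencoder training step (illustrative only, not l-DAE).
import torch
import torch.nn as nn


class TinyDenoisingAE(nn.Module):
    def __init__(self, dim: int = 64, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x_noisy: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x_noisy))


def train_step(model, optimizer, x_clean, noise_std: float = 0.5) -> float:
    """Add Gaussian noise, then regress back to the clean input."""
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    loss = nn.functional.mse_loss(model(x_noisy), x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyDenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 64)  # stand-in for image latents
    for _ in range(5):
        print(train_step(model, opt, x))
```

The interest of the paper lies in how much of a modern diffusion model can be stripped away while the learned representations stay strong, which is what invites the comparison with MAE.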


David Baker publishes an article in Science: AI-driven protein design must take biosecurity into account

 

Link: https://news.miracleplus.com/share_link/16855

Professor David Baker of the University of Washington and Professor George Church of Harvard Medical School published a commentary titled “Protein design meets biosecurity” in the latest issue of Science. The article discusses the use of artificial intelligence (AI) in computational protein design and the risks the technology may pose. The authors emphasize the importance of AI in improving the capability and accuracy of protein design, as well as the critical role of DNA synthesis in realizing designed proteins. They also propose a security strategy of collecting and storing all synthetic gene sequences and synthesis data to keep protein design safe. The article notes that AI-accelerated protein design can help tackle problems such as global pathogens, neurodegenerative diseases, and ecosystem degradation, and the authors call on all relevant communities to engage, support the required infrastructure, and define the human, institutional, and governance requirements needed to keep this rapidly evolving field safe while advancing it as far as possible against pressing societal problems.


Researchers from the University of Sydney and Hong Kong’s D24H develop a self-supervised learning method for cell segmentation of subcellular spatial transcriptomics data

 

Link: https://news.miracleplus.com/share_link/16856

Recent advances in subcellular-resolution imaging transcriptomics platforms have produced high-resolution spatial maps of gene expression, but they have also created significant analytical challenges in accurately identifying cells and assigning transcripts, and existing methods struggle with cell segmentation. Researchers from the University of Sydney and the Hong Kong Big Data Analysis Laboratory for Healthcare (D24H) proposed BIDCell, a self-supervised deep learning framework with biologically informed loss functions that learns the relationship between spatially resolved gene expression and cell morphology. BIDCell integrates cell-type data, including single-cell transcriptome data from public repositories, together with cell morphology information.


Musk’s artificial intelligence startup xAI is seeking $6 billion from global investors to fund its challenge to OpenAI

 

Link: https://news.miracleplus.com/share_link/16857

Musk, who hopes to raise as much as $6 billion for xAI at a proposed $20 billion valuation, is also eyeing sovereign wealth funds in the Middle East and has already reached out to investors in Japan and South Korea, people familiar with the matter said. Morgan Stanley is currently coordinating the financing, one of the people said.


OpenAI officially fixes GPT-4’s laziness, launches multiple new models, and slashes prices

 

Link: https://news.miracleplus.com/share_link/16858

You may recall that GPT-4 started to become “lazy” toward the end of last year: when using the GPT-4 or ChatGPT API during peak hours, responses became slow and perfunctory, questions were sometimes refused, and conversations were even cut off unilaterally. Programmers know this all too well. One user complained, “I asked ChatGPT to extend some code, and it actually told me to write it myself.” People who wanted ChatGPT to help them write code instead found themselves flatly turned down.


Multimodal large models: Alibaba’s Tongyi Qianwen can compete with GPT-4V

 

Link: https://news.miracleplus.com/share_link/16859

What will the large model field roll out in 2024? If you have no idea, it is worth looking at where the major vendors are placing their bets. OpenAI led with GPT-4V, which gave large models unprecedented image understanding capabilities. Google followed with Gemini, the industry’s first natively multimodal large model, able to generalize across and seamlessly understand, manipulate, and combine different types of information, including text, code, audio, images, and video. Beyond GPT-4V and Gemini, domestic players in this promising direction also deserve attention: a recent major release came from Alibaba, whose newly upgraded Tongyi Qianwen vision-language large model, Qwen-VL-Max, was officially released last week; it has achieved strong results on multiple evaluation benchmarks and delivers powerful image understanding capabilities.


The large model inference cost ranking is here: Jia Yangqing’s company leads in efficiency

 

Link: https://news.miracleplus.com/share_link/16860

As large language model technology gradually becomes practical, more and more technology companies are offering large model APIs for developers. But given earlier reports that OpenAI was “burning $700,000 a day,” there is reason to question whether a business built on large models is sustainable. Recently, the AI company Martin carefully worked out the inference costs of various models. Lepton AI provides the best throughput on light service loads with short-input, long-output prompts; its P50 of 130 tokens/s is the fastest throughput observed across all models from any vendor.
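
For readers unfamiliar with the metric, throughput here means output tokens generated per second of wall-clock time, and P50 is the median over many requests. The snippet below is a minimal sketch of how such a number could be computed; the request measurements in it are fabricated purely for illustration and are not the ranking’s actual data.

```python
# Sketch: per-request throughput (tokens/s) and its median (P50).
# The (output_tokens, seconds) pairs below are made-up example values.
import statistics

requests = [(512, 4.1), (480, 3.6), (530, 4.4), (500, 3.8), (495, 4.0)]

throughputs = [tokens / seconds for tokens, seconds in requests]
p50 = statistics.median(throughputs)

print(f"per-request throughput (tok/s): {[round(t, 1) for t in throughputs]}")
print(f"P50 throughput: {p50:.1f} tok/s")
```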


Why was Mamba’s paper not accepted by ICLR? The AI community is buzzing

 

Link: https://news.miracleplus.com/share_link/16861

In 2023, the Transformer’s dominance over large AI models was shaken. The challenger is a new architecture called “Mamba,” a selective state space model that rivals or even beats the Transformer at language modeling. It scales linearly with context length, maintains its performance on real data up to sequences of millions of tokens, and achieves a 5x improvement in inference throughput. In the month or so since its release, Mamba has steadily grown in influence, spawning projects such as MoE-Mamba, Vision Mamba, VMamba, U-Mamba, and MambaByte, and showing great potential to overcome the Transformer’s shortcomings. Yet this rising star met its Waterloo at ICLR 2024: the latest public results show that Mamba’s paper has not been accepted, appearing only in the “Decision Pending” column (which may mean a delayed decision or a rejection).
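
The linear scaling comes from the recurrence at the heart of a state space model: the hidden state is updated once per token, so cost grows linearly with sequence length, unlike attention’s quadratic cost. The following is a deliberately simplified, assumed toy recurrence to show that structure; it omits Mamba’s input-dependent (selective) parameters, discretization, and hardware-aware scan, so it is not the actual architecture.

```python
# Toy linear-time state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
# Illustrative shapes and constants only; not Mamba's selective SSM.
import numpy as np


def ssm_scan(x: np.ndarray, A: np.ndarray, B: np.ndarray, C: np.ndarray) -> np.ndarray:
    seq_len, _ = x.shape
    d_state = A.shape[0]
    h = np.zeros(d_state)
    ys = np.empty((seq_len, C.shape[0]))
    for t in range(seq_len):          # one O(1) state update per token -> O(seq_len) total
        h = A @ h + B @ x[t]
        ys[t] = C @ h
    return ys


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_in, d_state, d_out = 16, 8, 32, 8
    x = rng.standard_normal((seq_len, d_in))
    A = 0.9 * np.eye(d_state)                      # stable state transition
    B = rng.standard_normal((d_state, d_in)) * 0.1
    C = rng.standard_normal((d_out, d_state)) * 0.1
    print(ssm_scan(x, A, B, C).shape)  # (16, 8)
```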


Oracle rides the generative AI wave: comprehensive integration and innovation

 

Link: https://news.miracleplus.com/share_link/16862

Oracle is integrating generative AI capabilities across its Oracle Cloud Infrastructure (OCI) technology stack, launching a generative AI service, AI agents, and Data Science AI Quick Actions. The integration is designed to bring generative AI to the cloud data centers and on-premises environments where customer data resides. OCI has added built-in Llama 2 and Cohere models and offers flexible fine-tuning options. Oracle is bringing retrieval-augmented generation agents to OpenSearch, allowing users to query enterprise data sets in natural language. Oracle plans to launch new AI agents, support a wider range of data search and aggregation tools, and simplify the customization of AI models through OCI Data Science AI Quick Actions.
