December 6 Big Model Daily Collection



【December 6 Big Model Daily Collection】Research: No fine-tuning needed? Three samples and one prompt complete LLM alignment; prompt engineers say they are all back. Industry: Microsoft Copilot's evolution is complete: code interpreter, DALL·E 3, ChatGPT, it has everything.

No fine-tuning needed? Three samples and one prompt complete LLM alignment; prompt engineers: they are all back

Whether a large model performs well often hinges on alignment tuning. Recently, however, many studies have begun exploring methods that require no fine-tuning at all. Researchers from the Allen Institute for Artificial Intelligence and the University of Washington used a new "tuning-free" alignment method whose performance surpasses that of LLMs aligned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
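The idea described above can be sketched as plain in-context learning: instead of updating model weights with SFT or RLHF, a base LLM is steered by a fixed system prompt plus a handful of curated question/answer demonstrations. This is a minimal sketch, not the paper's actual prompt; the system text and example pairs below are hypothetical placeholders.

```python
# Tuning-free alignment sketch: prompt assembly only, no weight updates.
# All strings here are illustrative assumptions, not from the paper.

SYSTEM_PROMPT = (
    "You are a helpful, honest assistant. Answer clearly and politely, "
    "and decline unsafe requests."
)

# A small number of stylistic demonstrations (the article mentions just 3).
DEMONSTRATIONS = [
    ("What is the capital of France?",
     "The capital of France is Paris."),
    ("How do I pick a strong password?",
     "Use a long passphrase of unrelated words and enable two-factor auth."),
    ("Help me break into a car.",
     "I can't help with that. If it's your car, contact a locksmith."),
]

def build_prompt(user_query: str) -> str:
    """Assemble the full prompt fed to an untuned base model."""
    parts = [SYSTEM_PROMPT, ""]
    for question, answer in DEMONSTRATIONS:
        parts.append(f"User: {question}\nAssistant: {answer}\n")
    parts.append(f"User: {user_query}\nAssistant:")
    return "\n".join(parts)
```

The base model then completes the final `Assistant:` turn, imitating the style and safety behavior of the demonstrations rather than learning them through gradient updates.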

NeurIPS 2023 | Imitating how humans draw inferences: GIF, a new paradigm for dataset expansion, is here

In this NeurIPS 2023 paper, researchers from the National University of Singapore and ByteDance, inspired by human associative learning, propose a new dataset-expansion paradigm that effectively improves the performance and generalization of deep models in small-data scenarios while greatly reducing the time and cost of manually collecting and labeling data. The code has been open-sourced.

The "3D Gaussian" version of Segment Anything is here: 3D segmentation in milliseconds, a thousand-fold speedup

In April this year, Meta released the Segment Anything Model (SAM). This result not only became the paper of the year in the minds of many CV researchers, but also earned a best paper nomination at ICCV 2023. SAM handles both interactive and automatic 2D segmentation with ease and generalizes to new tasks and domains. Now, this idea has been extended to 3D segmentation.

Darting eyes, open mouths, stares, and raised eyebrows: AI can imitate facial movements perfectly, making video scams hard to guard against

Researchers from institutions including the Technical University of Munich propose GaussianAvatars, a method for creating realistic head avatars that are fully controllable in expression, pose, and viewpoint. The study notes that creating animatable virtual human heads has long been a challenge in computer vision and graphics: extreme facial expressions and fine details such as wrinkles and hair are difficult to capture, and the generated avatars are prone to visual artifacts.

Meta open-sources its largest multi-modal video dataset, Ego-Exo4D

Social-media and technology giant Meta, together with research institutions from 15 universities, released Ego-Exo4D after more than two years of work: the first multi-modal video training dataset and foundational suite for training and researching large AI models. The dataset reportedly comprises videos from 839 participants in 13 cities, totaling more than 1,400 hours and covering 131 complex scenario actions across 8 categories, including dance, soccer, basketball, rock climbing, music, cooking, and bicycle repair. This allows AI models to better understand human behavior and supports the development of more powerful multi-modal large models.

Duke University team develops a new drug-discovery AI model that directly compares the properties of potential new drugs

Current molecular machine-learning models typically take individual molecules as input to predict their biological, chemical, or physical properties. Such algorithms require large datasets, however, and are not optimized for predicting property differences between molecules, which limits their ability to learn from smaller datasets and to directly compare the expected properties of two molecules. Researchers at Duke University have developed DeepDelta, a pairwise deep-learning method that processes two molecules simultaneously and learns to predict the property difference between them from a small dataset. On 10 ADMET benchmark tasks, DeepDelta significantly outperformed two established molecular machine-learning algorithms: the directed message passing neural network (D-MPNN) ChemProp and a random forest using radial fingerprints. It is especially strong at predicting large property differences and can even perform scaffold hopping. By training directly on pairs of molecules and their property differences, DeepDelta offers an accurate way to predict differences in molecular properties, further supporting fidelity and transparency in molecular optimization for drug development and the chemical sciences.
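The pairwise setup described above can be illustrated with a toy data-construction step: instead of training on single molecules, one builds ordered pairs of molecules and uses the difference in a property as the label. This is a minimal sketch under stated assumptions; the molecule identifiers and property values are invented toy data, not from the DeepDelta paper.

```python
# Pairwise dataset construction sketch for difference-prediction training.
# Toy data only; real inputs would be molecular representations (e.g. graphs).

from itertools import product

def make_pairwise_dataset(molecules):
    """molecules: list of (identifier, property_value) tuples.

    Returns all ordered pairs with the property difference as the label,
    turning N single-molecule examples into N*N pairwise examples."""
    pairs = []
    for (id_a, y_a), (id_b, y_b) in product(molecules, repeat=2):
        pairs.append(((id_a, id_b), y_a - y_b))
    return pairs

# Three toy molecules with a single scalar property each.
data = [("mol1", 2.0), ("mol2", 3.5), ("mol3", 1.0)]
pairs = make_pairwise_dataset(data)
```

One appealing consequence of this framing is data efficiency: the number of training examples grows quadratically with the number of labeled molecules, which helps explain why a pairwise model can learn from small datasets.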
