Big Model Daily on December 27

[Big Model Daily on December 27] Othello, with its 10²⁸ possible games, has been solved by a supercomputer; ScienceAI’s 2023 “AI + Protein & Nucleic Acid & Molecular Interaction” special topic; a possible AI breakthrough: new brain-like transistors can simulate human intelligence and work at room temperature; Kuaishou’s Agents system, models, and data are all open source; GPT-4 fully jailbroken: fine-tuned with the latest official API, it will do whatever you want, and netizens are alarmed

Othello, with its 10²⁸ possible games, has been solved by a supercomputer!

Reversi is also known as Othello, a nickname that comes from Shakespeare’s famous play “Othello”: the black and white sides symbolize the protagonist Othello and his wife Desdemona, and the back-and-forth of the game symbolizes the conflict between the two. Now, with the help of supercomputing clusters, scientists have searched through the game’s variations and solved Othello. After more than four hundred years of jealousy, betrayal, regret, and tears, the lovers finally embrace each other as equals.

Large models + robots: a detailed survey report is here, with many Chinese scholars participating

The outstanding capabilities of large models are obvious to all. If they are integrated into robots, robots are expected to gain a more intelligent brain, bringing new possibilities to the field of robotics, including autonomous driving, home robots, industrial robots, assistive robots, medical robots, field robots, and multi-robot systems. Pretrained large language models (LLMs), large vision-language models (VLMs), large audio-language models (ALMs), and large visual navigation models (VNMs) can be used to better handle a variety of tasks in robotics. Integrating foundation models into robotics is a rapidly growing field, and the robotics community has recently begun exploring the use of these large models in areas such as perception, prediction, planning, and control. Recently, a joint research team from universities including Stanford University and Princeton University, together with companies such as Nvidia and Google DeepMind, released a survey report summarizing the development and future challenges of foundation models in robotics research.

ScienceAI 2023 “AI+Protein & Nucleic Acid & Molecular Interaction” Special Topic

In 2023, the field of “AI + biological macromolecule structure” continued to flourish. Areas such as protein structure prediction, protein-protein interaction, protein-nucleic acid interaction, and RNA structure expanded further, which in turn promoted the exploration and deployment of AI in application fields such as enzyme engineering, pharmaceuticals, medical treatment, and diagnosis.

ML-based movement tracking reveals the relationship between pathogenic bacteria’s motility and their adhesion to tissue cells

Bacterial motility is often a key virulence factor for pathogenic bacteria. A common method for studying bacterial motility is fluorescent labeling, which allows detection of individual bacterial cells in a population or host tissue. However, fluorescent labels may interfere with protein expression stability and/or bacterial physiology. Researchers at Tohoku University in Japan applied machine learning to microscopic image analysis to perform label-free movement tracking of the zoonotic bacterium Leptospira interrogans on cultured animal cells. The team used various Leptospira strains isolated from human patients or animals, as well as mutant strains. Strains associated with severe disease, along with mutant strains lacking outer membrane proteins (OMPs), tended to exhibit rapid motility and reduced adhesion to cultured kidney cells. Because this method requires neither fluorescent labels nor genetic manipulation, it can be applied to study the motility of many other bacterial species.

AI may have a big breakthrough! New brain-like transistor can simulate human intelligence and work at room temperature

Researchers have developed a transistor that can simultaneously process and store information like the human brain. The transistor goes beyond classification tasks and performs associative learning. The new transistor works at room temperature, making it more practical.

Inside Huawei’s closed-door offline meeting: a comprehensive look at China’s large models this year

Large models represent a major advance in the field of artificial intelligence: for the first time in history, humans have truly glimpsed the dawn of artificial general intelligence (AGI). Yet not much is known about large models themselves. Some AI researchers, including OpenAI chief scientist Ilya Sutskever, firmly believe that predicting the next word accurately enough indicates that a model has a sufficiently deep understanding of the text; opponents say it is just statistics. Clearly, we are at the beginning of a revolution. What do we know about large models? What issues should we pay attention to? At the recently held 2023 Huawei Cloud AI Deans Summit, Academician Zhang Bo, Academician Gao Wen, and principals, college deans, and professors from 24 universities across the country gathered for in-depth discussions on large models and their development.

Shanghai AI Laboratory upgrades and releases “PU Medical 2.0” to realize one-stop open source for large medical model groups

The medical multimodal foundation model group “OpenMEDLab” has received a major upgrade. Recently, at the “2023 Healthy China Sinan Summit”, the Shanghai Artificial Intelligence Laboratory (Shanghai AI Laboratory), Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, and other partners jointly released the medical multimodal foundation model group “PU Medical 2.0” (OpenMEDLab 2.0), which aims to provide capability support for “cross-domain, cross-disease, and cross-modality” AI medical applications.

Kuaishou Agents system, models, and data are all open source!

“KwaiAgents”, developed by Kuaishou and Harbin Institute of Technology, enables 7B/13B models to achieve results surpassing GPT-3.5, and the system, models, data, and evaluations are all open source.

Apple’s iPhone design chief is reportedly joining the company of Apple’s former chief designer to develop AI products with Altman

Former Apple chief designer Jonathan Ive and OpenAI CEO Sam Altman are teaming up to recruit Tang Tan, Apple’s iPhone and Apple Watch design chief, for a new artificial intelligence (AI) hardware initiative whose aim is to create new products built on the latest capabilities of AI.

GPT-4 fully jailbroken: fine-tuned with the latest official API, it will do whatever you want, and netizens are scared

With the latest fine-tuning API, GPT-4 can be made to do almost anything: output harmful information, or reveal personal privacy contained in training data. On Tuesday, a study from FAR AI, McGill University, and other institutions sparked widespread concern in the AI research community. The researchers attacked several of GPT-4’s newly launched APIs, attempting to bypass the safety mechanisms and make the model complete tasks that are normally disallowed. They found that all of the APIs could be broken, and that the jailbroken GPT-4 would respond to any request. This degree of “freedom” far exceeded the attackers’ expectations. One commenter summarized: large models can now generate misinformation targeting public figures, personal email addresses, and malicious URLs, allow arbitrary unfiltered function calls, and mislead users or perform unwanted function calls…

Llama 2 inference: RTX 3090 outperforms the 4090 with superior latency and throughput, but lags far behind the A800

Large language models (LLMs) have made tremendous progress in both academia and industry. But training and deploying LLMs is very expensive, requiring substantial computing resources and memory, so researchers have developed many open-source frameworks and methods for accelerating LLM pre-training, fine-tuning, and inference. However, runtime performance can vary significantly across different hardware and software stacks, making it difficult to choose the best configuration.
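Comparing stacks like the ones benchmarked above usually comes down to measuring average latency and token throughput under identical conditions. The following is a minimal, framework-agnostic sketch of such a measurement; the `generate_fn` interface and the `dummy_generate` stand-in are hypothetical placeholders, not the benchmark code from the article, and in practice `generate_fn` would wrap your inference framework's generate call.

```python
import time

def measure_generation(generate_fn, prompt, n_runs=3):
    """Measure average latency and token throughput of a
    text-generation callable over n_runs repetitions.

    generate_fn: callable(prompt) -> list of generated tokens
    (hypothetical interface; wrap your framework's call here).
    """
    latencies = []
    tokens_out = 0
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate_fn(prompt)
        latencies.append(time.perf_counter() - start)
        tokens_out += len(tokens)
    total_time = sum(latencies)
    return {
        "avg_latency_s": total_time / n_runs,
        "throughput_tok_per_s": tokens_out / total_time,
    }

# Stand-in "model" for illustration: splits the prompt into "tokens".
def dummy_generate(prompt):
    return prompt.split()

stats = measure_generation(dummy_generate, "the quick brown fox", n_runs=5)
print(stats)
```

When comparing GPUs or inference frameworks, the key is to keep the prompt, batch size, and generation length fixed across runs, and to discard or average out warm-up iterations, since the first call often pays one-time initialization costs.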


At the OPPO Find X7 series product technical communication meeting this afternoon, OPPO announced a new upgrade to its Andes large model, AndesGPT.
