When it comes to artificial intelligence, people seem to share a piece of common sense: "the bigger, the better." We subconsciously assume that AI algorithms from big companies like Google and Intel must be superior, because training a smarter AI algorithm requires extremely expensive datasets and processors.
For example, a highly accurate object-recognition system that Facebook unveiled last year drew on an enormous collection of images as its data source.
So in AI, there seems to be no room left for small companies or small teams.
But that common sense was recently overturned. A group of students from Fast.ai (a non-profit machine-learning education organization) wrote training code that broke Google's benchmark record. Their algorithm beat Google's when scored with DAWNBench, a benchmark suite developed at Stanford University. DAWNBench uses an image-classification task to measure how quickly, and at what dollar cost of computation, a deep-learning system reaches a target accuracy.
In practice, the benchmark amounts to training a famous image-classification model to that target accuracy.
The Fast.ai team used 16 AWS public-cloud instances, each equipped with eight NVIDIA V100 GPUs. With a comparable number of processing units to Google's setup, they trained the model to 93% accuracy in just 18 minutes.
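A rough back-of-the-envelope check of the scale involved is easy to do. The instance counts, GPU counts, and minutes come from the figures above; the hourly price is a hypothetical placeholder, not a figure from the article or from AWS pricing:

```python
# Hardware described above: 16 AWS instances, each with 8 NVIDIA V100 GPUs,
# training for 18 minutes.
instances = 16
gpus_per_instance = 8
minutes = 18

total_gpus = instances * gpus_per_instance        # 128 GPUs in total
gpu_minutes = total_gpus * minutes                # 2304 GPU-minutes of compute

# Hypothetical hourly price per instance -- an assumption for illustration
# only, NOT a real AWS rate.
price_per_instance_hour = 25.0  # USD, assumed

estimated_cost = instances * price_per_instance_hour * (minutes / 60)

print(total_gpus, gpu_minutes, round(estimated_cost, 2))
# → 128 2304 120.0
```

The point of the arithmetic: even with 128 GPUs running, an 18-minute job consumes only a few thousand GPU-minutes, so the dollar cost stays small.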
What does that speed mean?
It is 40% faster than Google's own TPU Pod!
Bear in mind that Google's self-developed TPU delivers roughly 15 times and 30 times the computing power of traditional GPUs and CPUs, respectively!
For example, the AlphaGo system that defeated Lee Sedol ran on TPU units.
A TPU Pod, a computing array made up of 64 TPU units, provides 11.5 petaflops of floating-point computing power.
How did the Fast.ai team do it? Asked about their training approach, the team leader revealed the trick. He said:
Deep-learning researchers today are doing something rather silly: they either take a rectangular picture and crop out the center to make a prediction, or crop the picture into five patches, predict on each, and then average the results. The obvious question is: why not use the original image directly?
Many people mistakenly believe that convolutional neural networks (CNNs) only work at a fixed input size, so they get stuck at one resolution (usually 224x224 pixels). Fast.ai converted its fixed-size model into a dynamic-size model.
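One reason a CNN can in fact accept variable input sizes is that a global average pooling step collapses any spatial grid into a fixed-length vector. The sketch below uses plain NumPy to illustrate the idea; it is not Fast.ai's actual code:

```python
import numpy as np

def global_avg_pool(feature_map):
    """Average over the spatial axes (H, W), keeping the channel axis.

    feature_map: array of shape (channels, height, width).
    Returns a vector of shape (channels,) regardless of height and width.
    """
    return feature_map.mean(axis=(1, 2))

# Feature maps produced from a 224px input and a 288px input have different
# spatial sizes, but pooling yields the same-shaped vector for both, so the
# classifier head downstream never needs to change.
small = np.random.rand(512, 7, 7)    # e.g. after downsampling a 224px image
large = np.random.rand(512, 9, 9)    # e.g. after downsampling a 288px image

assert global_avg_pool(small).shape == (512,)
assert global_avg_pool(large).shape == (512,)
```

Because only the fully connected head of a classic CNN demands a fixed input size, swapping it for pooling like this is what makes a "dynamic size" model possible.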
The most advanced part of the Fast.ai approach is that they classify with dynamic image sizes.
Training starts on smaller images, so the model's accuracy is naturally low at first; as the image size is gradually increased, the amount of information the model sees rises sharply and accuracy climbs quickly. In this way, as the images grow, the model learns to distinguish very subtle differences. And the total computational cost of the whole process stayed remarkably low (even counting the time spent setting up the machines).
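The progressive-resizing idea above can be sketched as a simple training schedule: cheap low-resolution epochs first, then a few expensive high-resolution epochs. The sizes and epoch counts below are illustrative placeholders, not Fast.ai's exact settings:

```python
def progressive_resize_schedule(phases):
    """Yield (epoch, image_size) pairs: train at small sizes first,
    then keep the same weights while moving to progressively larger sizes.

    phases: list of (image_size, num_epochs) tuples, ordered small -> large.
    """
    epoch = 0
    for size, n_epochs in phases:
        for _ in range(n_epochs):
            yield epoch, size
            epoch += 1

# Illustrative schedule: three fast 128px epochs, two 224px epochs,
# one final 288px epoch to learn fine-grained distinctions.
schedule = list(progressive_resize_schedule([(128, 3), (224, 2), (288, 1)]))
print(schedule)
# → [(0, 128), (1, 128), (2, 128), (3, 224), (4, 224), (5, 288)]
```

In a real training loop, each `(epoch, size)` pair would drive the data loader's resize transform while the model weights carry over unchanged between phases.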
This is a great encouragement to small machine-learning teams around the world, because the traditional belief was that only large companies could achieve such results.
As a headline in the well-known tech outlet The Verge put it: projects like this confirm that smart programmers can still beat technology giants such as Google and Intel.
Even more surprising, most of the Fast.ai team members are amateur machine-learning enthusiasts: they are simply interested in the field and hope to move into it professionally. One has to admire these students' sharp sense of where the future lies.
Forecasts suggest that by 2020 the global AI market will reach 1.2 trillion and keep growing at a high rate, while average global GDP growth is only 3.5%.
Future AI investment will focus on areas such as enhancing trading-algorithm strategies, static image recognition, classification and labeling, medical data analysis and decision support, predictive maintenance, and more. We already know familiar examples: virtual assistants such as Siri, smart online assistants such as Microsoft XiaoIce, and automated machines such as Google's driverless cars.
By region, the main markets will be Asia-Pacific (China, Japan, South Korea, etc.), North America, and Western Europe. In the future, many people will lose their jobs and a variety of new industries will emerge. This is the era we live in. Many people have been lulled by media coverage and remain unaware of the real challenges.
It is no exaggeration to say that whether you are just starting out or want to switch industries, artificial intelligence is the best choice; it can all but guarantee that you will not face unemployment in the future. - Jeremy Howard
How can we keep our jobs from being taken by machines in the future?
How are machine-learning algorithms implemented?
How is deep learning taught?
How do you get started with machine learning systematically?
These questions are all answered in the "Machine Learning" course launched on the Shiyanlou (Experiment Building) platform! Only in its third session, it has already become the hottest course on the platform!
How to register?
(Places are limited and classes start in less than a month; please sign up promptly once you have decided.)
(If you have any questions, you can consult us through this channel.)
(Besides the benefits listed below, there will also be "back-to-school season" gifts: books, cute T-shirts, and other merchandise, plus graduation gifts!)
Compiled by: Shiyanlou's "Sweeping Aunt"
http://www.fast.ai/
https://www.theverge.com/2018/5/7/17316010/fast-ai-speed-test-stanford-dawnbench-google-intel
https://www.technologyreview.com/s/611858/small-team-of-ai-coders-beats-googles-code/
"The machine will completely replace humans, and humans can only choose: either control artificial intelligence or become artificial intelligence - Elon Musk & Stephen Hawking"