What is on-chip memory?
When we talk about on-chip memory, we mean that when the processor needs data, it fetches it from a component installed on the processor itself instead of from main memory. Data that the processor needs frequently is prefetched from main memory into SRAM to reduce latency. SRAM is used as on-chip storage; its latency is low, usually a single-cycle access time, and it acts as a cache for DRAM.
What is off-chip memory?
When we talk about off-chip memory, we mean that when the processor needs data, it fetches it from a component that is not part of the processor itself, usually main memory, also called DRAM. Its latency is much higher than that of SRAM (on-chip memory). Off-chip memory stores each bit using a capacitor with a pass transistor. It is cheaper than on-chip memory, at the cost of speed.
Brain-Inspired AI Chips
Inspired by the biotechnology field of optogenetics, researchers from RMIT University started working on a system that tries to mimic the way the human brain stores and loses data. Optogenetics lets scientists work on the brain's electrical system with tremendous accuracy and manipulate neurons using light. The new AI chips are made of an ultra-thin material whose electrical resistance responds to different wavelengths of light, which finally allows them to mimic the way neurons store and delete information.
Neural connections in the brain are formed by electrical impulses: when energy spikes in the brain reach a certain threshold voltage, neurons bind together and a memory starts to form. On the AI chip, light is used to generate a photocurrent on the surface. Switching between colours reverses the current's direction from positive to negative, and this switching is the equivalent of breaking and forming neural connections.
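The threshold behaviour described above can be sketched as a toy model: spikes accumulate until a voltage threshold is crossed and a "connection" (a memory) forms, while reversing the photocurrent's direction erases it. The class name and all values are illustrative assumptions, not the RMIT device's real parameters.

```python
# Toy sketch of threshold-based memory formation and light-driven erasure.
# THRESHOLD and spike voltages are made-up illustrative values.
THRESHOLD = 1.0

class Synapse:
    def __init__(self):
        self.potential = 0.0
        self.connected = False   # a formed connection = a stored memory

    def spike(self, voltage):
        # Electrical impulses accumulate; crossing the threshold binds neurons.
        self.potential += voltage
        if self.potential >= THRESHOLD:
            self.connected = True

    def photocurrent(self, direction):
        # direction = -1 models the colour switch that reverses the current
        # from positive to negative, breaking the connection.
        if direction < 0:
            self.connected = False
            self.potential = 0.0

s = Synapse()
for _ in range(4):
    s.spike(0.3)        # four 0.3 V spikes cross the 1.0 V threshold
s.photocurrent(-1)      # switching the light colour erases the memory
```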
Transfer of data in AI Chips
We have already seen above how these chips create and delete memories. Basic AI chips transfer data at a low rate, but Intel recently invested thirteen million dollars in the advancement of AI chips, backing a startup working on a new type of AI chip that performs deep-learning computations at enhanced speed. Untether, a startup from Toronto, Canada, has developed a prototype that transfers data between different parts of the chip around a thousand times faster than an ordinary AI chip. However, proper care should be taken, because the prototype they have created is much bigger than a typical AI chip, which will affect device performance.
We can face problems while transferring data between memory and the logic that operates on it, because the amount of data in transit grows in applications like voice recognition and face recognition. Untether addresses this with a technique called near-memory computing, which reduces the distance between processing tasks and memory and thereby speeds up the transfer rate.
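The benefit of moving compute closer to memory can be shown with a back-of-the-envelope model; the per-byte cost constants below are made-up assumptions for illustration, not Untether's real figures.

```python
# Back-of-the-envelope model of near-memory computing: transfer cost
# scales with how far the data travels. Constants are illustrative.
def transfer_cycles(n_bytes, cycles_per_byte):
    return n_bytes * cycles_per_byte

FAR_COST = 10    # compute unit far from memory: expensive per-byte transfer
NEAR_COST = 1    # compute placed next to memory: 10x cheaper in this model

workload = 1_000_000  # e.g. a face-recognition feature map, in bytes
speedup = transfer_cycles(workload, FAR_COST) / transfer_cycles(workload, NEAR_COST)
print(speedup)  # → 10.0
```

Because the cost is linear in distance, the speedup is independent of workload size; shrinking the distance helps every data-heavy application equally.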
Architecture of AI Chips
Following are the major components of AI chip architecture:
- Application: It has the following applications:
- Video or Image: Face Recognition, Object Recognition, etc.
- Sound or Voice: Music Generation, Sound Classification, etc.
- NLP: Text Analysis, Language Translation, etc.
- Control: Autopilot, UAVs, etc.
- Machine Learning Algorithms: SVM, KNN, etc.
- Deep Learning Algorithms: CNN, Faster R-CNN etc.
- Neuromorphic Chip
- Chip Performance Optimizations
- Programmable Chips
- System-on-chip Architecture
- Development Tool-chain
- High Speed Interface
- High Bandwidth off-chip Memory
- New Computing Devices
- CMOS Technology
- On Chip Memory (Synaptic Array)
- CMOS Process
- CMOS 3D Stacking
- New Technologies
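As a concrete instance of one of the machine-learning algorithms listed above, a minimal k-nearest-neighbours (KNN) classifier can be sketched in pure Python; the training points and labels are made up for illustration.

```python
# Minimal KNN classifier: label a query point by majority vote
# among its k closest training points (squared Euclidean distance).
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs; query: (x, y) point."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]   # majority label

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 0)))   # → a
```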
Future of AI Chips
In this world of technology, where artificial intelligence is growing exponentially, human effort is shrinking day by day. Sometimes it looks as if AI will take over everything we do and perform it on its own. AI is doing everything from medical diagnosis, identifying the faces of terrorists, analyzing human behavior, and self-driving cars to creating new works of art. With the resources available today, one can manage to solve real-world problems using AI. Researchers today have R&D funds, computing power, large-scale data, and everything else they need, and most of all they have the ability to solve complex tasks using artificial-intelligence algorithms. Researchers around the globe have put a lot of effort into these AI chips to make everything easier and faster.
The challenge developers face while designing an AI chip is how to put everything together. When we talk about AI chips, we are really talking about large-scale systems-on-chip in which deep learning is implemented across many hardware components. Designing AI chips while meeting the reliability and safety demands of industry is very difficult.
As we know, neural networks and deep learning are almost at the state of the art, but many researchers believe that if we want to get the most out of AI, we have to use new and different approaches. Previously, AI chips were mostly designed to follow improved versions of the ideas introduced by Hinton and LeCun, but there is no shame in expecting AI to do everything a human can do.
In spite of everything deep learning has achieved, a few limitations remain. The reason is that a machine cannot think exactly like a human: machines need a lot of data to learn something, while humans need only a little data to learn the same concept. We can say that today's AI chips are clever, but not clever enough to beat human standards. In the near future, though, they will get cleverer and start thinking more like humans. They will continue learning from experience, and semiconductor technology and SoC design will improve their processing speed and memory capacity.