Here’s a detailed post by Erik Engheim that breaks down how Apple’s M1 chip is structured and compares it with other PC chips:
With unified memory, the GPU cores and CPU cores can access memory at the same time, so there is no overhead in sharing it. In addition, the CPU and GPU can tell each other where some piece of memory is located. Previously, the CPU would have to copy data from its area of main memory to the area used by the GPU. With unified memory, it is more like saying “Hey Mr. GPU, I’ve got 30 MB of polygon data starting at memory location 2430.” The GPU can then start using that memory without doing any copying.
That means you can see significant performance gains from the fact that all the various special co-processors on the M1 can rapidly exchange information with each other through the same memory pool.
Processors are complicated. I found this article to strike a good balance between technical and understandable.
—Linked by Jason Snell