Apple has long been known for its tightly knit ecosystem, which connects and integrates its different devices. While that level of integration remains a sticking point for some people, a recent report explains why the company prefers to use its own integrated memory in Apple Silicon.

Apple Brings Unified Memory Architecture to Its Mac Devices With the Proprietary Apple Silicon M1 Chips

According to Apple Insider, Apple's Unified Memory Architecture (UMA) first came to the Mac with the Apple Silicon M1 chips. This was huge news, since Mac devices previously used Intel chips. However, the outlet noted that the change was both good and bad for consumers.

Apple's UMA was announced in June 2020 alongside the company's new Apple Silicon chips, promising multiple benefits over traditional desktop and laptop computers.

New UMA Comes With Performance and Size Benefits for Users Compared to Traditional Systems

The Apple UMA reportedly represents a leap in both the performance and size of computers. Conventional computers rely on bus controllers to move data between components, which causes interruptions whenever the CPU needs data from system memory. As noted in the report, these interruptions happen each time a task has to switch between different pieces of hardware.

Apple Insider cited the example of the CPU needing to tap into memory: an interruption happens, and the system pauses before it can finish its tasks.

Direct Memory Access (DMA) was introduced later to ease this, although RAM access can still be slow because of the physical size of motherboards and the distances data has to travel across them. According to Techopedia, DMA is a method that allows an input/output device to send or receive data directly to or from main memory, bypassing the CPU to speed up memory operations.
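To make the idea concrete, here is a toy sketch in Swift. It is not real DMA, which happens in hardware; instead, a hypothetical "device" task fills a memory region on a background queue while the main thread keeps working, mirroring how DMA frees the CPU from shepherding every byte of an I/O transfer.

```swift
import Foundation

// Conceptual toy (not real DMA): a simulated "device" fills a memory
// region on a background queue while the CPU thread keeps doing other
// work, similar to how DMA lets a peripheral move data into main memory
// without the CPU handling every byte.
let byteCount = 1_000_000
let region = UnsafeMutablePointer<UInt8>.allocate(capacity: byteCount)
defer { region.deallocate() }

let deviceQueue = DispatchQueue(label: "simulated.io.device")
let transferDone = DispatchSemaphore(value: 0)

// The simulated "device" writes directly into the memory region.
deviceQueue.async {
    for i in 0..<byteCount {
        region[i] = UInt8(truncatingIfNeeded: i)
    }
    transferDone.signal()
}

// Meanwhile the "CPU" is free to do unrelated work.
var checksum = 0
for i in 0..<10_000 { checksum &+= i }
print("CPU stayed busy with other work (checksum \(checksum))")

// Only synchronize when the transferred data is actually needed.
transferDone.wait()
print("Simulated transfer complete; first byte = \(region[0])")
```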

Apple M1 and M2 Chips Are Described as SoC Designs

Apple's approach effectively cuts the time needed to handle operations that require memory access. Other improvements can also be seen in how the chips work with their Graphics Processing Units (GPUs).

As explained by Any Silicon, the Apple M1 and M2 chips are reportedly System on Chip (SoC) designs intended to increase speed and reduce component counts. They integrate the CPU, GPU, main RAM, and other components into a single chip.

The design shortcuts the process of fetching RAM contents across a memory bus. The RAM is reportedly connected directly to the CPU, making access faster while cutting down on component counts and the power needed to drive them.
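As an illustration of what that shared memory enables in practice, here is a minimal Swift sketch using Apple's Metal API, assuming a Mac with Apple Silicon: a buffer allocated with .storageModeShared is backed by the unified memory pool, so the CPU can write data that the GPU could then read without an explicit copy.

```swift
import Metal

// Minimal sketch, assuming Metal is available and the machine has
// unified memory (as on Apple Silicon Macs).
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not available on this machine")
}

let count = 1024
guard let buffer = device.makeBuffer(
    length: count * MemoryLayout<Float>.stride,
    options: .storageModeShared
) else {
    fatalError("Could not allocate a shared buffer")
}

// The CPU writes into the same memory a GPU compute pass could read,
// with no blit or copy step in between.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count {
    values[i] = Float(i)
}

// On unified-memory hardware, hasUnifiedMemory reports true.
print("Unified memory: \(device.hasUnifiedMemory), first value: \(values[0])")
```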

Differences in How the CPU Works With RAM Data

Whenever the CPU needs to store or retrieve data in RAM, it goes directly to the RAM chips, making the process more efficient. This is a distinct approach compared to the conventional method of routing everything through the motherboard.

With Apple Silicon's integrated memory, Apple takes a different approach, focusing more on efficiency than on raw capability. Its chips are designed so that the various components can easily access the same memory, rather than keeping each one separate.
