
From Intel to Apple Silicon

If you’re anything like me, you’ve been using personal computers most of your life. You’ve seen different CPUs come and go, from 8-bit MOS Technology to 16-bit Motorola, 32-bit IBM, 64-bit Intel, and now Apple’s M-series (generally referred to as ‘Apple Silicon’ in Macintosh vernacular). You’ve seen massive SCSI connectors with cables like garden hoses replaced by tiny USB-C connectors with cables like shoelaces. You’ve seen external storage change from 8-inch floppy disks that stored 80kB to pocketable SSDs storing over 4TB, and you probably remember Zip drives and their infamous ‘click of death’…

You’ve seen how each year’s model is usually better than the previous year: CPUs get faster and more powerful, RAM becomes more affordable, storage capacity goes up, and the whole package gets smaller and lighter. Most importantly, you’ve seen over and again that in an environment of continuous incremental improvements (Japanese manufacturers call it ‘kaizen’) it is always just a matter of time before the latest and greatest personal computer becomes superseded, then obsoleted, then bricked – by new hardware, by new operating systems, and by new apps that exploit the capabilities of the new hardware and new operating systems. An additional camera on the latest iPhone merely supersedes the previous model, but an operating system upgrade or app update that requires that new camera obsoletes it.

The software obsoletes the hardware…

You’ve felt the false comfort of telling yourself, “This new computer will last the rest of my life”, despite knowing it won’t – unless you’re terminally ill or long past your Expiry Date (check the small print on your birth certificate).

From this wealth of experience you’ve arrived at the only ‘future-proofing’ mantra that makes sense: ‘buy the fastest and most powerful computer you can afford’. Following this mantra ensures it should be many years before your new computer is incapable of supporting the latest software and operating systems, or is overtaken by significantly better hardware. You also know that when it reaches that point you can probably carry on for another year or two before the inevitable system upgrade is required because either you or your clients want to use features in new or updated software that your vintage machine doesn’t support.

Vintage machine? Yes. Apple defines any computer that has been off the shelves for five years or more as ‘vintage’, and ‘obsolete’ two years after that. The last of the Intel-based Macs, released in early 2020, stayed on the shelves until mid-2023 and therefore won’t meet Apple’s definitions of ‘vintage’ until mid-2028 and ‘obsolete’ until mid-2030. However, there are two conspiring circumstances, as italicised in the previous paragraph: the Intel-based Macs will soon be incapable of supporting the latest software and operating systems because they have been overtaken by significantly better hardware that is uniquely Apple’s.

It is reasonable for Apple to begin deprecating the Intel-based Macs before they meet the definition of ‘vintage’ because they have already been obsoleted by the software and the hardware. They are a liability to support for Apple and their developers, and therefore the sooner Apple can move everyone on to their M-series (aka ‘Apple Silicon’) products the better.

If you’ve read this far you’re probably squeezing the last bytes of performance out of an Intel-based Mac from 2020 or earlier. It’s a built-forever aluminium enclosure with a glass screen that houses what was once an ‘eat my dust’ CPU. It still performs to spec, but now it’s the dust eater. It will continue working for many more years if you don’t change or upgrade anything, but it is inevitably running out of time. macOS Tahoe (aka ‘macOS 26’) will be released on September 15, 2025, and is the last macOS to support Intel-based Macs. Twelve months after that, macOS 27 will only support M-series devices. From that point on, app developers will focus on coding and upgrading specifically for the M-series devices, and their new apps will tap into the Apple Intelligence toolbox that doesn’t even exist on your old Intel-based Mac. At that point, the smartware has obsoleted your hardware and your software.

…the smartware has obsoleted your hardware and your software

THE MACHINE OR THE ROAD?

If you’re still running one of the last Intel-based Macs you’ll soon find yourself with three options, based on the new future-proofing mantra explained in the companion article: “If you’re not part of the machine, you’re part of the road”.

Option 1: Part Of The Road

Your first option is to stay ‘part of the road’. Do a Time Machine backup, update to the latest version of macOS that supports your machine, then add the latest versions of your preferred apps and plug-ins that support the version of macOS you’re running. Add them one at a time, testing and backing up with Time Machine in between each update in case you need to ‘walk back’ anything that has bricked your machine. Then, after ensuring everything is as updated as possible and working as expected, turn off auto-updates, disconnect your machine from the internet, and ‘freeze it in time’ as I did with The iMortal some years back (you can read about the iMortal in the companion article). Your computer has reached ‘peak-Intel’; it’s at the end of its road, and you’re part of that road.

Option 2: Part Of The Machine

The second option is to sell or swap that old Intel-based Mac as fast as you can before September 15, when the world realises that macOS Tahoe signals the end of the Intel Macs. If it’s a well-specified machine with great apps installed and generous hardware (e.g. vast quantities of RAM, huge storage capacity), you can hope to get some value from selling it to put towards the cost of a new M-series Mac – which, of course, you’re going to buy under the new mantra because you choose to be part of the machine.

Option 3: The Machine & The Road

The third option is to freeze your existing system as described in the first option (it will probably have little to no resale value anyway), and keep it as a legacy machine that will forever be ‘part of the road’, ‘end of the road’ and ‘peak-Intel’ for its generation. It’ll be handy to have around for longer and slower background projects, as I do with The iMortal. After setting that up, you then move towards becoming ‘part of the machine’ by investing in a new and future-proofed M-series Mac with support for the latest versions of macOS, Apple Intelligence and so on.

THE BALLPARK

The good news is that most audio work is a walk in the park for the M4 chip – which means that most audio people no longer need to buy the biggest, fastest and most powerful machine they can afford (as highlighted in the companion article).

In fact, today’s over-specified M-series machine could become tomorrow’s sunk-cost liability. Why? Audio software developers are expected to start integrating Apple Intelligence tools into their apps to provide features that benefit sound engineers and recording musicians – such as the ‘machine composition’-based drummer, bassist and keyboardist AIs that Apple has built into Logic Pro. This could create demand for further performance increases in the Neural Engine (the hardware ‘heart’ of Apple Intelligence) that ultimately require more than 38 TOPS (trillion operations per second). If that happens, tomorrow’s hypothetical machines will do these things ‘on-device’ with very low latency, possibly even in real-time, while current-generation machines will need an internet connection to Apple’s PCC (Private Cloud Compute) to do the same things with more latency.

All of the above speculation and technical gobbledygook will make sense after reading about Apple Intelligence later in this article.

For now, however, if we follow the trajectory of the Neural Engine’s performance from the M1 to the M4, as shown below, we see that its capability has increased exponentially – with the exception of the M3. Interestingly, during the M3’s lull the iPhone 15 Pro’s Neural Engine (released one month earlier) jumped to 35 TOPS, as shown below. I think we can safely ignore the M3’s lull as a timing anomaly in the manufacturing cycles and therefore smooth it out of the curve.
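As a rough sketch of that trajectory: the M1 and M4 figures (11 and 38 TOPS) are quoted later in this article, while the M2 and M3 values (15.8 and 18 TOPS) are the commonly quoted figures and should be treated as approximate. The generation-over-generation growth works out like this:

```python
# Neural Engine TOPS by generation. M1 and M4 figures are from this article;
# M2 and M3 are widely quoted approximations, not official article figures.
tops = {"M1": 11, "M2": 15.8, "M3": 18, "M4": 38}

gens = list(tops)
for prev, curr in zip(gens, gens[1:]):
    growth = (tops[curr] / tops[prev] - 1) * 100
    print(f"{prev} → {curr}: +{growth:.0f}%")  # +44%, +14%, +111%

# Treating the M3 as a timing anomaly and comparing M2 directly to M4:
print(f"M2 → M4: +{(tops['M4'] / tops['M2'] - 1) * 100:.0f}%")
```

The M3’s modest +14% step stands out against the +44% and +111% jumps either side of it, which is why it reads as an anomaly rather than a change in trajectory.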

If the Neural Engine’s development continues on the same trajectory, and app developers place more demands on Apple Intelligence to inject new life/features into their apps and thereby encourage users to upgrade, we could expect to see later M-series Macs with even more powerful Neural Engines – at which point today’s ‘biggest and fastest’ M-series purchase might be hard to get rid of.

So instead of buying the most powerful, buy the most appropriate, as explained in the companion article where my ‘most appropriate’ ballpark landing was right on top of the base line M4 MacBook Air – the cheapest MacBook available! Where should you land? Read on…

Is Everything You Know Irrelevant?

You’ve gone to Apple’s website to check out the latest offerings, only to discover that their M-series technology has obsoleted your points of reference for comparing computers. You’ve browsed through the specs looking for the familiar ‘tell all’ information you’ve always relied on, but it’s not there. There’s no mention of clock speed anywhere – yet it was the single most revealing spec for decades. There’s no mention of RAM, and no mention of threads or hyperthreading – all valuable points of comparison once upon a time. Meanwhile, a spec that was backgrounded long ago as a ‘solved problem’ has become important again: memory bandwidth (i.e. the amount of data that can be moved between the CPU and RAM in one second).

Are these M-series chips so different that memory bandwidth is a more important marketing metric than clock speed, RAM and threads? Basically, yes…

In the companion article I described my experiences of moving back to Mac after eight years on an iPad Pro, and four years after Apple released the first of their M-series chips and thereby changed the specification landscape. A lot has happened in Mac world during that time, much of which I was blasé about until I was forced to choose between a new iPad and a new MacBook. There were lots of new terms, while some of the old terms had become irrelevant and others took on new levels of importance. Here’s an overview of what matters now and why…

M-SERIES SPECS & TERMS

Although commonly used to refer to Apple’s M-series CPU chips, the term ‘Apple Silicon’ refers to any of Apple’s proprietary chip designs (regardless of who they contracted to manufacture them) and dates back to the iPhone 4 – the first Apple device that combined most of its internal circuitry into a single Apple-designed chip. The idea of building most of a device’s internal circuits into a single chip dates back to the 1970s, when it was used for digital wristwatches. It became known as a ‘System On A Chip’, or ‘SoC’.

Apple’s M-series chips, debuted in 2020, took Apple Silicon to a new level – essentially putting an entire personal computer circuit into an SoC. Each uses ARM-architecture CPUs (see below) built into an SoC in which all of the processors’ circuitry is embedded into a single chip – with Unified Memory and long-term storage left as the variables for the purchaser to choose. Here’s a breakdown of common M-series terms and what they mean…

ARM Architecture: Fruit, Nuts & Limbs

In the early days of personal computing numerous tech companies named themselves or their products after fruit. Don’t ask why. Other than Apple, there was Apricot, Blackberry, Peach, Tangerine and more – a silicon fruit salad. One company was called Acorn (yeah it’s a nut, not a fruit, but it has the same organic connotations). Instead of using the popular Intel and Motorola CPUs that were all based on ‘Complex Instruction Set Computing’ (CISC), Acorn developed their own CPU based on ‘Reduced Instruction Set Computing’ (RISC). Its advantages included lower cost, smaller size, and increased performance per transistor. They called it the ‘Acorn RISC Machine’ or simply ‘ARM’. Acorn was dissolved in 2009 but their ARM concept took root and lives on as the ‘Advanced RISC Machine’.

Performance Cores & Efficiency Cores

Every M-series chip contains a number of optimised ARM processors, each called a ‘core’. There are Performance cores (P cores), Efficiency cores (E cores) and Graphics Processing Unit cores (GPU cores). What we used to call ‘threads’ were essentially ‘virtual cores’ inside the same CPU. In contrast, each P core and E core in an M-series CPU is an individual hardware processor designed for doing complex sequential tasks, all working together simultaneously as and when needed. Both use the same ARM-architecture, but with the following differences:

P cores (shaded green in the illustration) are optimised for serious number crunching and foreground tasks and therefore have larger buffers and faster clock speeds, at the expense of lower efficiencies and therefore higher operating temperatures.

E cores (shaded yellow in the illustration) are optimised for lighter processing duties and background tasks and therefore have smaller buffers and slower clock speeds, with the benefits of higher efficiencies and therefore lower operating temperatures.

More P and E cores allow the M-series processor to be faster and/or more powerful, assuming the apps are coded to take advantage of the parallel processing power. The P cores do the heavy lifting, while the E cores lighten the load for the P cores by taking care of menial tasks.

What about their clock speeds? Time for those later…

GPU cores

Shaded pink in the illustration below, GPU cores are fundamentally different to P and E cores, just as GPUs are different to CPUs in traditional computer designs. They’re optimised for handling many parallel tasks simultaneously, such as rendering graphics and video processing. They contain many smaller processors that collectively process a massive amount of data simultaneously.

To put that ‘massive amount of data’ into perspective, here’s an over-simplified example…

My 13-inch MacBook Air’s screen has a native resolution of 2560 pixels (wide) by 1664 pixels (high), giving it a total of 4,259,840 pixels. Each pixel has three colour components (red, green and blue), and each of those colour components has 10 bits – allowing for 1024 shades of red, 1024 shades of green, and 1024 shades of blue – a total of 1,073,741,824 possible colours (we’ll round that down to a billion) per pixel. Each pixel’s colour is therefore represented by 30 bits. Every one of those 4,259,840 pixels has to be checked against the pixels in the next image to be displayed on the screen and, if necessary, those 30 bits have to be recalculated to create the pixel’s next colour. With a refresh rate of 60Hz, this checking and recalculation of all 4,259,840 pixels occurs 60 times per second – and it’s happening now as I write this. Every new character I type requires a bunch of pixels to be re-calculated to represent that character, while also re-calculating the pixels of all the existing characters that get moved due to whatever I’ve just added. That’s quite a processing task, and yet it’s also a very fundamental one. As the user I don’t care how fast it actually happens, as long as it appears instantaneous to me.
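To put numbers on that, here’s the same arithmetic as a short Python sketch. The figures come from the paragraph above; the data-rate estimate at the end is my own back-of-envelope extension, not a figure from Apple:

```python
# Pixel arithmetic for the 13-inch MacBook Air display example.
width, height = 2560, 1664      # native resolution
pixels = width * height         # 4,259,840 pixels
bits_per_pixel = 3 * 10         # red, green and blue at 10 bits each
colours = 2 ** bits_per_pixel   # 1,073,741,824 possible colours per pixel
refresh_hz = 60                 # screen refreshes per second

# Raw pixel data touched per second, if every pixel changed every refresh
bits_per_second = pixels * bits_per_pixel * refresh_hz
gigabytes_per_second = bits_per_second / 8 / 1e9

print(f"{pixels:,} pixels, {colours:,} colours per pixel")
print(f"~{gigabytes_per_second:.2f} GB of pixel data per second at {refresh_hz}Hz")
```

That worst case is just under 1GB of pixel data every second, for one fairly small built-in display, before the GPU has done anything more exotic than drawing text.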

More GPU cores mean the M-series chips can process complex graphics faster which, among other things, allows faster screen refreshing, support for larger external monitors, and so on.

Unified Memory, RAM & VRAM

Another speed-enhancing feature of the M-series chips is what Apple calls ‘Unified Memory’. Haters dismiss it as a funky Apple re-branding for ‘RAM’, but there is a fundamental difference that justifies the new terminology.

A traditional computer has two forms of temporary ‘working space’ memory. One is the classic RAM (Random Access Memory), which the CPU uses to hold data it is working on. The other is VRAM (Video RAM) which the GPU uses. In traditional systems the image data is passed from RAM to VRAM, where the computer’s GPU (Graphics Processing Unit) fetches it and prepares it for the display. The transfer from RAM to VRAM adds one more step to the process with every display refresh. If your display is running at 60Hz, that’s 60 image transfers from RAM to VRAM every second.

The M-series chips use a single chunk of ‘working space’ memory that serves both as RAM and VRAM; hence it is called Unified Memory. It is soldered directly into the M-series’ chip block, not the motherboard, and both the CPU and GPU can access it directly – removing the requirement to transfer image data from RAM to VRAM, thereby reducing processor burdens and speeding things up considerably.

However, because Unified Memory is shared between the CPU and the GPU there needs to be enough of it to serve both purposes. How much of the Unified Memory serves as RAM and how much serves as VRAM at any given moment depends on what’s being worked on – hence, neither can be given a fixed number in the specifications. A graphics-intensive process such as 3D animation means the GPU cores will use more of the Unified Memory, leaving less for the P and E cores. Similarly, a math-intensive process with minimal graphic requirements such as a complex spreadsheet means the P and E cores will use more of the Unified Memory, leaving less for the GPU cores.

At the time of this writing, 16GB is considered the minimum Unified Memory for users doing more than writing texts, editing selfies, and consuming social media. It’s also better suited to Apple Intelligence’s goal of doing most of its AI processing ‘on device’ (more about that later). For this reason, the base models of all macOS devices were upgraded from 8GB to 16GB of Unified Memory in October 2024, and the recently announced iPhone 17 Air and iPhone 17 Pro both received a mid-manufacturing spec bump from 8GB to 12GB of RAM.

Memory Bandwidth

This is a measure of how fast data can be moved between the Unified Memory and the cores. For example, as we move up through the M4 MacBook range we find that the Memory Bandwidth increases from 120GB/s throughout the MacBook Air range up to 410GB/s in the top of the line MacBook Pro. Interestingly, the base model MacBook Air supports up to 32GB of Unified Memory, while the top of the line MacBook Pro supports up to 128GB. So the MacBook Pro supports 4x as much Unified Memory as the MacBook Air, and its Memory Bandwidth is increased accordingly to 3.4x faster.
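A back-of-envelope sketch shows what those bandwidth figures mean in practice. The 120GB/s and 410GB/s values are the ones quoted above; the 10GB ‘project’ size is a hypothetical example of mine, not a figure from this article:

```python
# Memory bandwidth figures quoted above, in GB/s
bandwidths = {
    "M4 MacBook Air": 120,
    "M4 Max MacBook Pro": 410,
}

project_gb = 10  # hypothetical: e.g. a large sample library in Unified Memory

for model, gbps in bandwidths.items():
    ms = project_gb / gbps * 1000
    print(f"{model}: {gbps}GB/s -> {ms:.0f}ms to move {project_gb}GB")

ratio = bandwidths["M4 Max MacBook Pro"] / bandwidths["M4 MacBook Air"]
print(f"Bandwidth ratio: {ratio:.1f}x")  # ~3.4x, matching the text
```

Either machine shifts the whole 10GB in a fraction of a second; the difference only starts to matter when that kind of transfer is happening continuously.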

It’s worth mentioning that the current Mac Pro’s memory bandwidth is 800GB/s, while the current Mac Studio’s memory bandwidth is 819GB/s. However, in their top configurations both of these desktop machines are using the Ultra versions of earlier M-series chips (M2 Ultra in the Mac Pro, M3 Ultra in the Mac Studio).

Neural Engine

Sometimes abbreviated to ‘NPU’ (Neural Processing Unit), this represents Apple’s focus on seamlessly integrating advanced AI technologies into the user’s everyday computing experiences. Shaded in aqua in the illustration below, the Neural Engine is tailored to handle complex Machine Learning (ML) and AI tasks with minimal power consumption, enabling ML features such as voice recognition in Siri, natural language processing, facial recognition for Face ID, predictive text suggestions, visually categorising images in the Photos app, and more. As we’ll soon see, the Neural Engine is the major hardware component of Apple Intelligence…

TOPS

This is an acronym for ‘Trillions of Operations per Second’, and provides a better indication of processing speed than clock speed (see below) when referring to parallel ARM-based processors. In Apple Silicon terminology it is mostly used to describe the speed of the Neural Engine, because it is capable of running many trillions of operations per second.

Standard, Pro, Max & Ultra

Each of the M-series chips has been made in three distinct versions – Standard, Pro and Max – with the primary differences between each version being the number of CPU cores, the number of GPU cores, the maximum accessible Unified Memory, and the Memory Bandwidth. Note that throughout any given series (e.g. M4 standard to M4 Max), the maximum clock speed of the P cores remains essentially the same, which tells us that within any given M-series the clock speed is not a contributing factor to improved performance. The four parameters to compare, therefore, are the total number of processing cores (if you do a lot of number crunching or coding), the number of graphics cores (if you do a lot of video or animation work), the total amount of Unified Memory the chip can access (large files or lots of small files being worked on simultaneously within the same project will benefit from more Unified Memory), and Memory Bandwidth (faster values will transfer large files in and out of the CPU faster than slower values).

In addition to the Standard, Pro and Max versions of each chip, the M1, M2 and M3 chips also come in an Ultra version, which is two Max chips connected together using Apple’s ‘UltraFusion’ technology. They appear as a single chip to macOS with twice the number of P, E and GPU cores, along with larger and faster Unified Memory access, and double the hardware capability for ‘on-device’ Apple Intelligence.

It’s worth noting that the M4’s fundamental design does not include Apple’s UltraFusion technology, so we should not expect to see an M4 Ultra chip. Perhaps Apple will resurrect the Ultra version with the M5 series of chips, but for now M3 Ultra remains considerably more powerful than the M4 Max – the 10% increase in maximum P core speeds between the M3 and M4 is easily outweighed by the M3 Ultra’s 100% increase in P, E and GPU cores.

At the time of this writing the M3 Ultra is the most powerful M-series chip available, so if a new M4 device is out of your budget consider purchasing a model from the earlier generation with an M3 Ultra chip.

What We Know So Far…

The three tables below are all derived from a single (and huge) table that aims to bring together the M-series specs described above, from M1 to M4, allowing us to make meaningful comparisons between the entire range of M-series devices while also gaining insights into what actually changes from one generation to the next, and therefore makes each new generation faster than the previous one. We can see that the headings of the columns in each table remain the same, allowing us to easily jump from one table to another to compare different aspects as shown in the rows.

Let’s start with the first table, which focuses on technical specifications…

The top part of the table, shaded in blue, includes all the specifications related to the chips’ number-crunching power. Of those, I’ve isolated the two that will be of most interest to readers going up from Intel to M-series: the total number of cores, and the maximum clock speed of the P cores. Why have I focused on only those two specs? Because the total number of cores matters to the actual number-crunching power, while the maximum clock speed of the P cores matters to readers who still can’t get their head around why clock speeds are no longer a valid specification. Notice that, within any given generation, the clock speed remains the same even though the performance increases significantly between the different models within that series, for example, the M3, the M3 Pro, the M3 Max and the M3 Ultra. Clock speed, therefore, has nothing to do with it. The total number of cores is what matters. Hopefully the graph makes that clear. Can I hear an ‘Aha!’ or two?

As a matter of interest, the maximum clock speed of the P cores has some relevance when comparing different generations of the M-series, such as M3 vs M4. Basic maths shows that each new generation of M-series of chips has approximately 10% faster clock speeds for the P cores, and yet benchmark tests often show speed increases between 15% and 40% (depending on the task) between generations, with 20% being typical. Clock speed is obviously a minor contributor to those ‘real world’ differences, but a 10% increase in clock speed is not going to be anything more than marginally responsible for a 40% increase in performance. That’ll be due to the increased number of cores.

Think of the cores as farm workers, and the clock speed as their working speed. For any given working speed, who’s going to plough the field faster? 10 workers doing 10 rows per hour, or 20 workers doing 10 rows per hour…
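The farm-worker analogy can be expressed as a trivial sketch (all numbers illustrative, chosen to mirror the ~10% clock bumps and core-doubling discussed above):

```python
# Toy model: total throughput scales with the number of workers (cores),
# not just their individual working speed (clock).

def rows_per_hour(workers: int, rows_each: int) -> int:
    """Rows ploughed per hour by a team of equally fast workers."""
    return workers * rows_each

print(rows_per_hour(10, 10))  # 100 rows/hour
print(rows_per_hour(20, 10))  # 200 rows/hour: same speed, twice the workers

# The same logic applied to the chips: a ~10% clock bump vs doubling
# the core count (as the Ultra chips do):
base = rows_per_hour(10, 10)
print(f"+10% clock: {rows_per_hour(10, 11) / base:.0%} of baseline")  # 110%
print(f"2x cores:   {rows_per_hour(20, 10) / base:.0%} of baseline")  # 200%
```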

The yellow shaded part through the middle shows the number of GPU cores. Again, clock speed is not a metric here; more cores means a more powerful GPU, which is important for graphics processing, 3D rendering, game playing, etc. Note that the M1 Ultra, M2 Ultra and M3 Ultra have 64, 76 and 80 GPU cores respectively – twice the number of cores found in the Max version of each chip, as expected because the Ultra chip is two Max chips running together.

The two lines shaded in red represent the relationship between the cores (P, E and GPU) and the Unified Memory. The upper red line shows the maximum amount of Unified Memory available to each chip, which we can consider as the ‘working space’ for the P, E and GPU cores. The lower red line shows the Memory Bandwidth; in other words, how quickly the cores can access the data they’re working on.

At the bottom are two green lines representing the chip’s Neural Engine, which takes care of all the M-series chip’s ML and is, essentially, the hardware component of Apple Intelligence. The top green line shows how many cores it has; note that all M-series chips so far have 16 cores except for the Ultra models, which have 32 – again because the Ultra chips are two Max chips working together.

The lower green line shows how many TOPS (trillions of operations per second) each Neural Engine is capable of, with big advances from the M1 (11 TOPS) to the M4 (38 TOPS). As noted in the companion article, when the Neural Engine for the iPhone 16 reached 35 TOPS (as per the iPhone 15 Pro) and the Neural Engine for the M4 chip reached 38 TOPS, Apple changed their marketing from ‘Hello Apple Intelligence’ to ‘Built For Apple Intelligence’; essentially indicating that the Neural Engines in both devices, and presumably everything else labelled ‘Built For Apple Intelligence’, were sufficiently powerful to meet the average user’s AI needs ‘on-device’ – meaning your data never has to leave your device for AI tasks, which is a win for privacy.

Deprecated, Vintage & Obsolete

Most dictionaries define the word ‘deprecated’ as meaning ‘not approved of, or not recommended’. When Apple says a device is being ‘deprecated’ it is usually accompanied by a deprecation date, which means from that date onwards they will reduce and eventually eliminate support for that device, and are therefore discouraging developers from creating or upgrading apps to support the deprecated device. From a user point of view, once a device has been deprecated it is best to turn off all auto-updates and leave the machine as it is – ‘part of the road’.

Apple considers any product that has not been on sale for five years or more to be vintage, and anything that has not been on sale for seven years or more to be obsolete – as shown in the table below. In that respect, both The iMortal (early-2013 MacBook Pro) and The Recently Deceased (2017 iPad Pro) are vintage and obsolete. (You can read about The iMortal and The Recently Deceased in the companion article.)

Among other things, these deprecated, vintage and obsolete categorisations mean you’re better off starting with a clean install when replacing a vintage or obsolete device, rather than using an iCloud Backup or a Time Machine backup – otherwise you’ll be lumbering the new machine with a lot of old apps that will need updating and/or need Rosetta installed, and some that don’t exist anymore and won’t run properly on the new machine. If a machine is vintage (five years off the shelves) or obsolete (seven years off the shelves), it’s a pretty safe bet its hardware has been obsoleted by software, so copy across your documents and files but upgrade the apps and OS.

It will be interesting to see whether Apple changes these definitions after deprecating the Intel-based machines. If we look at the M-series range, from the M1 to the M4, we see that each new series gets more cores and marginally faster clock speeds, but no new features that would obsolete earlier models – except for increased TOPS from the Neural Engine, which means the older machine will place more reliance on off-device processing (via Apple’s ‘Private Cloud Compute’ or PCC, as explained later) for Apple Intelligence. Perhaps after the Intel-based machines have all been deprecated, the definitions of vintage and obsolete might become partly based on how much off-device processing a device requires for Apple Intelligence.

M-Series Device History

The table below shows a simplified history of the M-series chips and which Apple devices they were used in. The chips shown are those supplied with the standard model, which can often be upgraded when ordering or purchasing.

It’s easy to dismiss this table at a glance as being simply of historic interest, but taking a second look reveals an emerging pattern regarding which devices get which chips. This reassuringly indicates a consistent design philosophy in which the manufacturer (Apple) ‘makes the entire widget’ and is therefore no longer beholden to the whims and quirks of another CPU manufacturer. They are able to make consistent and almost linear progressions from one M-series to the next with none of the odd one-off deviations that result in a product being left behind prematurely.

OTHER SPECS & TERMS

The following specs and terms often come up when speaking of contemporary computers, AI and so on. Some are unique to Apple Silicon, others are standard industry terms…

Machine Learning (ML)

The term ‘machine learning’, or ML, was coined in 1959 by Arthur Lee Samuel, a pioneer in computer gaming and artificial intelligence. He created one of the first self-learning programs – for playing checkers – quite an achievement for the time and technology.

ML isn’t new to Apple; they were using it in the early ‘90s for handwriting recognition – remember Apple’s Newton product range? These devices allowed the user to handwrite onto the screen using a stylus, and the Newton converted it into text files. It could also convert hand-drawn shapes into vector graphics. However, Apple didn’t start using the term ‘ML’ prominently in their marketing or in their developer conferences until 2016, one year before introducing the ‘Neural Engine’ in the A11 Bionic chip (as used in iPhone 8, iPhone 8 Plus and iPhone X). Refined versions of these ML algorithms are now used for the handwriting-to-text conversion and shape conversion capabilities in the iPad, and also extracting text from images containing handwritten text.

Apple considers ML to be a branch of AI, whereby apps and games learn from user behaviour to customise the user experience. For example, Apple Music and Apple TV+ both use ML to tailor suggested content for users (rather like the algorithms used by social media platforms). It’s also evident in Apple Maps, where ML algorithms analyse user travel patterns, traffic data, and other factors to provide personalised route suggestions and to optimise navigation.

The User-Facing App

This term is surprisingly literal; it refers to the front-most window on the screen, which represents the app that is currently facing the user, hence it is the ‘User-Facing App’. It’s the app the user is currently interacting with via keyboard, mouse, trackpad, screen, sound, etc. We’ll see more about the importance of this below…

Quality of Service (QoS)

macOS itself – not the app and not the CPU – determines the number and type of cores allocated to each app and its associated tasks. It changes the allocation of cores in real-time if the user moves an app from the background to become the User-Facing App, or vice versa, and it also alters the clock speed of each core to create priorities within the same type of cores. There’s no point in core A producing a result in 0.1 seconds if core B doesn’t need that result for another 1.5 seconds; that’s just doing things unnecessarily fast – it’s less energy efficient, which means hotter temperatures, a higher risk of ‘throttling’ (see below) in machines without fans, and shorter battery life.

All of this prioritising between cores and clock speeds is based on a ranking system known as ‘Quality of Service’ (QoS), which aims to keep the device as responsive as possible to the user at all times. The combination of a multi-core ARM architecture and Apple’s version of QoS creates a very fast computer that feels even faster (read: ‘more responsive’) because of the way it prioritises the User-Facing App. Despite doing millions of processes per second, it always seems to have a core specifically waiting to respond to your next command – unless you’re working on documents synchronised to cloud storage or otherwise relying on downloading, in which case internet speed is the bottleneck. More about that elsewhere…

Rosetta

This is a software compatibility layer in macOS that allows apps built for Intel-based Macs to run on M-series Macs. It dynamically translates the Intel code to a format that the M-series chip can understand, easing the transition by allowing users to continue using their existing apps until the developers make Apple Silicon versions.

Note that Rosetta is not pre-installed on Apple Silicon devices, but the Mac knows where to find it and the user will be prompted to install it if an app requires it.

Clock Speed

Once upon a time a computer’s clock speed was the first indicator we’d turn to when comparing one CPU against another. For any given CPU series (e.g. Intel’s 80xx series et al), a faster clock speed generally meant a faster and more powerful CPU. It was an important marketing metric for Apple when they were using Intel CPUs, but it disappeared upon the release of their M-series chips in 2020 – now you have to dig pretty deep into the technical documentation to find it. Why? Primarily because a) there are multiple cores working side-by-side at their individually allotted speeds, and b) there isn’t a single clock speed anymore. macOS varies the clock speed of each core to ensure it is running as fast as necessary but no faster. In a computerised version of Just In Time manufacturing, each core’s speed is adjusted to ensure it is running as efficiently as possible while also delivering the goods in time. In the M4 chip, the clock speed for P cores can be varied up to 4.3GHz, while E cores can be varied up to 3GHz.

Throttling

The MacBook Airs use the same M4 chips found in the lower echelons of the MacBook Pro range, as do the iPad Pros. They are capable of working just as hard, which means they can get very hot. The MacBook Pros and desktop Macs have internal fans to cool the processor if it gets too hot, so it can continue working, but the MacBook Airs (and iPad Pros) do not have internal fans and therefore cannot cool themselves in this way. If or when a MacBook Air or iPad Pro gets too hot, its operating system slows down the processing speeds until it has cooled down again. This ‘slowing down/cooling down’ process is something that gamers, over-clockers and other Worshippers Of The Blue LED refer to as ‘throttling’. If your typical workflow causes your MacBook Air to throttle it means you have bought the wrong machine; you might’ve saved a couple of hundred dollars but you’ll pay for that in lost time when the machine throttles during long bounces or renders. That’s why Apple offers the more expensive MacBook Pros with the same M4 chip but with built-in cooling fans. I cannot ‘duh’ this loud enough…

MEMORY MATHS

I’m just going to focus on storage costs here, not Unified Memory, because that is built into the M4 chip package and there is no option for an external alternative. It’s worth noting, however, that Unified Memory increases at a rate of $200 USD per 8GB throughout the MacBook Air range but becomes cheaper in the larger quantities used for the MacBook Pro range and the desktop models.

Internal Storage Costs

To avoid confusion in the following calculations I’m going to stick to a standard unit of storage measurement throughout – the Gigabyte, or GB. Computer storage is measured in base-2 values, therefore 1TB = 1024GB, 2TB = 2048GB, and 4TB = 4096GB.
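Expressed in code, the base-2 relationship is simply a power of two. A trivial sketch:

```python
# Base-2 storage units: 1TB = 2**10 GB = 1024GB
GB_PER_TB = 2 ** 10  # 1024

for tb in (1, 2, 4):
    print(f"{tb}TB = {tb * GB_PER_TB}GB")
# 1TB = 1024GB
# 2TB = 2048GB
# 4TB = 4096GB
```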

The internal storage options and prices from Apple (as at September 3, 2025) are listed below, using the base model 13-inch M4 MacBook Air (10 core CPU, 8 core GPU, 16 core Neural Engine, 16GB of Unified Memory, 256GB of internal storage) as a starting point. As mentioned in the companion article and below, it’s always worth checking if the internal storage upgrade comes with any tempting incentives such as a slightly more powerful CPU or a choice of more powerful chargers. For now, however, we’ll focus just on the storage and ignore the incentives.

256GB: $999 USD (base model)

512GB: $1199 USD (base + $200 USD)

1024GB: $1399 USD (base + $400 USD)

2048GB: $1799 USD (base + $800 USD)

Considering that the base model comes with 256GB installed, here’s how Apple’s internal storage pricing actually works out:

The 512GB model costs $200 USD more than the base model and provides 256GB more storage at a cost of 78 cents per extra Gigabyte. (i.e. $200 / 256GB = 78.1 cents per GB)

The 1TB (1024GB) model costs $400 more than the base model and provides 768GB more storage at a cost of 52 cents per extra Gigabyte. (i.e. $400 / 768GB = 52.1 cents per GB)

The 2TB (2048GB) model costs $800 USD more than the base model and provides 1792GB more storage at a cost of 45 cents per extra Gigabyte. (i.e. $800 / 1792GB = 44.6 cents per GB)
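The per-gigabyte figures above can be reproduced with a few lines of arithmetic. A minimal sketch, using the US prices quoted in this section (the function name is mine, for illustration only):

```python
# Cost per extra gigabyte for Apple's internal storage upgrades on the
# base M4 MacBook Air, relative to the 256GB included in the base model.
BASE_GB = 256  # storage included in the $999 USD base model

def cents_per_extra_gb(total_gb: int, upgrade_usd: float) -> float:
    """Upgrade cost in US cents per gigabyte of additional storage."""
    return upgrade_usd * 100 / (total_gb - BASE_GB)

# 512GB: 20000 / 256  = 78.125  -> ~78 cents per GB
# 1TB:   40000 / 768  = 52.08.. -> ~52 cents per GB
# 2TB:   80000 / 1792 = 44.64.. -> ~45 cents per GB
for total_gb, upgrade_usd in [(512, 200), (1024, 400), (2048, 800)]:
    print(f"{total_gb}GB: {cents_per_extra_gb(total_gb, upgrade_usd):.2f} cents/GB")
```

The larger the upgrade, the cheaper each extra gigabyte becomes – but as the next section shows, even the cheapest internal gigabyte is far dearer than external storage.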

Those are the prices for internal storage; it’s expensive, but for some users it’s worth paying for because it is built into the device and is always there. It’s not taking up any ports, there’s nothing hanging off the side of the laptop, and it’s probably faster than external storage. If you were mixing large multitrack sessions on-the-go it might be worth paying extra for the internal storage so you don’t need an external drive hanging off one of your ports. Speaking of which…

External Storage Costs

Checking out the external options, I focused on a series of pocketable bus-powered SSDs from Crucial’s X9 Pro series; not their (physically) smallest and most affordable X6 series, and not their faster and more expensive X10 series. There’s much more involved in these external SSDs than simply soldering larger capacity chips (or more chips) onto existing spaces on a MacBook’s motherboard. These chips are mounted on their own boards that also include a high-speed USB-C driver chip and port, and it’s all fitted into a weather-sealed aluminium case, bundled with an easily replaceable high-speed USB-C data/power cable.

Here are Crucial’s recommended retail prices. Note that the prices found by following Crucial’s on-site ‘buy’ links to Amazon and B&H were generally lower at the time of this writing:

1024GB (1TB): $94.99 USD (i.e. 9.3 cents per GB)

2048GB (2TB): $136.99 USD (i.e. 6.7 cents per GB)

4096GB (4TB): $250.99 USD (i.e. 6.1 cents per GB)

8192GB (8TB): $462.99 USD (i.e. 5.7 cents per GB)

For the sake of pushing this pricing exercise to an extreme I’ve included Crucial’s X10 Pro for its 8TB capacity (not available in the X9 Pro series). It is essentially the same pocketable size, weight and form factor, but offers twice the read speed and greater resistance to dust and water (IP65 rating instead of IP55). Apple does not offer an 8TB storage option for the MacBook Airs, but we can get a price indication for comparison purposes by looking at the upper echelons of the MacBook Pro range, where the 8TB storage option adds $2,200 USD to the price. The base models of these upper echelon MacBook Pros come with 1TB of storage as minimum, therefore Apple are charging $2,200 USD for an additional 7TB (7168GB) of storage. Based on these prices, we see that the Apple option costs 31 cents per GB for on-board storage while the Crucial option costs 5.7 cents per GB for external storage. The Apple option costs more than five times as much.
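The 8TB comparison above works out as follows. A minimal sketch using the prices quoted in this section (variable names are mine, for illustration only):

```python
# Apple's 8TB upgrade on the upper MacBook Pros vs Crucial's external 8TB X10 Pro.
apple_upgrade_usd = 2200.0   # 8TB option over the 1TB base model
apple_extra_gb = 7 * 1024    # 7TB of additional storage = 7168GB

crucial_usd = 462.99         # Crucial X10 Pro 8TB recommended retail price
crucial_gb = 8 * 1024        # 8192GB

apple_cents_per_gb = apple_upgrade_usd * 100 / apple_extra_gb  # ~30.7, i.e. ~31 cents
crucial_cents_per_gb = crucial_usd * 100 / crucial_gb          # ~5.7 cents

print(round(apple_cents_per_gb, 1))    # 30.7
print(round(crucial_cents_per_gb, 1))  # 5.7
print(round(apple_cents_per_gb / crucial_cents_per_gb, 1))  # 5.4 (times more expensive)
```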

When purchasing my 13-inch MacBook Air I asked the salesperson which model and configuration was the most popular. “This one,” she said, tapping on the box I’d just purchased, “…and an iCloud+ account, if you don’t already have one.” “What about external storage?” “I don’t know, we don’t sell that here…” More about iCloud+ below…

APPLE INTELLIGENCE

The term ‘Apple Intelligence’ broadly refers to Apple’s evolving suite of AI and ML tools that emphasise productivity and privacy. Apple treats it more like a set of tools to be integrated into existing and new apps, rather than an app of its own.

It’s also the name given to a single unified Large Language Model (LLM) that integrates the same AI capabilities across all of Apple’s devices (Macs, iPads, iPhones, hence ‘unified’), for seamless performance and privacy. It processes natural language efficiently, and supports tasks like predictive text, voice commands, and contextual suggestions with minimal latency. Its ‘on-device’ processing maintains data security, but can also use cloud-based enhancements for more complex processing. At present, the Apple Intelligence LLM consumes just under 10GB of storage.

The Neural Engine is the primary engine for Apple Intelligence, and handles tasks such as on-device ML (Machine Learning), natural language processing, and image recognition. However, Apple Intelligence also uses the CPU’s P and E cores for general processing tasks related to AI features, and uses GPU cores to assist in tasks that involve image processing and graphical calculations. The Unified Memory ensures that the Neural Engine, the CPU and the GPU all have fast access to the same memory pool – which is crucial for efficient AI processing.

For those AI-savvy types who are looking for big numbers and know what they mean, the Apple Intelligence LLM is currently a three billion parameter model that consumes 10GB of storage, and the M4’s Neural Engine is capable of 38 TOPS (i.e. 38,000,000,000,000 operations per second).

We’ve already discussed P cores, E cores, GPU cores and Unified Memory in ‘M-Series Terms’ above…

The graph below shows how much the Neural Engine has improved in the A-series (blue) and M-series (green) chips since 2020. Note the significant leap in performance between the M3 and M4 (18 to 38 TOPS, more than double), which was preceded by a similar leap from the A16 to the A17 SoCs (17 to 35 TOPS).

It’s no coincidence that Apple changed their marketing slogan from “Hello Apple Intelligence” to “Built for Apple Intelligence” after the Neural Engines in the A-series devices (35 TOPS) and in the M-series devices (38 TOPS) both became powerful enough to meet the average user’s AI requirements ‘on-device’ and in a unified manner.

The A18 and M4 SoCs represent an inflection point in technology; a junction where personal devices, personal computers, and personal AI meet. It’s also the beginning of AI becoming normalised as a component of a personal device alongside the CPU, the storage and the display – rather than a lumbering time-sharing mainframe in some conglomerate’s cool room that you have to subscribe to and sacrifice your privacy in return for access.

To summarise this overview, Apple Intelligence’s LLM is coded into macOS, iOS and iPadOS so it can be improved and updated over time as part of an OS update, while the AI- and ML-specific aspects of its processing are done by the Neural Engine, which can be improved and updated with each new generation of hardware (as shown in the graph above). Together, the LLM and Neural Engine provide the missing components required to give a computer or other personal device (iPhone, iPad, etc.) on-board AI capabilities.

So What Is Apple Intelligence?

Nice sales pitch, but what is it? Let’s start by saying what it’s not… It’s not a website you have to log on to. It’s not something you have to download from the App Store. It’s not something that requires a one-off payment or signing up to a monthly subscription. It’s not something that requires prompting to get a result (although it relies on prompting when appropriate). It’s not something you can use for free but only if you permit its owners to use your content to train their AI. It’s not claiming any rights to anything it creates for you, and it’s not sharing your personal data or compromising your privacy. It’s none of those things…

It’s a suite of specific AI and ML tools that appear as menu items in some applications, as tools in other applications, or as entirely new applications for specific purposes. It’s built into the latest models of Apple’s personal computers and devices, where it becomes part of the hardware, part of the operating system and part of the apps, and it works cross-platform between different Apple devices. You don’t ‘open’ Apple Intelligence, you just use it. There’s a switch in the System Settings to choose whether you want it on or off, and that’s it.

Apple is leading the way in showing that Artificial Intelligence should be de-centralised; it should be integrated into our devices, it should be personal and private, and we shouldn’t have to give away our intellectual property rights to a multinational conglomerate to access their massive centralised AI. It should exist inside the boundaries of our devices, and be usable without needing an internet connection. It should be just another part of what every personal computing device is built around – alongside processing and storage. That’s a goal that Apple is ideally positioned to achieve thanks to Steve Jobs’ ‘build the entire widget’ philosophy. More about that below…

On-Device & Off-Device

Anything that Apple Intelligence can do using the LLM, the Neural Engine, the CPU, the GPU and the Unified Memory in the device is known as ‘on-device’ processing – your content isn’t sent out of your device for any AI-related tasks, which is great for privacy and security. It’s the AI equivalent of mixing in-the-box. Although it’s still very new, there’s a lot you can do with Apple Intelligence that doesn’t require an internet connection, including all of the Writing Tools (except ‘Compose’, which uses ChatGPT integration through a private channel so that even OpenAI – who owns ChatGPT – cannot see your data), the Erase and Retouch tools in the Photos app, the extra accompaniment musicians in Logic Pro, and more. If the selected Apple Intelligence menu option or tool works without needing the internet, then it is ‘on-device’ processing. Your privacy is assured because your data never left your device.

Tasks that currently require too much processing to do on-device are sent to Apple’s Private Cloud Compute (PCC), a secure cloud infrastructure designed to handle computationally intensive tasks for Apple Intelligence while prioritising user privacy. It works with your device similarly to iCloud; you don’t have to log on or click ‘Okay’ every time your device needs to use it, it connects automatically using your Apple ID. As with iCloud, it will let you know if there is no wifi connection. This is known as ‘off-device’ processing.

PCC extends the privacy and security features of Apple devices to the cloud, ensuring that user data is only used for the specific request and is not retained or accessed by Apple. The haters, doubters and datanoids can read more about PCC here.

It’s possible that performing a complex or difficult Apple Intelligence process on an M1 Mac might require off-device processing due to its slower Neural Engine (11 TOPS) and slower CPU, but the same Apple Intelligence task could be done on-device with an M4 Mac due to its more powerful Neural Engine (38 TOPS) and its faster CPU. As the technology evolves and improves, we can expect to see increasingly more Apple Intelligence processing occurring ‘on-device’ – which is the goal.

What Can It Do?

For most people, the current offerings from Apple Intelligence are underwhelming. There are better options out there, but those who dismiss it are forgetting two important aspects of Apple Intelligence. The first is that what Apple Intelligence is offering will ultimately be done entirely on your device (much of it already is), with no need for an internet connection. This is great for privacy and for those with poor internet connections. The second is that the Apple Intelligence tools currently available are really just teasers to encourage third-party app developers to get on board. To understand that plan, we need a surface-level historical perspective…

When Apple released the Macintosh in 1984 it came with a number of simple apps such as MacWrite (word processor), MacPaint (bit-mapped drawing) and MacDraw (vector-based drawing). These pre-installed apps made the machine useful straight out of the box, and gave potential app developers a glimpse of what was possible on this new and different type of personal computer. It was the first readily-available personal computer to use a GUI (Graphical User Interface) based on the WIMP (Windows, Icons, Menus, Pointer) system developed by researchers at Xerox’s Palo Alto Research Center (aka PARC). Computer users weren’t used to the mouse, or to menus that popped down from the top of the screen, or to dragging and dropping things, or even to having icons on the screen. Prior to the Macintosh, computers had screens that displayed everything in rows of green or amber text, and every program (we now call them ‘apps’) had its own set of keyboard commands that needed to be remembered. There was no consistency between apps, and doing anything required typing arcane terms, held together with backslashes and colons, into a Command Line Interface (CLI), or remembering combinations of Function keys and other keys – a process that itself limited computer usage to technical types. [For those who want a glimpse into those days, open your Mac’s ‘Terminal’ app and try to work from there. Good luck!]

MacWrite, MacPaint and MacDraw were simple programs; fully functional, but also intended to encourage developers to make their own versions with greater features and capabilities. Apple provided the basic example, but kept it simple enough that third-party developers could create something better to compete against it. Apple also created physical folders of information known as the Macintosh Human Interface Guidelines, intended for developers to make apps that maintained the consistent Macintosh look and feel. The basic message was simple: “if you want to develop apps for the Macintosh, these are the rules for how it must interact with the user”. The idea was to create a personal computing environment that was open to third-party developers while maintaining a highly consistent look and feel from one app to another. For example, the File menu was, and still is, always first, with standardised terminology options including New, Open, Save, Save As, Print, and Quit. The Edit menu was, and still is, the second menu, with standardised terminology options including Undo, Cut, Copy, Paste, and Delete. Apple intended to simplify personal computers, and succeeded – taking them beyond the offices and boardrooms of the corporate world, and into the homes.

It should be no surprise that Apple introduced the Macintosh as ‘the computer for the rest of us’…

What’s this got to do with Apple Intelligence? More than meets the eye. There are many parallels between the current state of the AI industry and the pre-Macintosh days of personal computing. There are AIs for all sorts of purposes, just as there were programs (aka ‘apps’) back then for all sorts of purposes, but each AI has its own interface and instructions with little consistency between them – just like the programs (apps) of the pre-Macintosh days. The whole AI concept feels unapproachable and over-technical to many potential users, leaving it largely in the hands of nerds. Apple Intelligence aims to bring AI into the user’s personal computer by building it into the apps, and making it just another menu item or tool that conforms with the Human Interface Guidelines and is, therefore, consistent and approachable.

As with MacWrite et al 40 years ago, Apple Intelligence at present is intentionally just a teaser of what’s possible. Apple wants third-party app developers to see the potential and make serious use of it, integrating it into their apps and giving their customers a reason to upgrade.

To that end, Apple are giving developers access to Apple Intelligence through their Foundation Models Framework, allowing app developers to integrate private, fast and powerful Apple Intelligence capabilities into their apps, so the user can access them either through the menu system in apps that rely on menus (writing, spreadsheets, etc.), or as tools in tool-based apps (i.e. drawing, photo editing, etc.).

You don’t need to be a genius to see the parallels between what Apple did with personal computers and the Human Interface Guidelines 40 years ago, and what they’re doing now with AI and the Foundation Models Framework. They’re taking technology out of the hands of corporations and nerds, and putting it into the home.

It should therefore be no surprise that Apple are introducing Apple Intelligence as ‘AI for the rest of us’…

They can do this because, with Apple Silicon and the M-series, they now ‘build the entire widget’. This means they can integrate the OS, the LLM, the Neural Engine, and the M- or A-series chip into a device knowing there will be no compatibility issues. It’s an enviable position to be in, especially if the world starts wanting AI to be just another part of a personal computer or mobile device. Will Microsoft build an LLM into their Windows operating systems, and will Windows computer manufacturers build Neural Engines into their computers that are compatible with Microsoft’s LLM? Likewise, will Google build an LLM into their Android operating system, and will Android phone manufacturers build Neural Engines into their phones that are compatible with Android’s LLM? Licensing and subscription issues abound, and it’s highly unlikely it will be free. Interesting times ahead…

Now let’s look at something else that is unique to Apple’s ‘build the entire widget’ philosophy, is built into the hardware, OS and apps of Apple devices, and would be difficult for other manufacturers who don’t build the entire widget to emulate with the same ease…

I, CLOUDIUS

The term ‘iCloud’ refers to Apple’s multi-faceted cloud storage service that allows users to store files and access them across multiple Apple devices. It’s a central hub for documents, photos, and other files, keeping them synced and up-to-date across your iPhone, iPad, Mac, Vision Pro, and even Windows computers (if they have iCloud for Windows installed).

The Entire Widget

Before delving further into iCloud, let’s look deeper into a phrase that’s been thrown around a lot throughout this article: ‘build the entire widget’ (where ‘widget’ refers to a product or device). Apple is one of the few – if not the only – personal computer or device manufacturer that ‘builds the entire widget’: hardware, software and firmware. [Haters will point out that Apple don’t build their own chips and therefore don’t ‘build the entire widget’. Hate on. Apple design and specify their proprietary chips and have them made by specialised chip manufacturers. In this context, only the most litigious of pedants would argue against how the word ‘build’ is used in ‘build the entire widget’.]

This ‘entire widget’ concept was a fundamental part of Apple co-founder Steve Jobs’ philosophy, and has allowed Apple to evolve their products independently and quickly, without relying on the support of other manufacturers to ‘get on board’ by embracing a new type of interface connector or protocol to make it an industry standard. Every new standard they adopt only has to become an Apple standard, going beyond that is a bonus…

What’s this got to do with iCloud?

As part of the ‘entire widget’ concept, iCloud has been coded into the operating systems of all Apple’s devices since 2011; it is literally part of the Apple DNA, and integrates so seamlessly that you can turn on whatever parts of it you want to use and forget it’s there.

iCloud’s problem, however, is that it has so many aspects it’s hard to know which ones you want to use, or even if they’re relevant to your device.

So let’s take a closer look at each aspect of iCloud, what devices it applies to, and when and how you would or should use it.

iCloud

Think of the term iCloud as the name Apple calls its on-line storage, backup and synchronisation service. According to Apple, iCloud “securely stores your photos, files, notes, passwords and other data in the cloud and keeps it up to date across all your devices automatically…” It goes on to say, “You can also back up your iPhone or iPad using iCloud.”

iCloud is available in two versions. The free version, simply called iCloud, provides up to 5GB of storage – this includes personal details (passwords, fingerprints, facial recognition, and similar things that teach the device about you and allow you to log onto numerous devices with ease), along with backups of iOS devices (iPhone, iPad), photos, and app data (Contacts, Notes, Reminders, etc.).

The second is the paid version, iCloud+, which offers paid storage from 50GB up to 12TB. I’m using the 2TB package ($9.99 USD per month), which has more than enough storage to a) synchronise contents between my MacBook Air, The iMortal, my iPhone and The Recently Deceased (before it became The Recently Deceased), b) provide about 1TB of drag-and-drop cloud storage, and c) store backups of my iPhone and iPad. I haven’t deleted the last backup of The Recently Deceased to save space yet because, like an eternally optimistic mourner, I’m still hoping to hear a knock on the coffin lid. Drag-and-drop dead…

The pic above is the opening screen of my iCloud account, accessed by clicking on the Apple menu, selecting ‘System Settings…’ and scrolling down to iCloud (as highlighted in blue in the lower left corner). We can see that I have an iCloud+ account allowing up to 2TB of storage, and the green horizontal bar shows that 687.2GB of that 2TB is currently being used. Below that we can see how some of that storage is being used – sometimes shown as numbers of items, and other times as memory values.

The Photos app is storing 8,978 photos, there is 432.1MB of files stored in iCloud Drive, I have allowed all of my passwords to be stored in iCloud, there are 985 items stored from the Notes app, a paltry 2.4kB from the Messages app, and 24.6kB from the Mail app. Everything shown here can be accessed from any of my Apple devices at any time, as long as I am logged in to my Apple account. If I take a photo with my iPhone, it gets stored in my iCloud account and, within seconds, I can view it on any or all of my Apple devices simultaneously. I can be working on something in the Notes app on my MacBook Air, go out for a coffee, and continue working on that same note via my iPhone. All of those files and images are stored on my iCloud drive, with simple thumbnails stored on my MacBook Air. I can tell each of my devices which of those 8,978 photos I want it to show, without deleting those images from iCloud.

iCloud Drive

Consider iCloud Drive to be iCloud’s fundamental storage system. It’s connected to all devices that share the same Apple ID and have iCloud enabled. You can drag-and-drop files on and off iCloud Drive (e.g. for long term storage or to clear storage space on your internal SSD) from any connected devices, via the Finder on macOS devices or via the Files app on iOS devices. Because the iCloud interface is built into the operating system, you don’t need to open a special app, you don’t need to log on to it, and you don’t have to go through any arcane ‘download and unzip’ routines when you want to move files from iCloud Drive back onto your device. It’s just another volume (i.e. drive) on your desktop, with the same window and file structure as everything else. Simply drag files across and/or back again, just as you would with any other storage device (SSD, HDD, SD card, etc.), except it’s accessed via the internet rather than a USB port or internal connection.

The illustration above breaks down how the 2TB of my iCloud+ account is being used. There’s 687.2GB currently in use; 21% is iCloud Photos, 13% is backups of iOS devices (iPhone 8 and The Recently Deceased), and there’s a few token ‘placeholder registers’ taking up 0.1% each. There’s 1.33TB remaining of the 2TB available; I could probably increase that further by removing the iOS backups that collectively account for 13%. More about that later…

iCloud Synchronisation

I am writing this article in Apple’s Pages word processor. I have iCloud Synchronisation enabled on my MacBook Air (as outlined in red in the illustration below) and underneath that we can see that I’ve allowed iCloud to sync my Desktop and my Documents folders.

This Pages document that I’m currently writing and you’re currently reading appears to be stored in a folder on my Desktop called ‘From Intel to Apple Silicon’. It looks and feels like it is in a folder on my MacBook Air, I can move it from one folder to another, duplicate it, delete it, rename it and so on, but what’s actually in the folder is just a thumbnail (or ‘alias’ as Apple used to call it) that links to where this document is actually stored – on iCloud.

Double-clicking to open it will download the document from iCloud into my MacBook Air’s storage and/or Unified Memory, and it will be ready to go in about the same time it would take to open the same file stored on an internal spinning hard disk (assuming I have a half decent wifi connection). When I select Save from the File menu, the changes are saved to the document in the Unified Memory, and the revised document is sent back to iCloud in the background as I continue working.

Although there is only one version of the document – the one on iCloud – I can open and edit it on any or all of my devices (MacBook Air, MacBook Pro, iPad, iPhone) that have the Pages app installed. Each device’s Pages app knows that the document exists on iCloud, and if I move myself from one device to another, all I have to do is choose ‘Open Recent…’ from the File menu and that document (along with others I’ve recently been working on) will appear in a menu for me to open and work on. It appears as a separate file on each device, but there’s no duplication and no concerns over whether or not I’ve opened the latest version. There is only one version…

The Pages word processor app is part of Apple’s iWork productivity suite that also includes Numbers (spreadsheet) and Keynote (presentation) – all free downloads (if not already pre-installed) from the App Store with any macOS or iOS device purchase. Each of these apps – along with other fundamentals such as Contacts, Calendar, Notes, Reminders and so on – offers iCloud Synchronisation, ensuring consistent information across all of my Apple devices. With iCloud Synchronisation enabled, I can put an appointment in my Calendar on my iPhone and it will automagically be visible on the Calendar app of all my devices.

The illustration above shows which apps are currently using iCloud Synchronisation. From here I can choose which apps’ documents I want to store on iCloud, and which apps’ documents I want to keep on my MacBook Air’s internal storage. If I know I’m going somewhere without reliable wifi, perhaps a long train ride through the country, I can choose to keep whatever documents I need on the MacBook Air, so they’re available regardless of whether or not I’ve got an internet connection. After reaching civilisation I can tell the latest version of each document to return to iCloud Synchronisation.

The document I’m writing now and you’re reading now currently has a size of 490kB when open, but when not in use it consumes only a few kB on my MacBook Air. The full 490kB document is stored in a lossless data compression format on iCloud, and only transferred to my MacBook Air (or other device) when I want to work on it. This allows me to consider iCloud as long-term storage, and the MacBook Air’s internal storage and Unified Memory as Pages’ workspace. The best example of this approach, however, is found in…

iCloud Photos

This is the same as the synchronisation mentioned above, but specialised for your photo collection – I can see all of my photos on every device, but in Gallery views they are smaller thumbnails for reference. It’s worth mentioning, however, that the compression scheme Apple uses to make the thumbnails is very impressive; there’s no need to click open the full resolution image to make a quick check for any obvious issues with the composition or colour. For example, I can create an album, load a selection of photos in it, and a simple glance at the album in Gallery view (thumbnails) will show me if an image is too dark, too light, too saturated, or otherwise out of place with the other images.

The full resolution version of an image will be downloaded when the low res version is clicked on – at which point Apple’s QoS kicks in and it feels as though the machine has my complete attention. As I swipe through the images at normal size on my MacBook Air’s 13-inch screen, I barely have time to notice the upscaled thumbnail’s lower resolution before the full resolution image has downloaded and snapped into place. Swipe, blink, it’s there…

I can take a photo with my iPhone or any other device in the Apple ecosystem, and edit it on my MacBook Air or any other device in the Apple ecosystem, and I’m always working on the same file – unless I intentionally create a duplicate to test a dubious idea or similar.

My iCloud account currently holds 8978 images/videos and consumes 423GB of my iCloud storage. Although all of those images can be seen on all of my devices, I only have 2275 images and 231 videos visible in my MacBook Air’s Photos library at present (the rest I have told iCloud Photos not to display on this device). How much of my MacBook Air’s internal storage do those 2275 images and 231 videos consume? 2.96GB, as shown above…

iCloud Backup

This is available for iOS and iPadOS devices, and offers what it says – a backup of your mobile device. The backup process is usually fast and the resulting file can be surprisingly smaller than expected if you’re already using iCloud Synchronisation, primarily because most of the device’s contents are already on iCloud (photos, files, etc.) or on Apple’s server (e.g. apps, OS, etc.); so the backup only needs to contain things specific to the device and its user – such as preferred settings, files that are not on iCloud, etc.

You cannot poke around inside this type of backup to extract any specific files; it is primarily a snapshot backup in the event that you lose your device and/or want to transfer its contents to a replacement device. As mentioned at the start of the companion piece, I used a loaner iPad to transfer The Recently Deceased’s backup onto. This allowed me to access what was there. Unfortunately, prior to The Recently Deceased’s tragic end, I was not using iCloud Synchronisation for much of my work-in-progress; it was all on The Recently Deceased’s internal storage. If I had been using iCloud Synchronisation I could’ve opened those files from any of my other devices via iCloud with the appropriate apps. Instead, I had to restore the iCloud Backup to a borrowed iPad, find the files, and AirDrop them to The iMortal.

iCloud Backup’s restoration process is very simple. For example, if replacing a lost phone with a new phone, the first thing the new phone will ask upon power up is whether you are starting fresh or if you want to restore from a backup – in the latter case, once you’ve established an internet connection and entered your Apple ID, all you have to do is wait as iCloud Backup does its thing. When finished, your new device will have the same contents and configuration as your lost device. You may need to do some tests for fingerprint recognition or facial recognition if the new phone has features or tech your old phone didn’t have, but that’s about it.

The illustration above tells us that 13% of the available 2TB contains iCloud Backups; one will be for my phone, and one will be for The Recently Deceased. If we look towards the bottom left side of the image we see an icon labelled ‘Backups’, and in this case it totals 263.3GB of storage – which is 12.85% of 2048GB. Close enough to 13%…
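That ‘close enough’ is easy to verify. Purely as an illustration (this is throwaway arithmetic, not anything Apple provides), the figures quoted above work out like this:

```python
# Sanity-checking the backup figures quoted above.
total_gb = 2048      # the 2TB iCloud+ plan
backups_gb = 263.3   # combined size of the two iCloud Backups

percent = backups_gb / total_gb * 100
print(f"{percent:.2f}%")   # → 12.86%, which Storage rounds up to the 13% shown
```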

[It should come as no surprise that one of the first things I did after setting up the new MacBook Air (i.e. remove unwanted apps, install wanted apps, etc.) was enable iCloud Synchronisation. Why? Because the initial syncing process – although a background operation – can take a long time when there’s a lot of data to sync (as I learnt with The iMortal, see ‘User Error’ in the companion article). It’s best to start when there’s not much data on board (as with a new computer) and when there’s a good wifi connection. Leave it overnight and, voila! From then on, it’s updating files as you use them…]

Incremental Backups: A Caveat

File synchronisation, as offered by iCloud, is not like making a Time Machine backup or making a spare copy of a document ‘just in case’, because iCloud does not offer an incremental backup function where there is always an unaltered ‘original’ version to go back to in case something goes wrong. If you can’t get back to an earlier version with repeated presses of the Undo key command, it means the earlier version is gone. That’s what Time Machine backup is for on macOS. With iCloud synchronisation there is only the latest version you worked on and whatever Undos the app or its metadata retains.

The document I’m typing into now is named ‘Intel to M-series 15’ (that name will have been changed to something like ‘From Intel To Apple Silicon’ by the time you’re reading this), where ’15’ is the incremental version number. This means there are currently 15 versions of this document, from 01 to 15. Each represents a point where I was happy with what I had written so far but I wanted to try a structural change and I was not sure if it was going to work. This process of keeping numbered backups as I go is known as making incremental backups, and as long as I keep those backups I’ll have an audit trail that can be followed all the way back to the first version of this document.

If I now decided to make some structural changes that might lead me to a point of no return – such as adding this text explaining the value of incremental backups – I’d create a new copy of this file called ‘Intel to M-series 16’ – incrementing the version number from 15 to 16. ‘Intel to M-series 16’ is the version I would continue working on, knowing that if the stuff I’m adding about incremental backups doesn’t work, I can delete the file and return to ‘Intel to M-series 15’ – which has remained unaffected.

The classic use of incremental backups in audio engineering is to duplicate the session file and increment its version number after each milestone of the recording, overdubbing and mixing stages has been reached, such as before making a difficult drop-in, before adding automation, or before doing anything else complicated or experimental where you know that if something goes wrong it will be smarter to go back to the earlier version and start again, rather than trying to fix it. iCloud Synchronisation is not going to be helpful if you want to go back to an earlier version: it’s focused on going forwards by keeping the latest versions of everything accessible to all of your devices.

iCloud Summary

It’s important to remember that iCloud is coded into the operating systems of Apple’s devices. As with Apple Intelligence, it doesn’t require you to log in and out of a third-party service on the web, adapt to a different GUI, transfer documents and images back and forth between a web service and your apps, and so on. It’s an inherent part of the system.

My iCloud+ account gives me 2TB of storage for $9.99 per month, and it’s useful and active storage that all of my devices can access, edit, add, remove and update. I could go to a local cafe for a break, get a burst of writing inspiration, open this document on my iPhone and update it. This is in stark contrast to the 40TB of inactive HDD storage that The iMortal is slowly working through. That information can only be accessed by The iMortal. The 40TB is riddled with safety copies, duplicates and incremental backups, and the sorting process is like reducing the Great Pyramid of Giza into powder with a nail file…

As we can see from the image below, iCloud Drive consumes only 27.7MB of my MacBook Air’s storage. This essentially represents the amount of my MacBook Air’s internal storage being used by iCloud Drive to manage synchronising my files, excluding the 2.96GB currently consumed by iCloud Photos’ thumbnails. (There’s more to it than that simple explanation, but 27.7MB is hardly worth worrying about when the total internal storage is 256GB.) I consider it the space sacrificed on my MacBook Air’s internal storage that allows it to access the 2TB of iCloud+ storage that’s shared among all of my devices.

BRINGING IT ALL TOGETHER

The companion article describes how I bought the cheapest Mac available, the base model MacBook Air with 16GB of Unified Memory and 256GB of internal storage at $999 USD, and by spending an extra $100 USD on a Crucial X9 Pro SSD ended up with a total storage capacity of 1.28TB (256GB internal + 1024GB external).

Adding iCloud+ at $9.99 USD per month means my MacBook Air has a potential total of 256GB + 1024GB + 2048GB = 3,328GB, or 3.328TB of storage. Allowing room for necessities such as macOS, Apple Intelligence, iCloud Photos thumbnails (all 2506 of them, photos and videos combined) and apps, the MacBook Air currently has about 184GB of available space in its internal storage. Likewise, allowing for iCloud Photos, iCloud Backups of my iPhone and of The Recently Deceased, and iCloud Synchronising of my Desktop and Documents folders, my iCloud+ account is down to about 1.33TB of free space, which is shared among all of my devices. So, in its fully operational state, ready to tackle the audio, video, photographic and writing work that I do, my MacBook Air system currently has 184GB + 1024GB + 1362GB = 2,570GB (approx 2.51TB) of storage available. Ignoring the sunk costs of the MacBook Air and the Crucial external SSD, it costs me $9.99 USD per month to retain the iCloud element of that storage, but that $9.99 USD also brings with it the benefits of iCloud Synchronising, iCloud Backup, iCloud Photos and iCloud Drive across all of my devices.
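For readers who like to see the sums laid out, here’s the same storage arithmetic as a quick sketch (illustrative only, using the figures quoted above – your own ‘available’ numbers will differ):

```python
# Recreating the storage totals from the paragraph above.
internal_gb = 256    # MacBook Air internal storage
external_gb = 1024   # Crucial X9 Pro SSD
icloud_gb = 2048     # iCloud+ 2TB plan

total_gb = internal_gb + external_gb + icloud_gb
print(total_gb)                 # 3328 GB potential total, i.e. 3.328 TB

# The 'fully operational' figures, after macOS, apps, thumbnails
# and backups have taken their share (approximate, as quoted):
available_gb = 184 + 1024 + 1362
print(round(available_gb / 1024, 2))   # ≈ 2.51 TB actually available
```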

Furthermore, because the free space on iCloud Drive appears just like any other open folder window on the desktop and it can be accessed by all of my devices, I can transfer a pile of videos and/or sound files from The iMortal, and/or from two different SD cards at once (one from my camera, one from my field recorder) via the CreateMate on the MacBook Air, directly into iCloud Drive’s unused space, then pull the files into the MacBook Air one at a time to preview and tidy up as discussed in ‘Do It Clean’ in the companion article – ensuring only the segments I’m going to use (plus handles) make their way on to the MacBook Air and into the project folder. The main transfers in and out of iCloud could take a while depending on the size of the files and the internet speed, but that can be planned around – such as downloading the next file in the background while working on the current one. Waiting aside, such transfers in and out of iCloud Drive are simply a drag-and-drop operation like anything else on macOS.

With iCloud+ and the Crucial SSD, I honestly cannot see a need for more than 256GB internal storage on my MacBook Air, and I now completely understand what the woman behind the counter at the Apple Reseller meant when she told me their biggest selling item was the base model M4 MacBook Air with 256GB of internal storage (as I had just bought) coupled with an iCloud+ account.

A Small Fly In The Ointment

However, I have had to change one working habit, which is actually a return to an older approach. Well-seasoned readers will remember a mantra from the early days of DAWs – ‘always work from an external drive’ – and that’s the approach I’ve returned to. Why? Because I’m using iCloud Synchronisation and I keep forgetting (read: can’t be bothered) to turn it off at the start of each session and turn it on again at the end. Because it is always on, if I copy a session folder onto my MacBook Air’s internal storage, iCloud Synchronisation will dutifully go about transferring that session file and all related files onto iCloud, and replace those large space-consuming audio and video files with iCloud thumbnails (or, to use the older but more appropriate Apple term, ‘aliases’). So when I open the session, I have to wait for those files to download into the MacBook Air’s Unified Memory and/or internal storage before I can run the session. This can be a major PITA when working with long (or lots of short) 4K video footage – especially if the client is sitting there waiting with you.

To avoid this little fly in an otherwise all-round excellent ointment, I have reverted to the old approach of working directly off the SSD. It’s proven fast enough so far, and if for some reason that changes in the future (perhaps a very large job with lots of large files), all I have to do is stop being lazy and turn iCloud Synchronisation off before copying the session into the MacBook Air’s internal storage.

Regardless of whether I’m working directly off the SSD or turning iCloud Synchronisation off at the start and back on at the end, the last thing I do after closing the session is copy the entire folder onto the MacBook Air’s internal storage, knowing iCloud Synchronisation will get busy creating what is essentially a backup – or, if there’s not enough space, I can drag the session directly onto iCloud Drive instead. In either case, there’ll be an up-to-date backup on iCloud, a corporate cloud system belonging to one of the richest companies in the world. It’s one of the safest places for it to be. As I wrote long ago in ‘The Internet Is Fast Enough, Dude‘: “Any event big enough to cause those corporations to lose my data, without advanced warning, will probably be so catastrophic that losing my data will be the least of my worries. Nuclear war, alien invasion, zombie apocalypse, the closing scene of Fight Club. Get the idea?”

MOVE WHILE YOU STILL CAN…

As mentioned earlier, macOS Tahoe (released on September 15th) is the last version of macOS to support Intel-based machines. After that, your options are as outlined earlier in this article. Remember the genesis of the new mantra as explained in the companion article: “Technology moves through society like a steamroller. If you’re not part of the machine, you’re part of the road.”

If you choose to be part of the machine – and thereby stay up to date with all of the latest apps and the new features they’ll be adding due to Apple Intelligence, along with the benefits of iCloud – then right now is the time to ditch that soon-to-be-vintage Intel-based machine and get on-board with the M-series Macs. I hope these two companion articles have provided helpful information for upgrading to an M-series Mac – assuming you’ve managed to read through them both. Most readers no longer need the fastest or most powerful computer they can buy to do their audio work, they just need the most appropriate.

My personal example, which resulted in choosing the base model MacBook Air, is simple and affordable. Your needs may be bigger than mine, and/or your comfort zone may be smaller, but the thinking behind the upgrade is scalable. Consider your needs, choose the machine that suits them best, get off the road and get on board the machine…

INCENTIVES & OVER-CAPITALISING

Apple’s website cleverly incentivises memory and storage upgrades. Let’s use the base model 13-inch MacBook Air as an example, in keeping with other examples used throughout this and the companion article https://www.audiotechnology.com/tutorials/from-intel-to-apple-silicon. It’s the cheapest MacBook on the market at present, so whatever we find here can be scaled up appropriately to the more powerful machines.

Moving down the ‘Buy’ page for the base model 13-inch MacBook Air, the first option we see is to upgrade the M4 chip from eight GPU cores to ten. This costs $100 USD, and might appeal to those doing visual work (video editing, image creation, 3D rendering), which is where those extra two GPU cores could save some refresh and/or rendering time. They might also speed up zooming in and out of Rx’s spectrogram display, but there are already eight cores so I’m not sure the extra two would be significant on what is essentially a two-dimensional image.

Further down are the options to upgrade the Unified Memory. The base model has 16GB, but there are upgrades to take it to 24GB (+ $200 USD) or 32GB (+ $400 USD). These Unified Memory upgrades will appeal to people who have to deal with multiple large media files, such as video editors and audio people who use lots of tracks. If you choose either of these Unified Memory upgrades you automatically get the upgraded M4 chip mentioned above as part of the package.

Below those are the options to upgrade the internal storage. The base model has 256GB of internal storage but there are upgrades to take it to 512GB (+ $200 USD), 1TB (+ $400 USD) and 2TB (+ $800 USD). These storage upgrades will appeal to people who have a lot of different sessions or projects on the go, where it is inconvenient or time consuming to regularly transfer between internal and external storage. As with the Unified Memory upgrades described above, if the storage is upgraded beyond the base model’s 256GB you also get the M4 upgrade (two extra GPU cores) described above and you also get to choose one of two charger upgrades: a 35W dual charger or a 70W ‘fast charge’ model. The dual charger will appeal to those who want to charge their MacBook Air and a phone from the same charger, while the 70W charger will appeal to users working in situations where the MacBook Air cannot be connected to a charger for long periods of time and therefore need fast charging.

The above ‘free’ upgrade incentives provide a good package for anyone on a tight budget doing complex visual work, animations, editing large video files or working with lots of tracks on a DAW (i.e. all people who should be using a MacBook Pro), but we have to keep the package in perspective to see where the value is.

For example, taking the Unified Memory upgrade to 24GB for an extra $200 USD is equivalent to paying $100 USD for the upgraded M4 chip, and $100 USD for the additional 8GB of Unified Memory. It works well as a package, but you can’t say “I’ll pay $100 USD for the extra 8GB of Unified Memory but I don’t need the upgraded M4 chip so can I have the remaining $100 USD refunded?”

Similarly, taking the storage up to 512GB for an extra $200 USD is equivalent to paying $100 USD for the upgraded M4 chip, $20 USD for either the 35W dual charger or the 70W fast charger, and $80 USD for the extra 256GB of storage. Again, it works well as a package but you cannot say “I’ll pay $80 USD for the extra 256GB of storage but I don’t need the upgraded M4 chip and I don’t need the upgraded charger, so can I have the other $120 USD refunded?”
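The way to read these bundles, then, is to subtract the standalone prices of the parts you’d get anyway. A quick sketch of that decomposition (using the USD prices quoted in this article; Apple obviously publishes no such breakdown itself):

```python
# Decomposing Apple's upgrade bundles into their implied standalone parts.
m4_upgrade = 100        # 10 GPU cores instead of 8, priced separately
charger = 20            # 35W dual or 70W fast charger, priced separately

memory_package = 200    # 16GB -> 24GB Unified Memory bundle
storage_package = 200   # 256GB -> 512GB storage bundle

# What you're implicitly paying for the memory/storage itself:
extra_memory_cost = memory_package - m4_upgrade             # 100 for +8GB
extra_storage_cost = storage_package - m4_upgrade - charger  # 80 for +256GB
print(extra_memory_cost, extra_storage_cost)   # 100 80
```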

These are incentive packages. We know from elsewhere on Apple’s website that we can buy the upgraded M4 chip with 10 GPU cores instead of eight for an additional $100 USD, and we also know that we can buy the 35W dual charger or the 70W fast charger for an additional $20 USD each. We can also get them bundled in an incentive package, but we can’t get them bundled out.

We also have to be careful not to over-capitalise our purchase. For example, a 13-inch MacBook Air optioned up with 24GB of Unified Memory and 1TB of storage costs $1599 USD. It’s the same price as the base model 14-inch MacBook Pro, which comes with the MacBook Air’s ‘upgraded’ M4 chip as standard. At that price point the MacBook Pro only has 16GB of Unified Memory and 512GB of storage (half of the equivalently priced MacBook Air), but it comes with a larger and higher quality display, three Thunderbolt 4 ports (one more than the MacBook Air), an HDMI port and an SDXC card slot (neither available on the MacBook Air), cooling fans so it can run faster for longer, and the 70W fast charger. It’s a bigger, heavier and superior machine all around, but which is the better choice?

For someone like me, who needs to sort through decades of (mostly) direct-to-stereo field recordings accompanied by thousands of photos and numerous snippets of video footage, the MacBook Air with 1TB of storage is the better option. I don’t need a lot of processing power, but that 1TB of storage would allow me to put everything related to a single expedition (including safety copies and incremental backups) into one folder where I could sort through it all without switching or transferring between drives. However, for someone who is working on video production side-by-side with their client and is charging by the hour, the MacBook Pro is the obvious choice – its cooling fans allow it to run faster for longer, speeding up long video renders significantly (some good insights on this are given by YouTuber Evan Ranft here).

The post From Intel to Apple Silicon appeared first on AudioTechnology.
