Why E cores make Apple silicon fast

152 points by ingve | Feb 8, 2026 | 166 comments
View original (eclecticlight.co)

Summary

The impressive performance of Apple Silicon Macs is often attributed to the performance cores, but the efficiency cores play a crucial role by handling background work, keeping the P cores free for user applications. This architecture, a modern implementation of ARM's big.LITTLE, uses Quality of Service (QoS) to allocate threads intelligently: foreground work is preferentially assigned to P cores (and sometimes E cores), while background work is confined to the E cores so it does not affect the user experience or battery life, a significant improvement over the earlier Intel Macs.

Comments (172)

tyleo 6 hours ago
These processors are good all around. The P cores kick butt too.

I ran a performance test back in October comparing M4 laptops against high-end Windows desktops, and the results showed the M-series chips coming out on top.

https://www.tyleo.com/blog/compiler-performance-on-2025-devi...

murderfs 5 hours ago
This is likely more of a Windows filesystem benchmark than anything else: there are fundamental restrictions on how fast file access can be on Windows due to filesystem filter drivers. I would bet that if you tried again with Linux (or even in WSL2, as long as you stay in the WSL filesystem image), you'd see significantly improved results.
cubefox 5 hours ago
Here is a more recent comparison with Intel's new Panther Lake chips: https://www.tomsguide.com/computing/cpus/panther-lake-is-int...
etrvic 5 hours ago
From your article it seems like you benchmarked compile times. I am not an expert on the subject, but I don't see the point in comparing ARM compilation times with Intel. There are probably different tricks involved in compilation and the instruction sets are not the same.
wpm 55 minutes ago
My M4 mini is probably the fastest computer/watt in my home. And it was the cheapest.

Not even a bad little gaming machine on the rare occasion

roomey 5 hours ago
Genuine question: when people talk about Apple silicon being fast, is the comparison to Windows Intel laptops, or to the Intel Mac architecture?

Because when running a Linux Intel laptop, even with CrowdStrike and a LOT of corporate-ware, there is no slowness.

When blogs talk about "fast" like this I always assumed it was for heavy lifting, such as video editing or AI stuff, not just day to day regular stuff.

I'm confused, is there a speed difference in day to day corporate work between new Macs and new Linux laptops?

Thank you

newsclues 5 hours ago
Power management with Macs is the big benefit, imo.

It’s all about the perf per watt.

cj 5 hours ago
For me it's things like boot speed: how long it takes to restart the computer, or to log out and log back in with all my apps reopening.

Mac on Intel felt like it was about 2x slower at these basic functions. (I don't have real data points.)

Intel Mac had lag when opening apps. Silicon Mac is instant and always responsive.

No idea how that compares to Linux.

smw 5 hours ago
Apple silicon is very fast per size/watt. The mind-blowing thing is the MacBook Air, which weighs very little, doesn't have a fan, and feels competitive with top-of-the-line desktop PCs.
throwa356262 5 hours ago
First of all, Apple CPUs are not the fastest. In fact, the top 20 fastest CPUs right now are probably an AMD and Intel only affair.

Apple's CPUs are the most power-efficient, however, due to a bunch of design and manufacturing choices.

But to answer your question: yes, Windows 11 with modern security crap feels 2-3x slower than vanilla Linux on the same hardware.

nerdsniper 4 hours ago
I use pretty much all platforms and architectures as my "daily drivers" - x64, Apple Silicon, and ARM Cortex, with various mixtures of Linux/Mac/Windows.

When Apple released Apple Silicon, it was a huge breath of fresh air - suddenly the web became snappy again! And the battery lasted forever! Software has since bloated enough to slow MacBooks down again, RAM can often be a major limiting factor in performance, and battery life is more variable now.

Intel is finally catching up to Apple for the first time since 2020. Panther Lake is very competitive on everything except single-core performance (including battery life). Panther Lake CPUs arguably have better features as well - Intel QSV is great if you compile ffmpeg to use it for encoding, and it's easier to use local AI models with OpenVINO than it is to figure out how to use Apple's NPUs. Intel has better tools for sampling/tracing performance analysis, and you can actually see how much you're loading the iGPU (which is quite performant) and how much VRAM you're using. Last I looked, there was still no way to actually check whether an AI model was running on Apple's CPU, GPU, or NPU. The iGPUs can also be configured to use varying amounts of system RAM - I'm not sure how that compares to Apple's unified memory for effective VRAM, and Apple has higher memory bandwidth/lower latency.

I'm not saying that Intel has matched Apple, but it's competitive in the latest generation.

rngfnby 4 hours ago
New Mac ARM user here.

Replaced a good Windows machine (Ryzen 5? 32 GB), and I have a late Intel Mac and a Linux workstation (6-core Ryzen 5, 32 GB).

Obviously the Mac is newer. But wow. It's faster even on things where the CPU shouldn't matter, like going through a remote Samba mount over our corporate VPN.

- Much faster than my intel Mac

- Faster than my Windows

- Haven't noticed any improvements over my Linux machines, but with my current job I no longer get to use them much for desktop (unfortunately).

Of course, while I love my Debian setup, boot up is long on my workstation; screensaver/sleep/wake up is a nightmare on my entertainment box (my fault, but common!). The Mac just sleeps/wakes up with no problems.

The Mac (smallest Air) is also by far the best laptop I've ever had from a mobility POV. Immediate start-up, long battery, decent enough keyboard (though I'd rather sacrifice a bit for a longer keypress).

qoez 4 hours ago
I love Apple and mainly use one for personal use, but Apple users consistently overrate how fast their machines are. I used to see sentiment like "how will Nvidia ever catch up with Apple's unified silicon approach" a few years ago. But if you just try Nvidia vs Apple and compare on a per-dollar level, Nvidia is so obviously the winner.
dangus 4 hours ago
I think you should spend some time looking at actual laptop review coverage before asking questions like this.

There are dozens of outlets out there that run synthetic and real world benchmarks that answer these questions.

Apple’s chips are very strong on creative tasks like video transcoding, they have the best single core performance as well as strong multi-core performance. They also have top tier power efficiency, battery life, and quiet operation, which is a lot of what people look for when doing corporate tasks.

Depending on the chip model, the graphics performance is impressive for the power draw, but you can get better integrated graphics from Intel Panther Lake, and you can get better dedicated class graphics from Nvidia.

Some outlets like Just Josh tech on YouTube are good at demonstrating these differences.

lrem 3 hours ago
You can notice that memory bandwidth advantage even in workloads like photo editing and code compilation. That, and the performance cores reserved for foreground compute, on top of the usual "Linux sucks at swap" (was it fixed? I haven't enabled swap on my Linux machines for ages now), does make a day-to-day difference in my usage.
irae 3 hours ago
I've used Linux as a daily driver for 6 months and I am now back to my M1 Max for the past month.

I didn't find any reply mentioning the ease of use, benefits, and handy things the Mac does that Linux won't: Spotlight, the Photos app with all the face recognition and general image indexing, contact sync, etc. It takes ages to set those up on Linux, while on Macs everything just works with an Apple account. So I wonder, if Linux had to do all this background stuff, whether it would be able to run as smoothly as Macs do these days.

For context: I was running Linux for 6 months for the first time in 10 years (during which I had been daily-driving Macs). My M1 Max still beats my full-tower gaming PC, which I was running Linux on. I've used Windows and Linux before, and Windows for gaming too. My Linux setup was very snappy without any corporate stuff, but my office was getting warm because of the PC. My M1 barely turns on the fans, even with large DB migrations and other heavy operations during software development.

bluedino 3 hours ago
Somehow my 2011 MacBook Pro was the fastest laptop I had ever used.

After I put an SSD in it, that is.

I wonder what my Apple silicon laptop is even doing sometimes.

testdelacc1 2 hours ago
I haven’t used a laptop other than a mac in 10 years. I remember being extremely frustrated with the Intel macs. What I hated most was getting into video meetings, which would make the Intel CPU sound like a 747 taxiing.

The switch from a top-spec, new Intel Mac to a base-model M1 MacBook Air was like a breath of fresh air. I still use that 5-year-old laptop happily because it was such a leap forward in performance. I don't recall ever being happy with a 5-year-old device before.

ricardobeat 5 hours ago
> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news

Not when one of those decides to wreak havoc - Spotlight indexing issues slowly eating away your disk space, iCloud sync spinning over and over and hanging any app that tries to read your Documents folder, Photos sync pegging all cores at 100%… it feels like things might be getting a little out of hand. How can anyone model/predict system behaviour with so many moving parts?

fragmede 5 hours ago
And if it paid off, that would almost be acceptable! But no. After Spotlight has indexed my /Applications folder, when I hit command-spacebar and type "preview.app", it takes ~4 seconds on my M4 laptop to search the SQLite database for it and return that entry.

grumble

hmokiguess 4 hours ago
For me it's iMessage: it gets out of sync way too often and then it eats the CPU away
LtdJorge 3 hours ago
Sounds like typical Windows experience
lrem 3 hours ago
It's slowly approaching what SRE has been dealing with for distributed systems... You just have to accept things won't be fully understood and whip out your statistical tooling, it's ok. And if they get the engineering right, you might still keep your low latency corner where only an understandable set of things are allowed.
twoodfin 3 hours ago
My pet peeve with the modern macOS architecture & its 600 coordinating processes & Grand Central Dispatch work queues is debuggability.

Fifteen years ago, if an application started spinning or mail stopped coming in, you could open up Console.app and have reasonable confidence the app in question would have logged an easy-to-tag error diagnostic. This was how the plague of mysterious DNS resolution issues got tied to the half-baked discoveryd so quickly.

Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.

amelius 5 hours ago
That's just framing. A different wording could be: by moving more work to slow (but power efficient) cores, the other cores (let's call them performance cores) are free to do other stuff.
drob518 4 hours ago
Does anyone have any insight into the MacOS scheduler and the algorithm it uses to place threads on E vs. P cores? Is it as simple as noting whether a thread was last suspended blocking on I/O or for a time slice timeout and mapping I/O blockers to E cores and time slice blockers to P cores? Or does the programmer indicate a static mapping at thread creation? I write code on a Mac all the time, but I use Clojure and all the low level OS decisions are opaque to me.
sys_64738 4 hours ago
The article mentions P or E is generally decided by whether it's a "background" process (whatever that means). Possibly some (undocumented) designation in code, or a directive to the compiler of the binary, decides this at compile time.
masklinn 4 hours ago
The baseline is static: low QoS tasks are dispatched to the E cores, while high QoS tasks are dispatched to P cores. IIRC high QoS tasks can migrate to the E cores if all P cores are loaded, but my understanding is that the lowest QoS tasks (background) never get promoted to P cores.
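As a rough illustration of that QoS split, here is a minimal sketch using the C-level GCD API (assumes macOS with Apple clang; the file name qos_demo.c and the printed strings are made up). Work submitted at background QoS is eligible for the E cores only, while user-initiated work is preferred on the P cores:

```c
// qos_demo.c -- hinting thread placement to the scheduler via GCD QoS classes
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_group_t group = dispatch_group_create();

    // Background QoS: kept on the E cores, never promoted to P cores.
    dispatch_group_async(group,
        dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0), ^{
        printf("indexing-style background work\n");
    });

    // User-initiated QoS: preferred on the P cores, may spill to E cores under load.
    dispatch_group_async(group,
        dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        printf("latency-sensitive foreground work\n");
    });

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}
```

On macOS, clang qos_demo.c -o qos_demo should build this as-is, since blocks support and libdispatch ship with the system toolchain.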
macshome 4 hours ago
Check out the scheduler documentation that Apple has in the xnu repo. https://github.com/apple-oss-distributions/xnu/blob/main/doc...
[deleted comment]
sys_64738 4 hours ago
My M2 MBA doesn't have a fan but literally smokes the majority of Intel systems, which are space heaters this time of year. Those legacy x86 apps don't really exist for the majority of people anymore.
ProllyInfamous 1 hour ago
If you place 1mm thermal pads between the sinks and the case, the CPUs/GPUs won't throttle as readily. At least for my M3 MBA (check your actual clearance).

I replaced a MacPro5,1 with an M2Pro — which uses soooooo much less energy performing similarly mundane tasks (~15x+). Idle is ~25W v. 160W

ksec 4 hours ago
>If you use an Apple silicon Mac I’m sure you have been impressed by its performance.

This article couldn't have come at a better time, because frankly speaking I am not that impressed after I tested Omarchy Linux. Everything was snappy. It is like being back in the DOS or Windows 3.11 era (not quite, but close). It makes me wonder why the Mac couldn't be like that.

Apple Silicon is fast, no doubt about it. It isn't just benchmarks: even under emulation, compiling, or other workloads it is fast, if not the fastest. So there is plenty of evidence it isn't benchmark-specific, despite some people's claims that Apple is only fast on Geekbench. The problem is that macOS is slow, and for whatever reason it hasn't improved much. I am hoping that dropping support for x86 in the next macOS means they have the time and an excuse to do a lot of work on macOS under the hood, especially on OOM handling and paging.

microtonal 4 hours ago
I have a ThinkPad besides my main MacBook. I recently switched to KDE, a full desktop environment, and it is just insane how much faster everything renders than on macOS. And that's on a relatively underpowered integrated Ryzen GPU. Window dragging is butter-smooth on a 120Hz screen, which I cannot say of macOS (though it was outright terrible with the recent Electron issue).

Apple Silicon is awesome and was a game changer when it came out. Still very impressive that they have been able to keep the MacBook Air passively cooled since the first M1. But yeah, macOS is holding it back.

kgeist 3 hours ago
Can't Windows/Linux pin background threads to specific cores on Intel too? So that your foreground app isn't slowed down by all the background activity going on? Or is there something else to it that I don't understand? I thought the E cores' main advantage is that they use less power, which is good for battery life on laptops. But the article makes it sound like the main advantage of Apple Silicon is that it splits foreground/background workloads better. Isn't that something that can already be done without a P/E distinction?
ctrlrsf 3 hours ago
Linux yes, of course.
LtdJorge 3 hours ago
Yes, it's the job of the scheduler
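For the Linux side of the question, marking work as "background" is usually done with scheduling policy rather than core pinning. A minimal sketch, assuming Linux and glibc, of a thread demoting itself with the SCHED_IDLE policy (the nearest analogue to a background QoS class, not an exact equivalent):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    // SCHED_IDLE: only run when no other runnable task wants the CPU.
    // The priority field must be 0 for this policy.
    struct sched_param sp = { .sched_priority = 0 };
    if (sched_setscheduler(0, SCHED_IDLE, &sp) != 0) {  // pid 0 = the caller
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running as idle/background work\n");
    return 0;
}
```

This only tells the scheduler how unimportant the work is; which core it then lands on (efficiency or performance class) remains the scheduler's decision, which is the distinction discussed further below.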
cosmic_cheese 3 hours ago
It’s both.

Multithreading has been more ubiquitous in Mac apps for a long time, thanks to Apple having offered mainstream multi-CPU machines very early on (circa 2000, predating even OS X itself) and having made a point of making multithreading easier in its SDKs. By contrast, multicore machines weren't common in the Windows/x86 world until around the late 2000s with the boom of Intel's Core series CPUs, single-core x86 CPUs persisted for several years after that, and Windows developer culture still hasn't embraced multithreading as fully as its Mac counterpart has.

This then made it dead simple for Mac developers to adopt task prioritization/QoS. Work was already cleanly split into threads, so it's just a matter of specifying which threads are best suited to the E cores and which to keep on the P cores. And overwhelmingly, Mac devs have done that.

So the system scheduler is a good deal more effective than its Windows counterpart because third-party devs have given it cues to guide it. The tasks most impactful to the user's perception of snappiness remain on the P cores, while the E cores stay busy with auxiliary work, keeping the P cores unblocked and able to sleep more quickly and often.
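For raw threads outside of GCD, the same kind of hint can be attached through Apple's pthread QoS extensions. A minimal sketch, assuming macOS and clang (the worker function and its message are illustrative only):

```c
#include <pthread.h>
#include <pthread/qos.h>   // Apple's QoS extensions for raw pthreads
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    // Runs with the QoS class attached via the attributes below, so the
    // scheduler is free to keep this work on the E cores.
    printf("auxiliary work, deprioritized\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Declare the thread's work as background before it is ever scheduled.
    pthread_attr_set_qos_class_np(&attr, QOS_CLASS_BACKGROUND, 0);

    pthread_t tid;
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

A thread can also change its own class later with pthread_set_qos_class_self_np(), e.g. to drop to background priority once the user-visible part of a job is done.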

m132 2 hours ago
It's the combination of the two that yields the best of both worlds.

Android SoCs adopted heterogeneous CPU architectures ("big.LITTLE" in the ARM sphere) years before Apple, and as a result, there have been multiple attempts to tackle this in Linux. The latest, upstream, and perhaps most widely deployed way of efficiently using such processors involves Energy-Aware Scheduling [1]. This allows the kernel to differentiate between performant and efficient cores and schedule work accordingly, avoiding situations in which brief workloads are put on P cores while the demanding ones hog the E cores. Thanks to this, P cores can also be put to sleep when their extra power is not needed, saving energy.

One advantage macOS still has over Linux is that its kernel can tell performance-critical and background workloads apart without guessing. This is beneficial on all sorts of systems, but it particularly shines on heterogeneous ones, allowing unimportant workloads to always occupy E cores and freeing P cores for loads that would benefit from them, or simply letting them sleep for longer. Apple solved this problem by defining a standard interface for user space to communicate such information down [2]. As far as I'm aware, Linux currently lacks an equivalent [3].

Technically, your application can still pin its threads to individual cores, but to know which core is which, it would have to parse information internal to the scheduler. I haven't seen any Linux application that does this.

[1] https://www.kernel.org/doc/html/latest/scheduler/sched-energ...

[2] https://developer.apple.com/library/archive/documentation/Pe...

[3] https://github.com/swiftlang/swift-corelibs-libdispatch?tab=...
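As a concrete version of the pinning fallback mentioned above, a thread can restrict itself to particular CPUs with the Linux affinity API. A minimal sketch; the core numbers are hypothetical, and discovering which IDs correspond to efficiency cores is the separate problem noted in the comment (on arm64 systems the per-CPU cpu_capacity files under /sys/devices/system/cpu are one place to look):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    // Hypothetical layout: assume CPUs 0-3 are the "little"/efficiency cores.
    cpu_set_t little;
    CPU_ZERO(&little);
    for (int cpu = 0; cpu < 4; cpu++)
        CPU_SET(cpu, &little);

    // Confine the calling thread to that set of CPUs.
    int rc = pthread_setaffinity_np(pthread_self(), sizeof(little), &little);
    if (rc != 0) {
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
        return 1;
    }
    printf("calling thread now confined to CPUs 0-3\n");
    return 0;
}
```

The same restriction can be applied to a whole process from outside with taskset -c 0-3 <command>, but either way it is the application, not the kernel, guessing which cores are which.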

3eb7988a1663 2 hours ago
Similarly, are there any modern benchmarks of the performance impact of pinning programs to a core in Linux? Are we talking <1% or something actually notable for a CPU-bound program?

I have read there are some potential security benefits if you keep your most exploitable programs (e.g. the web browser) on their own dedicated core.

anupamchugh 59 minutes ago
Pinning exists, but the interesting part is signal quality: macOS gets consistent "urgency" signals (QoS) from a lot of frameworks/apps, so scheduling on heterogeneous cores involves less guessing than inferring intent from runtime behavior.
saagarjha 3 hours ago
> Admittedly the impression isn’t helped by a dreadful piece of psychology, as those E cores at 100% are probably running at a frequency a quarter of those of P cores shown at the same 100%

It’s about half, actually

> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news

I mean, only if they’re doing something useful

psanford 3 hours ago
I'm curious how asahi linux manages scheduling across e cores and p cores. Has anyone done experiments with this?

Edit: It looks like there was some discussion about this on the Asahi blog 2 years ago[0].

[0]: https://asahilinux.org/2024/01/fedora-asahi-new/

eviks 2 hours ago
> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news, and the more of those that are run on the E cores, the faster our apps will be

This doesn't make sense in a rather fundamental way: there is no way to design a real computer where doing some useless work is better than doing no work; just think about energy consumption and battery life, since these are laptops. Otherwise, that's just resources your current app can't use.

Besides, they aren't that well engineered; bugs exist, linger, and come back, etc., so even if the impact isn't big on average, you can get a few photo-analysis indexing jobs going haywire for a while and get stuck.