Why E cores make Apple silicon fast
Summary
The impressive performance of Apple Silicon Macs is often credited to the performance cores, but the efficiency cores play a crucial role in handling background work, keeping the P cores free for user applications. A modern implementation of ARM's big.LITTLE, the architecture uses Quality of Service (QoS) to assign threads intelligently: foreground tasks are preferentially given to the P cores (and sometimes the E cores), while background tasks are confined to the E cores so they don't affect the user experience or battery life. This is a significant improvement over the earlier Intel Macs.
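As a rough illustration of that mechanism, here is a minimal C sketch, assuming macOS and clang (where libdispatch and blocks are available by default). The queue handles and messages are illustrative, but QOS_CLASS_USER_INITIATED and QOS_CLASS_BACKGROUND are the real libdispatch QoS classes the scheduler consults when deciding whether work may occupy P cores or is confined to E cores.

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        /* Background QoS: confined to E cores, never competing with
           user-facing work. */
        dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
        /* User-initiated QoS: preferentially placed on P cores. */
        dispatch_queue_t fg = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
        dispatch_semaphore_t done = dispatch_semaphore_create(0);

        dispatch_async(bg, ^{
            puts("indexing/sync style work, eligible for E cores only");
            dispatch_semaphore_signal(done);
        });
        dispatch_async(fg, ^{
            puts("user-visible work, preferentially on P cores");
            dispatch_semaphore_signal(done);
        });

        /* Wait for both blocks before exiting. */
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        return 0;
    }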
Comments (172)
I ran a performance test back in October comparing M4 laptops against high-end Windows desktops, and the results showed the M-series chips coming out on top.
https://www.tyleo.com/blog/compiler-performance-on-2025-devi...
It's not even a bad little gaming machine on the rare occasion.
Because when running Linux on an Intel laptop, even with CrowdStrike and a LOT of corporate ware, there is no slowness.
When blogs talk about "fast" like this, I always assume it means heavy lifting such as video editing or AI stuff, not just day-to-day regular stuff.
I'm confused, is there a speed difference in day to day corporate work between new Macs and new Linux laptops?
Thank you
It’s all about the perf per watt.
Mac on Intel felt like it was about 2x slower at these basic functions. (I don't have real data points.)
Intel Mac had lag when opening apps. Silicon Mac is instant and always responsive.
No idea how that compares to Linux.
Apple's CPUs are the most power-efficient, however, due to a bunch of design and manufacturing choices.
But to answer your question: yes, Windows 11 with modern security crap feels 2-3x slower than vanilla Linux on the same hardware.
When Apple released Apple Silicon, it was a huge breath of fresh air - suddenly the web became snappy again! And the battery lasted forever! Software has bloated to slow down MacBooks again, RAM can often be a major limiting factor in performance, and battery life is more variable now.
Intel is finally catching up to Apple for the first time since 2020. Panther Lake is very competitive on everything except single-core performance (including battery life). Panther Lake CPUs arguably have better features as well: Intel QSV is great if you compile ffmpeg to use it for encoding, and it's easier to use local AI models with OpenVINO than it is to figure out how to use the Apple NPUs. Intel has better tools for sampling/tracing performance analysis, and you can actually see how hard you're loading the iGPU (which is quite performant) and how much VRAM you're using. Last I looked, there was still no way to actually check whether an AI model was running on Apple's CPU, GPU, or NPU. The iGPUs can also be configured to use varying amounts of system RAM; I'm not sure how that compares to Apple's unified memory for effective VRAM, and Apple has higher memory bandwidth/lower latency.
I'm not saying that Intel has matched Apple, but it's competitive in the latest generation.
I replaced a good Windows machine (Ryzen 5?, 32 GB), and I also have a late Intel Mac and a Linux workstation (6-core Ryzen 5, 32 GB).
Obviously the Mac is newer. But wow. It's faster even on things where the CPU shouldn't matter, like going through a remote Samba mount over our corporate VPN.
- Much faster than my intel Mac
- Faster than my Windows
- Haven't noticed any improvements over my Linux machines, but with my current job I no longer get to use them much for desktop (unfortunately).
Of course, while I love my Debian setup, boot-up is long on my workstation, and screensaver/sleep/wake-up is a nightmare on my entertainment box (my fault, but c'mon!). The Mac just sleeps and wakes with no problems.
The Mac (smallest Air) is also by far the best laptop I've ever had from a mobility POV: immediate start-up, long battery, decent enough keyboard (but I'd rather sacrifice for a longer keypress).
There are dozens of outlets out there that run synthetic and real world benchmarks that answer these questions.
Apple’s chips are very strong on creative tasks like video transcoding, they have the best single core performance as well as strong multi-core performance. They also have top tier power efficiency, battery life, and quiet operation, which is a lot of what people look for when doing corporate tasks.
Depending on the chip model, the graphics performance is impressive for the power draw, but you can get better integrated graphics from Intel Panther Lake, and you can get better dedicated class graphics from Nvidia.
Some outlets like Just Josh tech on YouTube are good at demonstrating these differences.
I didn't find any reply mentioning the ease of use, the benefits, and the handy things the Mac does that Linux won't: Spotlight, the Photos app with all its face recognition and general image indexing, contact sync, etc. It takes ages to set those up on Linux, while on Macs everything just works with an Apple account. So I wonder, if Linux had to do all this background stuff, whether it would run as smoothly as Macs do these days.
For context: I had been running Linux for 6 months for the first time in 10 years (during which I was daily driving Macs). My M1 Max still beats my full-tower gaming PC, which I was running Linux on. I've used Windows and Linux before, and Windows for gaming too. My Linux setup was very snappy without any corporate stuff, but my office was getting warm because of the PC. My M1 barely turns on the fans, even with large DB migrations and other heavy operations during software development.
After I put an SSD in it, that is.
I wonder what my Apple silicon laptop is even doing sometimes.
The switch from a top-spec, new Intel Mac to a base-model M1 MacBook Air was like a breath of fresh air. I still use that 5-year-old laptop happily because it was such a leap forward in performance. I don't recall ever being happy with a 5-year-old device before.
Not when one of those decides to wreak havoc: Spotlight indexing issues slowly eating away your disk space, iCloud sync spinning over and over and hanging any app that tries to read your Documents folder, Photos sync pegging all cores at 100%… it feels like things might be getting a little out of hand. How can anyone model/predict system behaviour with so many moving parts?
grumble
Fifteen years ago, if an application started spinning or mail stopped coming in, you could open up Console.app and have reasonable confidence the app in question would have logged an easy to tag error diagnostic. This was how the plague of mysterious DNS resolution issues got tied to the half-baked discoveryd so quickly.
Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.
I replaced a MacPro5,1 with an M2 Pro, which uses soooooo much less energy performing similarly mundane tasks (~15x+ less). Idle is ~25 W vs. 160 W.
This article couldn't have come at a better time, because frankly I am not that impressed after testing Omarchy Linux. Everything was snappy; it felt like being back in the DOS or Windows 3.11 era (not quite, but close). It makes me wonder why the Mac couldn't be like that.
Apple Silicon is fast, no doubt about it. It isn't just benchmarks: even under emulation, compiling, or other workloads it is fast, if not the fastest. So there is plenty of evidence it isn't benchmark-specific, despite some people claiming Apple is only fast on Geekbench. The problem is that macOS is slow, and for whatever reason it hasn't improved much. I am hoping that dropping x86 support in the next macOS means they'll have the time and an excuse to do a lot of work on macOS under the hood, especially on OOM handling and paging.
Apple Silicon is awesome and was a game changer when it came out. Still very impressive that they have been able to keep the MacBook Air passively cooled since the first M1. But yeah, macOS is holding it back.
Multithreading has been more ubiquitous in Mac apps for a long time, thanks to Apple having offered mainstream multi-CPU machines very early on (circa 2000, predating even OS X itself) and having made a point of making multithreading easier in its SDKs. By contrast, multicore machines weren't common in the Windows/x86 world until around the late 2000s with the boom of Intel's Core series CPUs, single-core x86 CPUs persisted for several years after that, and Windows developer culture still hasn't embraced multithreading as fully as its Mac counterpart has.
This then made it dead simple for Mac developers to adopt task prioritization/QoS. Work was already cleanly split into threads, so it was just a matter of specifying which threads are best suited to the E cores and which to keep on the P cores. And overwhelmingly, Mac devs have done that.
So the system scheduler is a good deal more effective than its Windows counterpart, because third-party devs have given it cues to guide it. The tasks most impactful to the user's perception of snappiness remain on the P cores, while the E cores stay busy with auxiliary work, keeping the P cores unblocked and able to sleep more quickly and more often.
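For threads created directly rather than through libdispatch, the same cue can be given with Apple's pthread QoS extension. A minimal sketch, assuming macOS; the maintenance_worker function is hypothetical, but pthread_set_qos_class_self_np is the real call a thread uses to mark itself as background work, which the scheduler then keeps on the E cores.

    #include <pthread.h>
    #include <pthread/qos.h>
    #include <stdio.h>

    /* Hypothetical housekeeping thread: tags itself background so the
       scheduler steers it to E cores, leaving P cores for interactive work. */
    static void *maintenance_worker(void *arg) {
        (void)arg;
        pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
        puts("running housekeeping at background QoS");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, maintenance_worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }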
Android SoCs adopted heterogeneous CPU architectures ("big.LITTLE" in the ARM sphere) years before Apple did, and as a result there have been multiple attempts to tackle this in Linux. The latest, upstream, and perhaps most widely deployed way of using such processors efficiently is Energy-Aware Scheduling [1]. It allows the kernel to differentiate between performant and efficient cores and schedule work accordingly, avoiding situations in which brief workloads land on P cores while demanding ones hog E cores. Thanks to this, P cores can also be put to sleep when their extra power is not needed, saving energy.
One advantage macOS still has over Linux is that its kernel can tell performance-critical and background workloads apart without guessing. This is beneficial on all sorts of systems, but it particularly shines on heterogeneous ones, allowing unimportant workloads to always occupy E cores and freeing P cores for loads that would benefit from them, or simply letting them sleep longer. Apple solved this problem by defining a standard interface for user space to communicate such information downward [2]. As far as I'm aware, Linux currently lacks an equivalent [3].
Technically, your application can still pin its threads to individual cores, but to know which core is which, it would have to parse information internal to the scheduler. I haven't seen any Linux application that does this (a sketch of the pinning part follows the links below).
[1] https://www.kernel.org/doc/html/latest/scheduler/sched-energ...
[2] https://developer.apple.com/library/archive/documentation/Pe...
[3] https://github.com/swiftlang/swift-corelibs-libdispatch?tab=...
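A minimal sketch of that pinning, assuming Linux with glibc. Pinning the calling thread to CPU 0 is one sched_setaffinity call; the hard part, as noted above, is that no stable API says whether CPU 0 is an E core or a P core (on ARM systems you would have to peek at scheduler-adjacent files such as /sys/devices/system/cpu/cpu0/cpu_capacity).

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);  /* CPU 0; whether it is an E core is guesswork */
        /* pid 0 means the calling thread. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        puts("pinned to CPU 0");
        return 0;
    }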
I have read that there are some potential security benefits if you keep your most exploitable programs (e.g. a web browser) on their own dedicated core.
It’s about half, actually
> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news
I mean, only if they’re doing something useful
Edit: It looks like there was some discussion about this on the Asahi blog 2 years ago[0].
This doesn't make sense in a rather fundamental way: there is no way to design a real computer where doing useless work is better than doing no work. Just think about energy consumption and battery life, since these are laptops. Or think of those threads as resources your current app can't use.
Besides, those background services aren't that well engineered; bugs exist, linger, and come back. So even if the impact isn't big on average, you can get a few photo-analysis indexing jobs going haywire for a while and get stuck.