Before we get into the details of how we’ve worked to achieve better simulation speeds, I’ll hand it over to Daan for a quick summary of our results.
Straight-to-the-Point Summary
So you want some quick info and conclusions up front? Let me try and summarize it for you!
With All Under Heaven included, the game is about 30% to 40% bigger in terms of playable land and living playable characters. We focused on reducing simulation tick duration to keep the experience as close as possible to CK3 as it plays now.
Our numbers, measured on both low-spec and high-spec machines, indicate that we have reached tick speeds comparable to the current live version. Comparable means we are slightly slower in the early game, but on par with or slightly faster in the mid to late game, running at speed 5 over 150 to 250 years, starting in 1066.
The following graph shows tick duration over 150 years compared to the current release (higher means longer tick duration, therefore slower):

[Rough Tick Duration Graph - Red: All Under Heaven, Yellow: Current Version]
We also implemented GUI, 3D graphics, and memory-usage improvements, though they were a lower priority than simulation speed.
Caveat: Results will vary by the world you create and the world the simulation creates around you! There is no single number or graph that covers all games, hardware, and play styles, but we have aimed to deliver a playable, stable, fast experience with All Under Heaven.
That’s the short version. If you want more details, more graphs, and more insights, then please read on!
Definitions
Now let’s get back to the regular schedule and start digging into the details.
First, we need to define what we mean when we say speed or performance. Normally, when we talk about performance, we refer to two categories: rendering and simulation. When developing DLCs for Grand Strategy Games, we create more systems and objects to simulate over time. Graphical complexity will also increase with more development, but during DLC development it rarely causes new bottlenecks for us.
However, increased simulation demands put more load on the CPU due to the new features and systems it has to calculate and keep track of. This makes the CPU our most common bottleneck when optimizing for simulation tick speed. So our most common optimization task is to look at where our CPU cycles are spent and where we can be more efficient, in order to keep down the average time-per-tick for a playthrough.
This makes the time-per-tick measurement the most important metric for us to track throughout development, and the one we will be talking about the most in this Dev Diary.
Measuring Performance
So how do you get accurate measurements for tick speed when working with a complex simulation? There are many variables in play that will affect the final outcome. Examples of variables that affect results include: graphical settings, hardware, test length, early- or late-game state, background tasks, and random variation.
All of these variables matter when analyzing how the game performs, but we also want to stress that most of our performance improvements will have a similar effect across different hardware. For that reason, the most valuable test is early-game tick performance on fixed hardware with fixed settings and a fixed random seed. This gives us a quickly repeatable test that lets us track tick-speed trends across code versions during development and spot degradations early.
With that said, some optimizations don't benefit lower-core hardware as much: while early-game simulation is generally just a throughput issue for the code, end-game optimization instead focuses on containing the growth in complexity. We do so by ensuring calculations scale well with a larger simulation in the later stages of the game, for example with more characters or larger realms. For that reason it’s still important for us to do spot checks on low-spec machines and to profile endgame saves, so we avoid focusing only on solutions that require more CPU cores to parallelize computations.
Adaptation
One saying in software engineering (and many other professions) is that “perfect is the enemy of good.” It’s a sure way to spend ages on a feature if we’re determined from the outset that it has to use the most perfect and streamlined code. This is especially true in game development, where we often start out with one design for a feature and later tweak and modify it to better fit the rest of the game and the gameplay feedback on what makes a fun player experience.
At times we know from experience where we will have future performance problems and can plan a more thought-through data architecture to mitigate them, but most of the time the most important thing is to get features up and running to verify that they are fun. After that, we can identify which systems are not performing according to our requirements and start improving them from both a coherence and an efficiency perspective.
The Journey of a Thousand Miles Begins with One Step
What I’ve described so far is our usual approach to performance work for DLC projects. With All Under Heaven, however, this all gets upended by the scale. Beyond having more individual features than any prior DLC, the biggest challenge for simulation tick speed is the addition of two subcontinents. Both East Asia and Southeast Asia are massive additions to the world that we need to simulate.
This was going to be a larger challenge than any previous expansion we had done for CK3. In addition to the typical ~20% slowdowns we would see from unoptimized feature additions, we now also had to deal with a 32% increase in baronies (and rulers) in the game. In our simulation, rulers are the smallest building block for moving the world forward, and their number correlates linearly with the amount of work the CPU must perform. This means that just by putting the rest of Asia on the map, we immediately made the game slower by roughly the same proportion as the size increase.
In the planning stages for AUH we set aside additional time and resources compared to a normal expansion specifically for looking into how to make the game faster and offset the increased simulation scope. We also knew that we couldn’t just rely on easy wins by keeping new features in line with good practices. This time we had to look even deeper into what old systems we had that were holding us back.
With that said, I’ll hand it over to two of our very talented Engineers, Anton and Carl-Henrik, to explain in more depth how we find underperforming components in the codebase and the methods we use to make the game perform according to the principle “gotta go fast.”
Hello, I’m Anton, one of the Senior Programmers who has been on the Crusader Kings III team for many, many years. I’m going to talk a bit about how the code is structured and how we measure and think about performance improvements.
Focusing on the Most Expensive Systems
One approach to performance work is to look at various game systems and what it takes to simulate each system every daily tick. We start with the most expensive system, because optimizing it yields the largest impact for the time we spend on it. Individual systems are also more self-contained and have only a limited number of connections with the rest of the game, which makes each one easier to reason about and easier to focus on in the moment. Our internal tools help us visualize various game features and compare the performance impact of each one.

[Example of a serial update performance graph]
Here’s an example: 25 years of game time aggregated, with average time shown for the most expensive parts of the game. 63 different systems update every day; 44 of them take less than 0.3ms each and can be ignored. For the 19 remaining, we show their names along with the average time and percentage they take each day. Some game systems will always be expensive. During development, different systems may appear with larger or smaller shares of time in the chart, and we evaluate how reasonable it is for a given system to take that much time per day. There were periods when the succession or situation systems were at the top of this graph, a clear indication that something excessive was happening and that we needed to investigate. Situations were updated too frequently when nothing had really changed, and succession was running a very expensive script on too many people. As you can see, both systems are now included in the “other” category and are mostly fine.

Characters and modifiers are typically at the top; they are the core of the game and will always take more time than anything else. This raises a perennially difficult question - how much faster can I make something that will always be expensive? Is it worth spending a few more days, or is it good enough, and should I look for easier gains elsewhere?
The average is not the only important number. This particular graph shows another problem: you can notice that the light orange columns (activities) sometimes spike disproportionately, making certain months much slower than average. We want both a low average time and no big spikes when something important happens. This is a big advantage of such visualization techniques over just looking at aggregate numbers. Activities will be a prime suspect for the next optimization.
Parallel vs. Sequential Updates
The previous graph showed the serial part of the daily tick. The next two graphs show parallel updates.

[Parallel AI update]

[Parallel pre-update of game systems]
It’s very easy to reason about and develop a system with a single-threaded approach. You know things happen in order; they are predictable and repeatable. It makes development faster, and features can be tested and balanced sooner. Usually, only after the feature's skeleton is ready and it starts working in the game does it get connected with the rest of the systems; after that, we may take another pass and make some of the work run in parallel. These two graphs show parallel work during a daily tick. You can still see the total wall-clock time spent on the update every tick; it’s below 20ms for each graph, drawn as a thick red line at the bottom. Notice how much more work gets done in total in parallel. This example is on a PC with about 20 logical cores.
Why not do everything in parallel? We need a deterministic order of observable changes in the game for multiplayer to work, so that all clients compute the same game state. Another important part is, again, the ability to reason about the correctness of the game. Most changes in the game have propagating side effects. A Ruler conquering a new title means that another Ruler is going to lose a title. Loss of a title means changes in income and military power. A Ruler who gets weaker is more endangered by factions and other enemies. During any of those steps, additional events may get triggered. A strictly determined order of actions and cascading side effects is necessary to understand and predict game outcomes.
Keeping that in mind, we’d like to do as much work in parallel as possible. We have an internal framework that allows us to split parts of a feature into sequential and parallel steps; it was covered a long time ago in pre-release Dev Diary 36. The parallel part goes first and is called the “pre-update”. During the pre-update, nothing observable changes in the game: every parallel thread sees the same visible game state, as if it were frozen. In parallel, various systems can do the heavy lifting to independently calculate what should be changed during the next sequential step. Everyone calculates new income independently, every AI actor makes decisions independently, and heavy logic, triggers, and math that can be done in advance are pre-calculated. All this is done to minimize the final amount of sequential changes: apply already-known values instead of doing math, execute the final decision instead of running the whole decision-making process, etc.
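As a rough illustration of this split, here is a minimal C++ sketch; all names (Character, CalculateIncome, and so on) are ours for illustration, not the engine's actual framework:

```cpp
// Minimal sketch of the pre-update/update split described above.
#include <algorithm>
#include <execution>
#include <vector>

struct Character {
    double taxBase = 100.0;
    double income = 0.0;        // observable game state
    double pendingIncome = 0.0; // scratch value filled during pre-update
};

// Heavy, side-effect-free math that only reads the frozen game state.
double CalculateIncome(const Character& c) { return c.taxBase * 0.05; }

// Parallel pre-update: every character is processed independently, and
// nothing observable changes.
void PreUpdate(std::vector<Character>& characters) {
    std::for_each(std::execution::par, characters.begin(), characters.end(),
                  [](Character& c) { c.pendingIncome = CalculateIncome(c); });
}

// Sequential update: apply the precomputed values in a fixed, deterministic
// order so every multiplayer client arrives at the same game state.
void Update(std::vector<Character>& characters) {
    for (Character& c : characters) c.income = c.pendingIncome;
}
```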
Even during sequential updates, we still want to utilize more than one thread, and this part is extra risky and complex. If I can prove that certain modifications can be done irrespective of the order of operations, or that some actions are guaranteed to have only limited side effects, then it’s safe to do parallel work there.
One more important note here: only a small fraction of the game is updated each day. Our updates are separated into daily, monthly, and yearly updates. Only a few crucial systems need to be updated every day; AI movement of units is one such case. Most systems are updated only once a month. It’s a compromise between frequency and efficiency: every day, only 1/30th of all people, all construction, all epidemics, etc. are updated. This spreads expensive updates evenly over the entire duration of the game. The obvious drawback compared to older games is that you never know for sure when an update happens to you, the player. You don’t know the date when you get your personal monthly income; it just happens every 30 days.
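A minimal sketch of how such spreading can work, assuming a simple id-based bucketing that may differ from the real implementation:

```cpp
// Spreading a "monthly" update over 30 daily buckets.
#include <cstdint>
#include <vector>

struct Character { uint32_t id = 0; };

void UpdateCharacterMonthly(Character&) { /* income, aging checks, etc. */ }

// Each character is visited once every 30 days, on the day matching its id,
// so the per-tick cost is roughly 1/30th of updating everyone at once.
void DailyTick(std::vector<Character>& characters, int dayOfMonth) {
    for (Character& c : characters)
        if (c.id % 30 == static_cast<uint32_t>(dayOfMonth))
            UpdateCharacterMonthly(c);
}
```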
Applying Optimization to Specific Systems
Let’s talk about more specific changes to the game - how it all applies to individual game systems.
One example mentioned earlier was the expensive succession update. All Under Heaven introduces China, an enormous realm with unique succession mechanics: lots of people compete for counties, duchies, and kingdoms, and candidates are appointed according to their merit rank and a score based on various properties of a person, their family, and events in their life. At some point it was the most expensive system to update daily, even though we update succession very rarely compared to other systems. That’s how expensive it was. Many individually reasonable decisions led to this outcome: China has lots of titles to appoint, and the design was to allow almost everyone to be a candidate for any county in the entirety of China. This is a quadratic problem that quickly grows out of control. What worked somewhat okay for a big Byzantium, although already too slow, was no longer suitable for China.
The first attempt was to change the order of operations: can we eliminate as many unsuitable candidates as early and as cheaply as possible, so we don’t have to run the expensive scoring logic and math on them? It’s much cheaper to find the best out of 100 people than out of 4,000. Another obvious omission was sequential scoring: computing an appointment score doesn't mutate game state, so it can safely be done in parallel on all people at once rather than one at a time. This alone made the update three times faster, but it was still not good enough; it remained the third-most-expensive system.
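Here is a minimal sketch of both steps, with illustrative names rather than the actual CK3 code:

```cpp
// Cheap filters reject most candidates before the expensive scoring, and
// the scoring itself, being free of side effects, runs in parallel.
#include <algorithm>
#include <execution>
#include <vector>

struct Candidate {
    bool isAdult = true;
    bool isImprisoned = false;
    double score = 0.0;
};

// Trivial field checks: no script execution, no allocation.
bool PassesCheapFilters(const Candidate& c) {
    return c.isAdult && !c.isImprisoned;
}

// Stand-in for the expensive scripted scoring math.
double ExpensiveScriptedScore(const Candidate&) { return 1.0; }

void ScoreCandidates(std::vector<Candidate>& all) {
    // Step 1: a cheap sequential pass shrinks, say, 4,000 candidates to 100.
    std::vector<Candidate*> viable;
    for (Candidate& c : all)
        if (PassesCheapFilters(c)) viable.push_back(&c);

    // Step 2: scoring mutates nothing shared, so it can run in parallel.
    std::for_each(std::execution::par, viable.begin(), viable.end(),
                  [](Candidate* c) { c->score = ExpensiveScriptedScore(*c); });
}
```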
The third step was to go and talk with our designers about their intent. Do we really care whether every courtier in China can be a valid candidate for any county? Surely we can find some compromise. It’s important that any landless ruler has this chance, and every member of a proper noble family should be given a chance to serve the realm. But what about the lowborn? Can we limit their participation to only the titles where they personally live? This design step halved the number of candidates, yielding large performance gains while maintaining the same player experience: players still see important people all around the realm taking jobs, and the system still feels competitive and alive, while taking even less time. It’s always important to keep in mind what we want to achieve with any game system: what matters to the player and what goals the designers have.
The fourth step was less impactful, but still valuable. After the previous three steps, most of the succession update was spent doing scripted scoring math. I was lucky to get suspicious there and look deeper. It was just a single line of script that calculated holding income. One very old trigger had always been fast enough, but with the introduction of governor efficiency for administrative realms, holding income became dependent on it, and incorrect caching made the trigger slow. What was supposed to be a simple return of a precalculated value had turned into a whole sequence of very expensive on-demand math. With the change that made the cached income always valid and available, succession became so fast that it disappeared from the graph into the “other” category. Any other script that used holding income became faster as well.
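A minimal sketch of the caching idea, with assumed names: recompute the value when an input changes, so reads become a plain return:

```cpp
// Keep the cached income always valid instead of recomputing on demand.
class Holding {
public:
    // Called whenever an input (buildings, governor efficiency, ...) changes.
    void OnIncomeInputsChanged(double baseIncome, double governorEfficiency) {
        m_cachedIncome = baseIncome * governorEfficiency; // the "expensive" math
    }

    // Script triggers now just return the precalculated value.
    double GetIncome() const { return m_cachedIncome; }

private:
    double m_cachedIncome = 0.0;
};
```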
Lessons from System-Level Optimizations
One more way to make the game faster is to do things less frequently, or not at all. It turns out the AI for barons was trying to do lots of things that made little sense.
Barons don’t have councils or court positions, yet they were still evaluating all of them; they no longer do. And if it takes about three years to construct a building when you have only a single holding, there is no point in checking more often: barons now only attempt to start new construction every three years.
Lots of game decisions and interactions can never be successfully attempted by barons, as they will fail various triggers, but what if we don’t have to run those triggers in the first place? A sweeping review of availability by ruler tier for those mechanics freed even more time, aaaand in the end the result was not that impressive: overall 0.5-1ms of total daily tick time, which can be overshadowed by random fluctuations in hardware or the current game state. The main cause: proper rulers spend so much time on heavy tasks that optimizing barons was barely worth it. All AI decision-making already runs in parallel as well, so any performance gains end up being spread over multiple cores and are less noticeable as a result.
That’s it from me. Thanks for letting me share my thoughts on performance. I’ll now hand over to Carl-Henrik who has also done a lot of performance work during the development of AUH.
The Full Picture
Dear Daily Tick Enthusiasts: I am Carl-Henrik, Principal Programmer on the team, and I have mostly been working on improving the performance and memory usage of Crusader Kings III!
My personal hobby involves sizecoding on 8-bit and 16-bit CPUs in assembler, so working with performance is close to my heart. I even won a couple of size/performance coding competitions!
I am also fairly new here, so I rely a lot on people around me including Daan, Joel, Jimmy, and Anton, as well as the design team and team partners! (I do cast a long shadow, however, as the mentor of Johan Andersson prior to his time at Paradox)
Generally, I have found that the code I optimize works well, but since it was written we have introduced a large number of titles and characters. The assumptions made at the time no longer hold, and that is where the opportunity to improve performance lies. No code was changed because it was in any way inferior.
Loading
My initial focus was on the game's load screen. As my personal computer at home falls below the minimum specifications for CK3, addressing this would significantly improve my ability to play the game. I was unprepared for the sheer volume of activity, and the only available performance measurement was debug logging, which was intertwined with all other processes occurring during this game phase.
To better analyze load time, I made a new performance-tracking tool that can track the whole sequence, shown in the image below. The graphs show what each CPU is busy loading or setting up; the black portions don’t necessarily indicate that a CPU was idle, as those tasks may have been too quick to show or not included in the loading functions.

[The streaming profiler tool, my current PDT (“Personal Development Time”) project]
While we identified numerous areas for improvement, implementing those changes would have been too extensive an undertaking at the time. I aim to begin enhancing these systems soon.
Memory
Right around the release of Khans of the Steppe, we needed to save some RAM. The minimum spec computer was having trouble running the game and this had previously been researched by our console porting team. Bringing the entirety of Crusader Kings III to modern platforms is an impressive feat, and having access to their work meant we could make these memory improvements fast.
One particular improvement concerned the memory usage of tooltips in the game, which was surprisingly high. Bringing this one change to the PC version saved several gigabytes!
We also looked at memory savings from the updated GUI code in Clausewitz, but those changes turned out to be too different to use for All Under Heaven.
Performance
With the memory improvements in place, we also needed to look at code performance. Early measurements showed daily ticks taking as much as one and a half times as long as in our earlier DLC, Khans of the Steppe.
My approach prioritizes code improvements, while other team members, who specialize in design, focus on elements like scripts and character action frequency within the game.

[A performance sample over 100 years from a work computer (32 cores)]
To monitor optimization progress and identify development-related issues, I run the performance analyzer daily for 100 years. This is done on both my high-spec computer (32 processors, 64 GB RAM, Windows 11) and a lower-spec machine (8 cores, 16 GB RAM).

[A performance sample over 100 years from a low spec (8 cores) test computer]
From the graphs we can investigate any of a multitude of systems for performance issues.
Once a system has been identified and the relevant (slow) code tracked down, we can start making improvements. Most of the time I find things updating other things that don’t all need to be updated.
For example, in the performance graph for Pre Update, Script Variables was one of the top users. Script variables can time out, so in Saved Variables/Script Variables we narrowed the daily check to only the variables that are about to expire: instead of checking around 500,000 variables, only a few hundred need to be tested each tick. This change made the category disappear entirely from the graph!
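One way to structure such a narrowed check is to keep variables ordered by expiry date; here is a minimal sketch with hypothetical names:

```cpp
// Variables sit in a min-heap ordered by expiry date, so each tick only
// touches the handful that are actually due, not all ~500,000.
#include <queue>
#include <vector>

struct ScriptVariable { int id = 0; int expiryDay = 0; };

struct ExpiresLater {
    bool operator()(const ScriptVariable& a, const ScriptVariable& b) const {
        return a.expiryDay > b.expiryDay; // min-heap on expiry date
    }
};

using ExpiryQueue =
    std::priority_queue<ScriptVariable, std::vector<ScriptVariable>, ExpiresLater>;

void RemoveVariable(int /*id*/) { /* hypothetical cleanup hook */ }

void ExpireDueVariables(ExpiryQueue& queue, int today) {
    while (!queue.empty() && queue.top().expiryDay <= today) {
        RemoveVariable(queue.top().id);
        queue.pop();
    }
}
```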
Rendering
Graphics tend to be one of the largest performance categories for games, but even with the latest map updates Crusader Kings III is not significantly impacted in this area. To improve graphics performance, we introduced Adaptive Frame Rate. This new setting benefits computers with fewer than 10 processors by subtly lowering the rendering frame rate when there is minimal screen activity. It should be enabled with the low-quality graphics preset.
Adaptive Frame Rate works together with the Maximum FPS setting, which has a new default option called “Display refresh rate”. Previously, disabling VSync and leaving the maximum FPS off led to a significant decrease in performance, as the frame rate was uncapped by default. For optimal performance with minimal rendering impact, enable Adaptive Frame Rate and set the Maximum FPS to 30.
The updated map and other improvements have increased the detail for the GPU to process. This may lead to longer frame processing times and a lower screen-update rate, potentially making the interface feel less responsive. While this does not affect the daily tick rate as the CPU and GPU work in parallel, we plan to investigate ways to better balance this workload. This optimization has not yet been prioritized alongside other planned improvements.
Complexity
Many optimizations are straightforward and easy to reason about, but sometimes you find something small and innocent looking that can drag you deep into a rabbit hole.
There is a function that does a few checks and runs a script (if the checks pass). It is about 30 lines of code but contained enough performance issues that improving it took over two weeks.
The slowest line of code was a call to IsScopeOK, which basically just checks that the script scope (parameter) matches what the Trigger or Effect is expecting. Making some trivial improvements yielded no results, which simply indicates that the compiler had already done that work for us.
Even though the function is not inherently slow, the sheer volume of scripts executed per tick makes this the most resource-intensive aspect of the entire game.
The actual problem with this line turned out to be that, to check the scope, each Trigger generated a 128-bit flag field and compared it with a 128-bit flag field generated from the scope. Replacing this with a simple comparison of the scope numbers greatly improved performance.
Unfortunately, Triggers and Effects can accept multiple kinds of scopes, so if the initial check fails, the bit-flag test is still necessary as a fallback; even so, it is a great improvement.
In most cases, Triggers and Effects return the scope number, but there is also a concept of Links that only return the scope bit flags. I ended up spending two days going through all of these classes and adding the missing functions.

[IsScopeOk code block rewrite that puts the less expensive checks at the top]
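In the same spirit as the rewrite above, here is a minimal sketch of the fast path with a bit-flag fallback; the types and names are illustrative assumptions, not the actual IsScopeOK implementation:

```cpp
// Cheap integer comparison first; the 128-bit flag comparison only runs
// as a fallback for Triggers that accept multiple scope kinds.
#include <bitset>
#include <cstdint>

using ScopeFlags = std::bitset<128>;

struct Trigger {
    uint8_t expectedScope = 0; // single expected scope, the common case
    ScopeFlags acceptedScopes; // full mask, one bit per accepted scope kind
};

bool IsScopeOk(const Trigger& trigger, uint8_t actualScope,
               const ScopeFlags& actualFlags) {
    // Fast path: a plain number comparison.
    if (trigger.expectedScope == actualScope)
        return true;
    // Slow fallback: full 128-bit flag test for multi-scope triggers.
    return (trigger.acceptedScopes & actualFlags).any();
}
```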
The Trigger/Effect code also seemed to suffer from cache misses, and adding a cache prefetch helped a bit more.
But this was just one line of the function. We have an in-game script profiler that we use extensively when trying to improve script performance. It should only have a performance cost when in use, so this required a bit more investigation.
We need to remember the filename and line number of the script, and the data type storing that information had an inefficient implementation; fixing it helped quite a bit. However, even the efficient version is redundant work when the profiler is not in use, so deferring construction with an in-place new instead of the default constructor made this line acceptable.
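Here is a minimal sketch of the in-place-new approach, with hypothetical names; the real data type differs:

```cpp
// Only construct the source-location data when the profiler is active,
// via placement new into preallocated raw storage.
#include <new>
#include <string>

struct SourceLocation {
    std::string filename;
    int line = 0;
};

class ScriptEntry {
public:
    // The default path pays nothing: no constructor runs at all.
    void RecordLocation(bool profilerActive, const std::string& file, int line) {
        if (!profilerActive || m_location)
            return;
        m_location = ::new (m_storage) SourceLocation{file, line};
    }

    ~ScriptEntry() {
        if (m_location) m_location->~SourceLocation(); // manual destruction
    }

private:
    alignas(SourceLocation) unsigned char m_storage[sizeof(SourceLocation)];
    SourceLocation* m_location = nullptr;
};
```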
While time-consuming, each step of experimentation, testing, and profiling proved invaluable due to the significant performance improvements achieved.
Simplicity
Sometimes making things simpler also makes them faster. Every character holds a lot of opinions about other characters, and the strength of each opinion can change over time. The system that tracks opinions cleverly cached the current opinion value and calculated the future date when the opinion would change by 1 point.
To get the current opinion value between two characters, it just needed to be found in the cache, and each day every opinion was checked to see whether it was time to step it. The check used a sorted list, so only the opinions about to change needed to be considered. Unfortunately, after each step the new date had to be re-inserted into the list, so this "improvement" grew more expensive as we added more characters.
It turns out the calculation is simply a multiplication by a change-rate constant and an addition, so removing the cache and just calculating the opinion on the fly was much faster than maintaining a sorted array.
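As a minimal sketch of the on-the-fly calculation (the names, bounds, and linear-decay form are our assumptions for illustration):

```cpp
// A decaying opinion is fully determined by its start value, rate, and
// start date, so no cache or sorted update list is needed.
#include <algorithm>

struct OpinionModifier {
    double startValue;   // value when the modifier was applied
    double changePerDay; // e.g. -0.1 for an opinion decaying over time
    int startDay;        // game date when the modifier was applied
};

// One multiply and one add per lookup; no per-tick bookkeeping at all.
double CurrentOpinion(const OpinionModifier& m, int today) {
    const double value = m.startValue + m.changePerDay * (today - m.startDay);
    return std::clamp(value, -100.0, 100.0); // assuming bounded opinions
}
```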
Defeat
Occasionally I try to improve the performance of a function only to discover that overall the change is not worth it.
Looking up modifier values is a frequent operation during a daily tick, so speeding up this code should offer significant benefits. I experimented with two methods that substantially improved function duration. However, these improvements were offset by equivalent slowdowns in other systems, resulting in no overall gain. Despite considerable time spent on these code changes, they were ultimately discarded.
Performance from a Script & Design Perspective
Hi all, Daan here, Senior Programmer on CK3 and coordinator for the Code discipline on All Under Heaven. I am also the design “feature steward” for China specifically.
So that puts me in a good position to talk about how we worked on performance from a Design and “script” perspective! Note that I did not do most of the script tweaking myself, but I offered tools and suggestions for doing so.
Optimizing our Script
Our script files, shipped as text files with the game, are responsible for a large portion of our game mechanics and offer great power to our Designers, and to our Modders as well.
With that great power comes… some responsibility - but also a big downside: our Paradox Script is not as fast as the equivalent C++ code! And it can be fairly easy to accidentally make something slower than it needs to be.
To counteract that, we have added tooling in recent releases that allows us to find slow scripts. The Script Profiler, for example, is already accessible in-game when running in dev-mode.
Using these tools we went through almost all script files in the game looking for performance gains. We reshuffled trigger orders, simplified structures, used “equivalent but slightly faster” micro-optimizations, and a lot more. I will apologize to the modders who might prefer that we keep our scripts largely unchanged: we adjusted a lot while optimizing!
We also looked at our most used triggers & effects and optimized the C++ side of these as well. Sometimes we even created new triggers - a good example is the aptly but awkwardly named `is_available_quick` trigger:

[Generated script documentation showing the ‘is_available_quick’ trigger]
This new trigger allows us to check a number of already existing availability ‘status’ checks in one go, because we discovered we check them together a lot. It results in logic functionally equivalent to calling each of these separately, but under the hood it is a single “smarter” and faster block of C++.
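As a rough illustration of why the combined check is cheaper, here is a minimal C++ sketch; the individual status checks are our guesses at what “availability” covers, not the actual list:

```cpp
// One function call from script instead of several separate trigger
// lookups; short-circuiting stops at the first failed check.
struct Character {
    bool isImprisoned = false;
    bool isIncapable = false;
    bool isAtWar = false;
    bool isTraveling = false;
};

bool IsAvailableQuick(const Character& c) {
    return !c.isImprisoned && !c.isIncapable && !c.isAtWar && !c.isTraveling;
}
```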
Similarly, we have added ‘shortcut’ options to a number of script-list generators for often-used filters, speeding up scripts with only text tweaks.
Script Order Optimizer
We also investigated and implemented an automatic script order-optimizer, which reorders trigger evaluation according to weights, gaining performance by letting execution stop earlier.
But we decided to leave this script optimizer disabled for now, because we were not seeing consistent enough performance improvements from it. It is a good example of something you can encounter when working on performance: one performance improvement can eat into the gains of another! In this case, our other script optimizations had already snipped away the biggest gains this tool could offer.
We may yet enable this if we deem it statistically useful!
Regardless, we will use it for future meta-programming improvements for the script language.
For the curious, you can enable the basic version of it by running the game with the command-line option `-script_optimizer`. I could write another dev diary on this tool and its options, but I will leave that for another day.
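To illustrate the underlying idea, here is a minimal sketch using a classic expected-cost ordering heuristic; the names are hypothetical, and the real optimizer, driven by measured weights, is considerably more involved:

```cpp
// Sort AND-ed conditions so that cheap checks that usually fail run first,
// letting evaluation short-circuit as early as possible.
#include <algorithm>
#include <functional>
#include <vector>

struct Condition {
    std::function<bool()> check;
    double cost;     // measured average evaluation time
    double passRate; // measured fraction of evaluations that pass (< 1.0)
};

// Sort once, up front, using collected weights; not on every evaluation.
void OptimizeOrder(std::vector<Condition>& conditions) {
    std::sort(conditions.begin(), conditions.end(),
              [](const Condition& a, const Condition& b) {
                  // Rank by cost per unit of "filtering power".
                  return a.cost / (1.0 - a.passRate) <
                         b.cost / (1.0 - b.passRate);
              });
}

bool EvaluateAll(const std::vector<Condition>& conditions) {
    for (const Condition& c : conditions)
        if (!c.check()) return false; // stop at the first failure
    return true;
}
```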
Flexible Frequency
Sometimes we simulate things we shouldn’t: things that do not matter, or things that will not change as frequently for certain character types. A lot of our script logic did not have easy tooling for adapting AI frequency in a smart way. For example, most character interactions were considered just as frequently by a Baron in Iceland as by the Emperor of China.
So we adapted a number of our systems (interactions, decisions, and more) to support configurable tier-based frequencies for AI consideration. This gives us a statically defined way to say how frequently a specific tier of ruler should consider something, and, very importantly, when they never should (a frequency of zero).
For example, we have the ‘release_as_tributary_interaction’ for Nomadic, Mandala, and Wanua rulers. Should Barons or Counts consider this? No, never; they are mechanically unable to do so. Dukes? Maybe… not very important for them. Yet with our old interaction definition logic, we had to run a relatively ‘expensive’ script check every year, for every ruler.
With tier-based AI frequency definitions we can eliminate this check entirely and put the interaction in a bucket that never gets checked by a Count or a Baron (‘never’ is something we can optimize around easily!).

[Configuration of the AI frequency of the ‘release_as_tributary_interaction’ interaction]
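As a minimal sketch of what such a configuration could look like on the code side (names, tiers, and values are illustrative assumptions):

```cpp
// Tier-based consideration frequencies. A frequency of zero means the
// interaction is never checked for that tier, so its script never runs.
#include <array>
#include <cstddef>

enum class Tier { Baron, Count, Duke, King, Emperor, COUNT };

struct InteractionFrequency {
    // Months between AI considerations per tier; 0 means "never consider".
    std::array<int, static_cast<std::size_t>(Tier::COUNT)> monthsBetweenChecks;
};

// Illustrative configuration for something like release_as_tributary:
// barons and counts never check it, higher tiers check it occasionally.
constexpr InteractionFrequency kReleaseTributary{{0, 0, 60, 24, 12}};

bool ShouldConsider(const InteractionFrequency& freq, Tier tier,
                    int monthsSinceLastCheck) {
    const int interval = freq.monthsBetweenChecks[static_cast<std::size_t>(tier)];
    if (interval == 0) return false; // never evaluated: zero script cost
    return monthsSinceLastCheck >= interval;
}
```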
As an important bonus, this also allows us to have higher-tier AI rulers consider things more frequently without losing performance on lower-tier characters, making the game AI more responsive when it matters.
We’ve added this style of tier-based frequency configuration to almost all character interactions, decisions, activities, and great projects: about 550 types of “things” in total that our ruler AI considers doing.
Eliminating Stray Nobodies
One of the more commonly used mods out there to make the game faster is the “Population Control” mod - which starts eliminating people when the world’s population reaches a certain number.
Now, we haven’t done something exactly like that, but we have tried to combat the underlying issue: we accumulate characters that don’t do much, and after a couple of hundred years these can number in the tens of thousands.
So we went and reduced or eliminated a bunch of ‘sources’ that generate boring, useless, random, or invisible characters, who then linger in guest pools and on the sidelines of the courts of barons and counts, taking up space and costing performance.
At the same time, we keep the more interesting characters, those with a bit of history, around for longer, and we draw them into events more, often having them replace a randomly generated nobody.
This allowed us to trim the useless fat that accumulates in long-running games, reducing the strain that even relatively light non-ruler characters place on the simulation systems.
Conclusion
So let’s take a look at where we are now, a couple of weeks before release at the time of writing. We initially measured daily ticks taking nearly one and a half times as long, and set a goal of being no more than marginally slower than our previous release.
We’ve added significantly more map area and, with it, significantly more characters.
With just a couple of weeks until release, we are in the final stages of development and implementing daily changes. We’ve made major performance improvements: the game now runs much closer to the current release’s speed than our original tick-rate target demanded. This has been confirmed on both the high-end and low-spec computers used for testing.
We’re hopeful we can get it a little bit faster than that once the dust settles from the final stretch.
Play fast!
Updated Minimum Hardware Specs
And then some closing words about the minimum hardware requirements from me, Jimmy, the Technical Director for Studio Black. First of all, I am amazed at the work the team has done on performance for this very technically demanding expansion that is All Under Heaven.

As part of our performance review for All Under Heaven, we revisited our minimum hardware requirements, specifically the CPU baseline. As has been mentioned in this Dev Diary, after months of optimization work we’ve recovered nearly all of the slowdown seen during early development. From a big initial increase in daily tick duration at the start of development, the game now runs comparably to our current live version (with some fluctuation between the early and late game), even with the expanded map and heavier simulation. These tests have been done on both high-end and low-end hardware. In other words, this update to the hardware specs isn’t about unoptimized code; it’s about ensuring the game runs reliably for everyone.
During extensive testing by our hardware lab, we could see that older CPUs like the Intel i3-2120, released back in 2011, struggled to behave predictably under heavy load, especially in systems with just 6 GB of RAM. This was our original 2020 minimum spec: impressive longevity for hardware that’s nearly fifteen years old. But to ensure a consistent experience, we’re now updating the minimum CPU recommendation to an Intel i5-750 or AMD FX-4300, paired with 8 GB of RAM. These configurations are still modest by today’s standards, but they provide the stable and predictable performance needed for what Crusader Kings is today, five years after release.