Kerbal Space Program (KSP) is a computer game in which the player can build rockets (and spaceplanes, rovers, etc.) from a large set of predefined parts and then pilot them in a simulated solar system. In the simulation, every vessel in space travels according to Kepler's laws (i.e., the vessel's path follows a conic section, as though it feels gravity from only one planetary body). This simplification has been the topic of many conversations in the game's forums and I thought it would be interesting to explore whether and how more accurate trajectories for vessels could be computed using Newton's laws. I believe that using Newtonian trajectories in KSP is possible, although whether there is a compelling gameplay reason to make the change is a different question.
The approach to space travel currently used by KSP is called patched conics, because the path taken by a vessel moving from one planetary body to another takes the form of pieces of conic sections patched together as the vessel moves through the spheres of influence (SOIs) of different planetary bodies. The planetary bodies themselves in KSP's solar system also follow elliptical orbits. Thus all objects in space in KSP follow Kepler's laws of planetary motion. In a system containing several bodies that exert gravity (i.e., the sun and multiple planets and moons), these conic sections are only approximations; the true path of an object subject to Newton's laws of motion is more complex. But the patched conics approach has the advantage of simplicity, both for the game's developers and for the player's computer. A Keplerian orbit is fully defined by just six parameters (Keplerian orbital elements), and these parameters remain constant as the object follows the orbit over time. This makes it easy to display an object's projected orbit in map view in the game and to step quickly forwards in time (time warp) without concerns about numerical accuracy.
An alternative to this approach is to calculate a vessel's motion by applying Newton's second law of motion, using Newton's law of universal gravitation to calculate the acceleration on the vessel caused by the planetary bodies in the solar system. This approach comes in two flavors: one can continue to use Kepler's laws for the planets and moons and just integrate the motion of vessels using Newton's laws. Or, one could do a true 'n-body' simulation in which the motion of the moons and planets is also calculated using Newton's laws. It is the first flavor that I am looking at here: I leave the planets and moons in the Keplerian elliptical orbits they have now, and I look at the motion of vessels subject to Newtonian mechanics in the solar system.
This piece is generally aimed at the reader who is familiar with KSP and related concepts of space travel, but is not necessarily familiar with numerical methods. The piece roughly follows the path of my own investigations, and so it has gotten pretty long. Sorry; feel free to skip to the end if you just want the bottom line. My hope is that this approach will help the interested reader understand some of the basic issues involved with numerical solution techniques and their application to KSP.
I start by looking at the performance of different numerical integration schemes when applied to orbits where I know what the exact analytical solution should be. By comparing the numerically integrated solution to the exact solution, I can evaluate how accurate the numerical scheme is. Next, I look at how much deviation there is from Keplerian orbits for objects placed in the KSP solar system and compare that to the numerical error, and also at how much drift there is in different orbits over a long period of time. I also look at the special case of objects placed at a Lagrangian point (L-point) in the solar system. I then look at the speed of this numerical integration and a solution to the problem of doing the integration over large time steps while still showing the position of vessels every frame of the game. I sum everything up with a proposal for how Newtonian trajectories could be implemented in KSP.
In calculating an object's trajectory, the basic problem is to solve Newton's second law of motion,
$$ \boldsymbol{A} = \frac{d}{dt}\left(\frac{d\boldsymbol{x}}{dt}\right) = \frac{\boldsymbol{F}(\boldsymbol{x}, t)}{m}, $$
where the force on the object is due to gravity from each body (the sun, planets, and moons) in the solar system,
$$ \boldsymbol{F}(\boldsymbol{x}, t) = \sum_i \frac{-\boldsymbol{\hat{r}}_i G M_i m}{|\boldsymbol{r}_i|^2} = m \sum_i \frac{-\boldsymbol{\hat{r}}_i \mu_i}{|\boldsymbol{r}_i|^2} . $$
Here, x is the location of the object (a three-element vector holding the object's x, y, and z coordinates), A is the acceleration experienced by the object, and m is the object's mass. In the second equation, G is Newton's universal gravitational constant, M_i is the mass of planetary body i, and r_i is the vector pointing from that body to the object. The total force felt by the object, F, is the sum of the gravitational forces from each body in the solar system. The product of a planetary body's mass and the gravitational constant is often combined into the gravitational parameter, μ.
The first equation is a second-order differential equation (so-called because it contains the second derivative of the variable we are solving for, x). However, most numerical solution techniques work on first-order differential equations. The problem is easily solved: I simply replace the single second-order differential equation with a system of two first-order equations,
$$ \frac{d\boldsymbol{v}}{dt} = \sum_i \frac{-\boldsymbol{\hat{r}}_i \mu_i}{|\boldsymbol{r}_i|^2}, \qquad \frac{d\boldsymbol{x}}{dt} = \boldsymbol{v}, $$
that we solve simultaneously for both position and velocity. The combination of the object's position and velocity is its state. The object's state at a given time gives us enough information to calculate its trajectory. (By the way, notice how the state consists of six numbers -- three position coordinates and three velocity components -- the same amount of information needed to fully define a Keplerian orbit. It's not a coincidence!)
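To make this concrete, here is a minimal sketch in C++ (the language I used for my own tests) of what the state and the right-hand side of this system can look like. The type and function names are just illustrative, and the body positions are shown as stored constants; in a real implementation they would be computed from each body's Keplerian elements at time t.

```cpp
#include <cmath>
#include <vector>

// Minimal 3-vector with just the operations needed here.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double norm() const { return std::sqrt(x * x + y * y + z * z); }
};

// A gravitating body: its gravitational parameter mu = G*M and its position.
struct Body {
    double mu;      // m^3/s^2
    Vec3 position;  // m (in the full code: evaluated from Keplerian elements at time t)
};

// The object's state: position and velocity (six numbers in total).
struct State {
    Vec3 x;  // m
    Vec3 v;  // m/s
};

// dv/dt: the sum of the gravitational accelerations from all bodies.
// This is the right-hand side of the first equation; dx/dt = v is the other.
Vec3 acceleration(const Vec3& x, double t, const std::vector<Body>& bodies) {
    (void)t;  // unused in this simplified sketch; the real code would use it
    Vec3 a{0.0, 0.0, 0.0};
    for (const Body& b : bodies) {
        Vec3 r = x - b.position;                     // vector from body to object
        double dist = r.norm();
        a = a + r * (-b.mu / (dist * dist * dist));  // -mu * r_hat / |r|^2
    }
    return a;
}
```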
Because F(x, t) is a rather complicated function of x and t, it is not possible to solve this equation analytically (that is, we cannot get a function x(t) = ...). So we must approximate the solution numerically instead, advancing in discrete time steps. The time step size is denoted h, i.e. t_{n+1} = t_n + h. We use a numerical integration scheme to advance the solution one time step, i.e. determine x_{n+1} and v_{n+1} from x_n and v_n. With an integration scheme and a known starting state t_0 = 0 s, x_0 = (..., ..., ...) m, v_0 = (..., ..., ...) m/s, we can march forwards in time to determine the object's state at any desired time.
There are many numerical integration schemes. The simplest and most straightforward scheme, conceptually, is to just update the velocity based on the acceleration at the current time step, and update the position based on the current velocity,
$$ \boldsymbol{x}_{n+1} = \boldsymbol{x}_n + h \boldsymbol{v}_n, \qquad \boldsymbol{v}_{n+1} = \boldsymbol{v}_n + \frac{h \boldsymbol{F}(\boldsymbol{x}_n, t_n)}{m}. $$
This is called the Euler integration scheme. It is simple but, as will be seen shortly, it is not very accurate. Note that we only have to calculate the forces on the object once to advance to the next time step.
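As a concrete illustration, an Euler step written with the helper types from the sketch above could look something like this:

```cpp
// One Euler step of size h: update position using the current velocity,
// then velocity using the acceleration at the current position and time.
State eulerStep(const State& s, double t, double h, const std::vector<Body>& bodies) {
    State out;
    out.x = s.x + s.v * h;
    out.v = s.v + acceleration(s.x, t, bodies) * h;
    return out;
}
```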
Another integration scheme is the Verlet method. This scheme was published by Loup Verlet in 1967 for use in molecular dynamics simulations. These schemes are going to get increasingly complicated, so I will just provide a link to Wikipedia's explanation of the Velocity Verlet (VV) scheme (opens in new window, as do all off-site links), which is what I used. Even though the gravitational forces have to be calculated at two different times (t_n and t_{n+1}), the F(x_{n+1}, t_{n+1}) of this time step is the same as the F(x_n, t_n) of the next time step. So the forces only need to be calculated once per time step (same as for Euler integration), provided the final forces are saved from one step to the next. This scheme is second order accurate (more on that later), but only if the forces depend only on x and not on v.
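A velocity Verlet step, again using the helpers sketched earlier, might look roughly like this. Note how the acceleration computed at the end of one step can be handed back in as the starting acceleration of the next step, so there is still only one force evaluation per step:

```cpp
// One velocity Verlet step. 'aOld' is the acceleration at the start of the step;
// the acceleration at the end of the step is returned through 'aNew' so the
// caller can reuse it as 'aOld' on the next step (one force evaluation per step).
State velocityVerletStep(const State& s, double t, double h,
                         const std::vector<Body>& bodies,
                         const Vec3& aOld, Vec3& aNew) {
    State out;
    out.x = s.x + s.v * h + aOld * (0.5 * h * h);
    aNew = acceleration(out.x, t + h, bodies);
    out.v = s.v + (aOld + aNew) * (0.5 * h);
    return out;
}
```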
I also looked at third-, fourth-, and fifth-order accurate Runge-Kutta schemes (RK3, RK4, RK5). These methods are increasingly expensive in terms of calculations per time step: RK3 requires three calculations of the forces per time step, RK4 requires four, and RK5 requires six. Why bother using these, then? Well, the expectation is that the increased computation needed per time step is offset by better accuracy at larger time steps.
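To show the pattern of multiple force evaluations per step, here is a sketch of one step of the classic textbook fourth-order Runge-Kutta tableau (not necessarily the exact RK4 variant behind the figures below), reusing the earlier helpers:

```cpp
// Derivative of the state: dx/dt = v, dv/dt = acceleration(x, t).
struct Deriv {
    Vec3 dx;  // = v
    Vec3 dv;  // = a
};

Deriv derivative(const State& s, double t, const std::vector<Body>& bodies) {
    return {s.v, acceleration(s.x, t, bodies)};
}

// One classic RK4 step: four force evaluations per step.
State rk4Step(const State& s, double t, double h, const std::vector<Body>& bodies) {
    Deriv k1 = derivative(s, t, bodies);
    Deriv k2 = derivative({s.x + k1.dx * (h / 2), s.v + k1.dv * (h / 2)}, t + h / 2, bodies);
    Deriv k3 = derivative({s.x + k2.dx * (h / 2), s.v + k2.dv * (h / 2)}, t + h / 2, bodies);
    Deriv k4 = derivative({s.x + k3.dx * h, s.v + k3.dv * h}, t + h, bodies);
    State out;
    out.x = s.x + (k1.dx + k2.dx * 2.0 + k3.dx * 2.0 + k4.dx) * (h / 6.0);
    out.v = s.v + (k1.dv + k2.dv * 2.0 + k3.dv * 2.0 + k4.dv) * (h / 6.0);
    return out;
}
```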
I compared these numerical integration schemes by simulating one circular orbit around Kerbin. I chose a low orbit, just outside the atmosphere (75 km altitude; 675 km radius from the center of the planet). These tests involve Kerbin alone (no Kerbol, no Muns, etc.). That way, I can calculate exactly where the object should be after one orbit using Kepler's laws. For each scheme I computed orbits using a range of time step sizes and then calculated the distance between where the simulated object ended up and where the exact solution says it should be. The results are shown in Fig. 4.1. There are a number of features to note here.
First, I mentioned order of accuracy earlier. A scheme's order of accuracy is how quickly its error decreases as the time step size decreases, and on this log-log plot the slope of a curve reveals the scheme's order of accuracy. The Euler scheme is first-order accurate, meaning its error varies linearly with the step size: cutting the step size by a factor of ten reduces the error by a factor of ten as well. The Verlet scheme is second order accurate, so cutting h by a factor of ten reduces its error by a factor of one hundred. The RK3, RK4, and RK5 schemes are 3rd, 4th, and 5th order accurate, so their errors shrink even faster as h is reduced. A higher-order scheme is not necessarily more accurate than a lower-order one, although it usually is; a higher order just means it gains accuracy more quickly as h is reduced. (Looked at the other way, a higher-order scheme gets worse more quickly as h is increased!)
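Expressed as a formula, a p-th order accurate scheme accumulates an error that behaves roughly like

$$ e(h) \approx C h^p, \qquad \text{so} \qquad \frac{e(h/10)}{e(h)} \approx 10^{-p}, $$

where C is a constant that depends on the scheme and on the problem being solved.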
For the range of conditions I'm showing here, higher-order methods are always more accurate than lower-order methods for the same step size. I cut the plot off at a maximum error of 10 km; if more than 10 km of error builds up in a single orbit, the simulation is definitely bad! The Euler scheme is clearly inadequate. For the smallest step size I looked at, 0.1 ms, the method still built up an error of 10 m after one orbit. Using such small time steps is out of the question, especially when one can switch to the VV scheme and get the same accuracy using a step size of 1 s (ten thousand times larger!). In fact, the Euler method is known to be inadequate for this type of simulation, and I have included it here only because it is the first thing one is likely to think of when first encountering numerical integration. For any desired error level, it is always advantageous to use a higher-order scheme. For an error of 1 cm after one orbit, the VV scheme requires a time step of ~25 ms, RK3 requires a time step of ~800 ms, and RK5 requires a time step of ~25 s. The reduction in the number of time steps achieved in going to higher-order methods more than offsets the increased computations needed per time step.
There is a minimum possible error that can be achieved, which all the schemes eventually reach except for Euler. I performed these simulations with the center of Kerbin displaced by about 16 Gm from the origin (as it would be in the Kerbol system if the sun were placed at the origin). The smallest error that is achieved, about 10 μm, is therefore a relative position error of O(10⁻¹⁵). That's about the limit of machine precision for the double precision floating point I used in my calculations. The error can't get much smaller than that no matter what integration scheme is used. Note that the minimum possible error increases slowly as h is reduced further. As the time steps get smaller, the distance traveled in each time step gets smaller as well, and the effects of the limited machine precision get somewhat larger. The minimum error that can be achieved gets smaller with higher-order methods because they hit the machine precision limit at larger time steps.
Does this mean that we can choose the RK5 integration scheme and a step size of 25 s, and go on our merry way? Well, we've only looked at one orbit so far. Figure 5.1 shows the position error after one orbit of Kerbin using RK5 for different step sizes and different orbital radii. (Note that the x-axis is distance from the center of the planet, not from the surface. The lowest orbit shown has radius 675 km = 75 km altitude.) The highest orbit shown, 84 Mm, is the limit of Kerbin's SOI in KSP. Clearly, the maximum step size we can use grows as we go to higher orbits. This is a good thing; a very high orbit takes more than 2 Ms to complete, and we would much rather compute it 10 ks at a time than 25 s at a time. On the other hand, trying to compute a lower orbit with such a large time step would be disastrous. And the player will not restrict him/herself to circular orbits, of course! So how do we determine the appropriate time step for each situation?
As it happens, the particular RK5 scheme I used is part of a so-called embedded RK scheme, specifically Dormand and Prince's RK5(4)7M. Follow the link for the original paper, but briefly, an embedded Runge-Kutta scheme allows one to calculate two solutions, one of higher order than the other, at the same time. Since, as we saw in Fig. 4.1, a higher-order solution is usually more accurate, we can take the difference between the two solutions as an estimate of the error in the solution. That error estimate is then used to adjust the step size at each time step to keep the error below some desired limit.
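The step-size control itself is not complicated. A common recipe (and roughly the idea I followed) is: take a step, measure the difference between the 5th- and 4th-order results, and scale the step size by the ratio of the allowed error to the measured error, raised to the appropriate power. A sketch, with typical (but by no means mandatory) safety factors:

```cpp
#include <algorithm>
#include <cmath>

// Given the error estimate of the step just taken (the distance between the
// 5th- and 4th-order solutions), the error we are willing to accept per step,
// and the step size that was used, suggest the next step size.  Returns true
// if the step should be accepted, false if it should be redone with the
// smaller step size.
bool controlStepSize(double errEstimate, double tolerance, double& h) {
    const double safety = 0.9;                    // back off a little from the ideal step
    const double minScale = 0.2, maxScale = 5.0;  // don't change h too abruptly
    double scale;
    if (errEstimate <= 0.0) {
        scale = maxScale;                         // error negligible: grow as fast as allowed
    } else {
        // The embedded error estimate scales like h^5, hence the 1/5 power.
        scale = safety * std::pow(tolerance / errEstimate, 0.2);
        scale = std::min(maxScale, std::max(minScale, scale));
    }
    bool accept = (errEstimate <= tolerance);
    h *= scale;                                   // step size for the retry or the next step
    return accept;
}
```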
Figure 5.2 shows the value of an embedded scheme by calculating the error accumulated in a very highly elliptical orbit of Kerbin. This orbit has an apoapsis of 80 Mm and a periapsis of 675 km. The figure shows the total position error after one orbit for different average step sizes.
So, it appears that the ideal approach (out of the schemes I looked at) is to use a high-order accuracy method that allows for adaptive time step size. For circular orbits, we can expect that such an approach can result in less than 1 cm of numerical error after one orbit, although the minimum error we can achieve gets larger for highly elliptical orbits. My next question was, is this accuracy good enough? That is, can these small errors build up over time to cause vessels to fall out of orbit, and are the errors small compared to the orbital perturbations caused by other bodies in the solar system?
In the previous section I looked at orbits around a single planetary body (Kerbin) in isolation, a situation in which there is an analytical solution I could use to evaluate the simulated trajectories. Now that I know what it takes to simulate an orbit accurately, I want to see how orbits behave in a system with multiple bodies. In this section I look at a subset of the Kerbol system consisting of Kerbol, Kerbin, Mun, and Minmus. The masses and orbital parameters of all four bodies are taken from the KSP wiki. The planetary bodies follow Keplerian orbits according to their Kepler orbital parameters. Kerbol is placed at the origin of my coordinate system, so I expect to be able to achieve numerical accuracies for circular orbits the same as shown in Fig. 4.1.
I start this section by looking at the same circular orbits around Kerbin that I started with previously. Once again, I look at the difference between the numerically integrated position after one orbit and that of a Keplerian orbit. The results are shown in Fig. 6.1, but this time the difference isn't numerical error; instead it's the effect of the other bodies and of the motion of Kerbin in its orbit. I look at two cases, one in which there is only Kerbin orbiting Kerbol, and one in which the moons Mun and Minmus are also present. For each case, I look at three different orientations of orbits: equatorial, polar parallel, and polar perpendicular (i.e. the x-y, x-z, and y-z planes).
For the no-moons case, the polar perpendicular orbit always has a somewhat smaller error than the other two orientations, perhaps because only in this orientation does the orbit maintain a nearly constant distance from Kerbol. The position error is less than 1 m for low orbits, and increases steadily until very high orbits are reached. In contrast, when the moons are included the error is always at least 200 m after one low orbit, and the error varies erratically for orbital radii above 8 Mm. The reason for this is, of course, the proximity of the moons in the higher orbits. Mun and Minmus have orbital radii of 12 Mm and 47 Mm respectively.
For most cases, the error after one orbit shoots up very rapidly for the largest orbits, and above about 70 Mm the error is even larger than the orbital radius. That indicates that the craft isn't really in orbit around Kerbin any more. Figure 6.2 shows the paths for 1.5 orbital periods for some of the larger equatorial orbits with the moons present, with Kerbin at the origin of the plot. While the 40 Mm orbit doesn't show much perturbation at this scale, the 50 Mm orbit is definitely following a different path after one full orbit and the 60 Mm orbit is more than 10 Mm below its starting position at the end of one orbit. The 70 Mm trajectory gets a slingshot around one of the moons and departs from Kerbin, and the 80 Mm trajectory just wanders away.
My next question was, if there is so much deviation from a Keplerian trajectory after one orbit, what happens over a longer period of time? Will a low orbit eventually drop down into the atmosphere, or can Mun eventually slingshot a vessel out of Kerbin orbit entirely? Currently in KSP, only the vessel that the player is focused on can take any action at all. It would be no good to have the player's space station in low Kerbin orbit reenter the atmosphere while he or she was watching an interplanetary ship fly out to Jool at high time-warp. So, I ran the previous simulations for a much longer period of time. I used the initial position and velocity of the orbit to define a plane (relative to Kerbin) normal to the initial velocity. I recorded the location of the craft each time it crossed the plane travelling in the same direction as its initial velocity.
Figures 7.1 and 7.2 show the evolution of two orbits over a period of 30 Ms (347 days, or three and one quarter Kerbin years). The radius of the lower orbit (Fig. 7.1) oscillates with an amplitude of about 17 km over time and drifts longitudinally very slightly (less than 20 m). The longitudinal drift is due to the influence of Minmus, as all the other bodies (Kerbol, Kerbin, and Mun) are in the same plane as the original orbit. The large (17 km) oscillations in radius are due to the Mun and have the same period as the Mun's orbit. An FFT of the radius reveals a very small oscillation with the same period as Minmus's orbit, but it is much too small to see on the scale of this figure. For the higher orbit, the variation in the orbit is much larger; the radius varies by about 4 Mm, a significant fraction of the starting radius. The initial radius, 7.2 Mm, proved to be the highest starting value I could use for which the orbit remained 'stable' (that is, not crashing into Kerbin or being ejected entirely) for the full 30 Ms. Any higher and the influence of the Mun was great enough to eventually eject the craft.
In the current version of KSP, Mun and Minmus have SOIs of 2.4 Mm and 2.2 Mm, respectively, so a circular orbit around Kerbin is influenced by them only if its radius is in the range 9.6 to 14.4 Mm or 44.8 to 49.2 Mm. So there are some orbits in KSP that are perfectly stable, but should not be. On the other hand, the drift of low orbits is not so significant, and not simulating that behavior does not seem to be any great loss.
The topic of Lagrangian points (L-points) comes up periodically in discussions of KSP. Lagrangian points are special points in a system consisting of a smaller body travelling in a circular orbit around a larger one (e.g., Kerbin around Kerbol, or Mun around Kerbin) where the gravitational and centrifugal forces balance in a rotating frame of reference. A craft placed in one of the points would be in equilibrium and remain stationary (in that rotating frame of reference). There are five L-points, but only two are stable equilibrium points -- L4 and L5, located approximately 60 degrees in front of and behind the smaller orbiting body.
With KSP's system of patched conics, L-points do not exist. The player can place a vessel where the L4 or L5 point ought to be, but there are no forces holding it there; it will drift away at a rate determined by how closely the player matched the correct orbit. When solving Newton's laws of motion for the vessel instead, it will be subjected to gravity from both bodies, and therefore it should be possible to fly vessels to L-points and have them stay there.
I started by placing an object 60 degrees ahead of Mun in its orbit around Kerbin (Mun's L4 point). The blue curve in Fig. 8.1 is the object's trajectory and the green circle is Mun's trajectory. (Kerbin is located at the origin.) Obviously, the object is not staying in anything like an L-point! It falls to a lower circular orbit, stays there for a few orbits until it catches up with Mun, and is then pushed into a higher circular orbit. In the figure the trajectory ends at this point, but when I ran the simulation for a longer period of time the craft stayed in the higher orbit for a few orbits, then fell back down and repeated the cycle. It seems to be a stable situation, in that the craft never crashes into Kerbin or gets ejected entirely, but it is definitely not staying at the L4 point.
I also tried placing a craft in Kerbin's L4 point. Looking at it from the scale of Kerbin's orbit, it appears that the craft is stable at the L4 point (and so I am not showing that figure, as it is just a circle). But when I plot the distance of the craft from the L4 point, as in Fig. 8.2, it becomes obvious that the craft is spiralling away from the point.
So, L-points don't seem to exist in this system either! What's going on here? I believe the problem is that I do not calculate the motions of the planetary bodies with Newton's laws of motion. Kerbin follows a conic section (a circle, in fact) with Kerbol at its center. In reality, Kerbol and Kerbin should both orbit around their center of mass (barycenter), which would be displaced slightly from Kerbol's center. This small displacement of Kerbol changes the balance of forces on the craft, and eliminates the L-points. The problem is even more stark in the case of Mun's L-point because the ratio of Kerbin's and Mun's masses is closer to one than Kerbol's and Kerbin's.
Or, it is possible that I simply set up the orbits incorrectly and missed the L-point. (Any hints in that direction would be welcome.) It may be that in this sort of system the L4 point isn't exactly 60 degrees ahead or isn't in exactly the same orbit as the planetary body.
Up to this point I have been considering only the simulation aspects of the problem, with the goal of achieving sufficient accuracy with minimum computational effort. And so I've ended up with large time steps and a high-order numerical scheme. But KSP isn't a simulation, it's a game. And the player expects to see his/her spacecraft moving smoothly through space, not sitting stationary and then jumping forward every 25 s or more. So how can simulating big stretches of time with large time steps (e.g. for map view) be reconciled with updating the position of each vessel every frame?
It turns out I'm far from the first to ask this question, and one answer is interpolation. Interpolating RK5(4)7M was addressed by Shampine in 1986. In short, with this scheme I already know the values of the variables I'm interested in (x, v) at the beginning and end of each time step, and I know their derivatives at those points as well. Shampine showed how to also calculate a fourth-order accurate value for the variables at the midpoint of the time step essentially for free, using the accelerations that were already computed as part of the RK5(4)7M scheme. That gives me five data points in each time step, which is enough to define a fourth-order (quartic) interpolating polynomial for each variable (e.g. v_x(t) = a + b*t + c*t^2 + d*t^3 + e*t^4). Evaluating such a function is very fast, at least compared to integrating a time step.
I can also take advantage of the fact that one of the variables I'm interested in, velocity, is the time derivative of the other variable, position. I don't actually need to store separate functions for both position and velocity. I just need to store the quartic interpolating function for velocity, and by integrating it I get a fifth-order interpolating function for position (e.g. x(t) = f + a*t + b*t^2/2 + c*t^3/3 + d*t^4/4 + e*t^5/5).
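Here is a sketch of one way such an interpolant can be built and evaluated, reusing the Vec3 type from the earlier sketches. The coefficient formulas come from solving the five interpolation conditions (velocity at the start, middle, and end of the step, and acceleration at the start and end) in normalized time; the structure and function names are only illustrative.

```cpp
// Quartic interpolant for velocity over one time step, in normalized time
// s = (t - t0)/h.  Position follows by integrating the velocity polynomial.
struct StepInterpolant {
    double t0, h;        // start time and size of the step
    Vec3 x0;             // position at the start of the step
    Vec3 a, b, c, d, e;  // v(s) = a + b s + c s^2 + d s^3 + e s^4
};

// Build the interpolant from the five pieces of data described in the text:
// velocity at the start, middle, and end of the step, and acceleration at the
// start and end (the midpoint velocity comes from Shampine's formula).
StepInterpolant makeInterpolant(double t0, double h, const Vec3& x0,
                                const Vec3& v0, const Vec3& vMid, const Vec3& v1,
                                const Vec3& a0, const Vec3& a1) {
    StepInterpolant p;
    p.t0 = t0; p.h = h; p.x0 = x0;
    Vec3 d0 = a0 * h, d1 = a1 * h;  // dv/ds at s = 0 and s = 1
    Vec3 A = v1 - v0 - d0;
    Vec3 B = d1 - d0;
    Vec3 C = vMid - v0 - d0 * 0.5;
    p.a = v0;
    p.b = d0;
    p.c = B - A * 5.0 + C * 16.0;
    p.d = A * 14.0 - B * 3.0 - C * 32.0;
    p.e = B * 2.0 - A * 8.0 + C * 16.0;
    return p;
}

// Evaluate velocity and position at an arbitrary time within the step.
void evaluate(const StepInterpolant& p, double t, Vec3& x, Vec3& v) {
    double s = (t - p.t0) / p.h;
    v = p.a + p.b * s + p.c * (s * s) + p.d * (s * s * s) + p.e * (s * s * s * s);
    // x(s) = x0 + h * integral of v from 0 to s (term-by-term integration)
    x = p.x0 + (p.a * s + p.b * (s * s / 2) + p.c * (s * s * s / 3)
              + p.d * (s * s * s * s / 4) + p.e * (s * s * s * s * s / 5)) * p.h;
}
```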
I performed a test of this interpolation scheme by integrating a number of time steps of an orbit around Kerbin and then calculating position and velocity at arbitrary times. I calculated the state both with the interpolation function and by integrating with RK5 from the beginning of the applicable time step. The quartic interpolation function for position was usually off by a few bits (up to 2 μm), and the fifth-order interpolation function for position was usually exactly the same as the RK5 integrated position to the limit of machine precision. The difference in interpolated and integrated velocity was also on the order of a few bits (O(10⁻¹² m/s)). This is much better accuracy than I expected. It is accurate enough that the interpolated state could be used to continue the simulation from an arbitrary time, not just to display the current state in map view or on instruments.
Using interpolation also makes tasks like finding particular points on a craft's trajectory cheaper. For example, finding the point of closest approach between two orbits will usually require a bit of hunting back and forth in time to home in on the desired point. Evaluating fourth- or fifth-order functions to get the state at each time is much cheaper than integrating, which requires finding the position of all the planetary bodies in the system multiple times so that accelerations can be calculated.
Another important consideration is how quickly the integrated trajectories can be produced. KSP currently allows time-warp up to 100,000x. Whether a player's computer can keep up with that speed depends on the number of steps that can be computed per second, the time step size, and the number of trajectories to compute. For this exercise I have tried to write reasonably efficient code, but I have not taken any special measures to optimize it. My Windows laptop has an Intel Core i7 CPU running at 1.73 GHz, and is capable of running KSP acceptably with most graphics options turned down and fairly simple rockets (I've never launched a craft with more than 100 parts). With compiler optimizations turned on and the full solar system being simulated (17 planetary bodies), this laptop can compute around 10k - 12k steps per second in my C++ implementation of these simulations. Interestingly, around 90 % of the time is spent in the routine that calculates a body's position and velocity from its Keplerian orbital elements, which is called every time the gravitational force from a body is computed. The routine contains a number of sines and cosines, which appear to be the culprits. My code uses the library functions in whatever version of glibc comes with Code::Blocks, and my understanding is that this library's math functions are known to be slow. My goal here is not to perform an exercise in optimization, so I will simply make the assumption that any computer capable of running KSP will be able to compute at least 50k steps per second with reasonable optimization of the code. I think this is a pretty conservative estimate.
The size of the time step depends on the error limit given to the RK5(4)7M scheme. In all of the studies here, I have used a very tight limit of 10 μm, which is close to the machine precision limit. I ran some more of the orbital drift simulations with the 7.2 Mm orbit, and I could increase the error limit to about 10 cm before I saw any noticeable deviation from what was shown in Fig. 7.2. Relaxing the error limit by a few orders of magnitude like this lets the time steps be several times larger. But with the tight error limit, the average step size for a low orbit around Kerbin is about 10 s, for a 7 Mm orbit it is around 100 s, and for a 47 Mm orbit (the orbit of Minmus) it is around 200 s. For craft in interplanetary space the time step is much larger; up to many millions of seconds for vessels outside of Jool's orbit. For a time step of 10 s and 50k steps calculated per second, only five trajectories could be calculated at a timewarp of 100,000x. For a time step of 100 s, fifty objects could be tracked at that speed. (This is single-threaded, and assumes the CPU isn't doing anything else. If implemented in the game, trajectory calculation could perhaps be spun out into its own thread.) These estimates are based on conservative assumptions, and so I believe any computer capable of running KSP could also calculate an acceptably large number of trajectories at 100,000x warp.
This concludes my little study of using numerical integration to calculate Newtonian trajectories in the solar system of KSP.
In summary, putting all this together, here is how Newtonian motion of objects could be implemented in KSP:
The perturbation of low equatorial orbits around Kerbin due to other bodies is negligible, and I assume that this holds true for other planets and other orbital inclinations. Therefore there is no big gain in realism from calculating Newtonian trajectories in these cases. I propose that each body have a much-reduced SOI, within which trajectories continue to be calculated using patched conics. The SOI radius should be somewhere near, but below, the point at which other bodies can perturb an object clear out of orbit. Players would recognize that orbits entirely within a planet's SOI are 'safe', in that objects in these orbits will always follow the same path without variation. For example, Kerbin's SOI could be reduced to somewhere between 3.5 Mm and 7 Mm. This is high enough that satellites or space stations placed in synchronous orbit would be safe. The SOIs would be small enough to prevent any overlap (unlike the current situation, where every planet's SOI is carved out of Kerbol's SOI, and each moon's SOI is carved out of its planet's SOI).
The region outside of any body's SOI, and this includes all interplanetary space, would be no-man's-land, in which Newtonian mechanics apply. In this region trajectories would be calculated using a scheme such as RK5(4)7M that includes adaptive time steps and interpolation within each time step. Limiting the application of Newtonian trajectories to no-man's-land ensures that reasonably large time steps can be used to calculate the trajectories, and that 100,000x timewarp will still be attainable even for slower computers and games that have many vessels and pieces of debris in space.
Trajectories for each vessel are calculated ahead of time out to some limit. The limit would be a certain number of SOI boundary crossings (just as the map view currently shows a certain number of conic section patches for the player's vessel) or, if no SOI boundaries are encountered, a certain number of time steps or a certain number of orbits around a body. The time steps, with interpolation information, are saved, so that as time progresses in the game the position of any craft can be interpolated for the current time. When a craft passes an SOI boundary or passes the end of an RK time step, the game would look ahead as far as the trajectory has already been calculated, and calculate the next step or set of steps. I have found single orbits to contain between ~200 steps (low Kerbin orbit) and ~1000 steps (a circular orbit around Kerbol out at Eeloo's apoapsis). The total number of steps that would be stored for each vessel would be on the order of a couple of thousand, at most, so the storage requirements would not be excessive.
This approach of precalculating trajectories and using interpolation enables the trajectories to be shown in map view without having to do a large number of trajectory calculations. It also ensures that vessels will follow the path shown in map view, with no chance of numerical drift causing deviation away from it.
However, whenever a vessel applies any thrust (via rocket motors, stack separators, etc.), that trajectory is no longer valid and the whole thing needs to be recalculated. Luckily, the player can only control one vessel at a time, so there will usually be no more than one trajectory that requires continual recalculation. While the RK5 scheme is pretty efficient, it usually won't be possible to project the trajectory of the player's vessel out to its normal limit in map view every frame while the craft is accelerating. A balance will have to be found between how current the trajectory the player sees is (i.e., how frequently the old trajectory is thrown out) and how far out the trajectory is projected. I think this would be a cool game mechanic. Players trying to obtain a precise trajectory (say, hitting a desired periapsis for an encounter at Duna starting from Kerbin) will have to thrust, pause, thrust, pause, etc. while the trajectory calculation catches up. Additionally, if something happens that generates a large number of new independent vessels (i.e., the player's vessel blows up), the trajectories of all of those parts shown in map view will grow outwards from their current locations as all the trajectories get calculated.
As far as I can tell, there is no reason why this scheme could not be implemented in KSP. Whether it would be worthwhile to do so is another question. Given that KSP is a game, the current patched conics scheme may well be good enough. Realism for the sake of realism is probably not a sufficient reason to change. I'm not sure I've seen anything in this little study that provides a compelling gameplay reason for Newtonian mechanics, except that the more complex trajectories are prettier and, I think, more interesting shapes.
Comments
You have to realize these guys have more managers than coders; they're brogrammers at their finest. They're more interested in upselling gravatars like they got taken over by the crazy frog or something (kerbalizer, I think?) than making the game less broken, let alone good!
Without modders and add-on packs I doubt anyone would play KSP as more than a novelty for a few hours. Without MechJeb it's a banality to operate and poorly designed. Without part packs it's limited and frustratingly dull, with no way to interact with the environment, two capsules after three years, etc. And the crashing, oh god the crashing. You blink and it crashes. It can't even run on more than one core and bricks itself if you even think too hard about installing more than the American and Soviet packs.
With how heinously broken it is, and with the fact that sales have dropped sharply since they're now asking four times as much for pretty much the same broken 1/10th of a video game, I do not think they will ever take something this big seriously.
With the effort (and intelligence) you've shown, I strongly suggest finding a few coders in the KSP black hangars -- where the 30-40% of KSP players who didn't flatter Squad enough got banned for things as insecurity-driven as filing too many bug reports -- and having a look at some of the rebuilt physics engines other developers are using that are pure science based and everything KSP promised to be!
However, I do agree that the Runge-Kutta methods are a better choice, also because they allow you to choose different time steps.
But you also have to remember that this is a game, and that not all players will like having to deal with orbital perturbations and instability. And even if this were implemented, some might also start saying that celestial bodies should have non-spherical gravitational fields (for example, the Earth has a bigger diameter at the equator than between the poles), which also cause orbital perturbations in low orbit. I haven't played Orbiter, but I have heard that it has n-body physics (not sure if it has non-spherical gravitational fields).
In fact, the Verlet method has only second-order global accuracy -- the cited 4th-order accuracy is its local accuracy. (And it has only first-order accuracy if there are velocity-dependent forces, although that isn't a concern here.) The Velocity Verlet method that I used is the same thing, but rearranged to avoid the difference in positions found on the right-hand side of the equation in the original Verlet method. The original Verlet method is said to have trouble with numerical errors due to that difference when the position is large compared to the change in position between time steps (which certainly happens here), so I did not try using it.
I do think that the implementation I described here would address most challenges with gameplay and keeping the game accessible to new players, but the only proof would be to actually implement it. I would like to do so in a mod, but just don't have time for it at present.
Regarding nonspherical bodies, I think that would actually be very cool. We could have sun-synchronous orbits! I wonder if it would be possible to do so by representing each planet as a small ring of several discrete masses (which, added together, equal the actual mass of the planet). That's another thing I'd like to try but just don't have time for.