# Optimization

With the knowledge of movement formulas, we can formally define optimization problems in the context of parkour.

There are two possible approaches (problems):

- Minimum Time: Given a set of obstacles, what is the minimum number of ticks to reach a goal?
- Maximum Distance: Given a number of ticks and a set of obstacles, what is the furthest distance (X/Z) the player can jump?

Both problems are somewhat equivalent: one can use solutions of the other to compute a solution for itself by dichotomy.
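For instance, assuming a MaxDistance oracle that is non-decreasing in the tick budget (a hypothetical `max_distance(T)` function), MinTime can be recovered by dichotomy:

```python
def min_time_via_dichotomy(max_distance, goal, t_max=200):
    """Binary search for the smallest T with max_distance(T) >= goal.
    Assumes max_distance is non-decreasing in T (hypothetical oracle)."""
    lo, hi = 0, t_max
    while lo < hi:
        mid = (lo + hi) // 2
        if max_distance(mid) >= goal:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Each probe costs one MaxDistance solve, so this direction of the reduction takes O(log t_max) oracle calls.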

The question is to determine which problem is easier to solve (depending on the context).

Formal definitions:

- The **variables** are the player's direction on each tick (we'll assume the player is always using 45° sprintjumps)
- The **objective function** is the value we want to optimize (time or distance)
- The **constraints** are obstacles: areas of momentum, walls.

A few observations:

- There are a finite number of inputs and angles to consider, therefore an optimal strategy exists for a given set of constraints.
- The existence of the momentum threshold means the objective function is not continuous (even if the angles formed a dense interval).

Limitations:

This optimization problem is difficult in general. To keep the complexity manageable, and without losing too much generality, we impose these restrictions:

- We'll assume the player never collides with walls (this would only matter in the case of specific squeeze jumps).

- We'll ignore Half Angles for convenience (except when we consider linear movement). It is unrealistic to assume the player could use multiple half angles consecutively (though it may be possible by tweaking the sensitivity).

## MaxDistance - Numerical Approximation

We'll assume the interval of angles is dense (not just 65536 different angles), and that sin() and cos() are continuous.

A natural approach is to reformulate MaxDistance as mathematical programming:

- The objective function is X or Z distance
- The constraints are inequalities modelling obstacles

If the objective function is differentiable, we can use the method of Lagrange Multipliers.

The problem in our case is that the objective function isn't even continuous, because of the momentum threshold, which cancels the player's speed when it's within a certain range. To avoid this problem, we approximate the momentum threshold with a differentiable function.

This way, we can use the Lagrange Multiplier method.

The choice of the smooth threshold function is important: it must be close enough to the original, and easy to differentiate through the chain rule. We must also be able to sharpen the slope around 0.005 at will. Such a candidate exists, though its derivative is not exactly simple.
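As an illustration (an assumption on our part, not necessarily the candidate referred to above), a sigmoid gate centred on the threshold gives a smooth approximation whose slope around 0.005 is controlled by a sharpness parameter k:

```python
import math

THRESHOLD = 0.005  # speeds with magnitude below this are cancelled in-game

def smooth_threshold(v, k=5000.0):
    """Smooth approximation of the momentum threshold: multiplies v by a
    sigmoid gate centred at |v| = THRESHOLD. Increasing k sharpens the
    transition, making the function closer to the true hard cutoff."""
    gate = 1.0 / (1.0 + math.exp(-k * (abs(v) - THRESHOLD)))
    return v * gate
```

Speeds well above the threshold pass through almost unchanged, while speeds inside the range are squashed toward zero; the product rule and chain rule give its (somewhat unwieldy) derivative.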

The variables of our problem are the player's directions θ_0, θ_1, …, θ_T on each tick from 0 to T.

Let's define the player's X speed recursively: on each tick, the previous X speed is multiplied by the momentum factor, and the acceleration contributes a term proportional to cos() of that tick's angle.

Vz has a similar definition, with sin() instead of cos(). V is the initial speed.

The player's position on tick t is simply the sum of their velocities up to tick t.
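A minimal sketch of these two definitions, where M and A are assumed stand-ins for the per-tick momentum multiplier and acceleration taken from the movement formulas:

```python
import math

# Placeholder constants; the real values come from the movement formulas.
M, A = 0.91, 0.1

def x_velocities(angles, v0):
    """V_x(0) = v0*cos(theta_0); V_x(t) = M*V_x(t-1) + A*cos(theta_t)."""
    vx = v0 * math.cos(angles[0])
    out = [vx]
    for theta in angles[1:]:
        vx = M * vx + A * math.cos(theta)
        out.append(vx)
    return out

def x_positions(angles, v0):
    """Position on tick t is the sum of the velocities up to tick t."""
    xs, x = [], 0.0
    for vx in x_velocities(angles, v0):
        x += vx
        xs.append(x)
    return xs
```

The Z coordinate is obtained the same way with sin() in place of cos().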

We must make some assumptions about which tick the player crosses each corner (and find the maximum across all such assumptions).

Thus the optimization problem is presented as: maximize the player's X (or Z) position on tick T, subject to the corner constraints.

The total duration T and the corner-crossing ticks may also be treated as variables to optimize, but being discrete they fall outside the scope of the Lagrange Multiplier method.

We must choose the values of x_i and z_i carefully: they represent the corners the player must avoid.

This formulation can solve a wide range of problems and not just neos.

**Greedy method**

Due to chain rule differentiation, the gradients become quite difficult to handle (unless we somehow find a "simpler" smooth approximation for the threshold).

We could instead use a greedy approach: initially, assume the momentum threshold is never activated. Once we have an optimal solution under that assumption, we check whether the player's velocity stays outside the threshold range on every tick. If not, we identify which tick is problematic, and choose the better of two options:

- Optimize again with the threshold inactive on that tick, adding the constraint that the specified speed = ±0.005
- Optimize again with the threshold active on that tick (its speed is zeroed), with the constraint that the specified speed = ±0.005

Repeat until there is no conflict with the threshold (keeping track of which ticks are thresholded, as they may need to be activated or deactivated).

This method does not guarantee an optimal solution, but it is reasonably efficient, both in running time and in approximation quality.
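The outer loop of the greedy method might be sketched as follows; `optimize` and `simulate` are hypothetical stand-ins for the constrained solver and the movement simulation, and for brevity only the "threshold active" branch is tried rather than the better of both:

```python
THRESHOLD = 0.005

def greedy_optimize(optimize, simulate):
    """optimize(thresholded) -> candidate angle sequence;
    simulate(angles) -> per-tick speeds. Both are hypothetical hooks."""
    thresholded = set()  # ticks where the threshold is assumed active
    while True:
        angles = optimize(thresholded)
        speeds = simulate(angles)
        bad = [t for t, s in enumerate(speeds)
               if 0 < abs(s) < THRESHOLD and t not in thresholded]
        if not bad:
            return angles
        thresholded.add(bad[0])  # re-solve with the threshold active there
```

A fuller version would also try the "threshold inactive, speed pinned to ±0.005" branch at each conflict and keep whichever re-optimization scores better.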

## MinTime - Branch and Bound

A* is a candidate algorithm for this problem: the graph is generated implicitly by the player's position and velocity. Edges between two nodes correspond to the player's direction; they cost 1 tick, therefore a "shortest path" corresponds to an optimal solution in terms of time.

The major flaw of A* is the amount of memory it uses when we have to consider such a large graph. Instead, we use a Branch and Bound approach.

A heuristic is a function that gives an approximate solution to the problem (or a solution to a simpler problem). If possible, we are looking for an admissible heuristic (i.e. one that provides a lower bound for the MinTime problem), since that property guarantees the optimality of the search algorithm.

Having a good heuristic is essential for "guiding" the algorithm towards computing a solution in a reasonable amount of time.

**Idea 1:** ignore all walls [easy to compute, but not very helpful]

**Idea 2:** compute the shortest path from the start to the goal (going around walls), calculate the total distance, and find the amount of time it would take to travel in a straight line starting from the current speed (scalar product). [good compromise between complexity and accuracy, can pre-compute the shortest-path tree]

To include Z-facing, cut a diagonal through all Z-facing corners.
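Idea 2's lower bound can be sketched as follows, with M, A and v_max as assumed stand-ins for the real movement constants:

```python
def time_lower_bound(distance, v, M=0.91, A=0.1, v_max=1.6):
    """Ticks needed to cover `distance` in a straight line, accelerating
    each tick under the assumed recurrence v = min(v_max, M*v + A).
    Straight-line travel at best-case acceleration makes this a lower
    bound on the true travel time along the shortest path."""
    ticks = 0
    while distance > 0:
        v = min(v_max, M * v + A)
        distance -= v
        ticks += 1
    return ticks
```

With the shortest-path tree pre-computed, `distance` is just a table lookup per node, so evaluating the heuristic stays cheap inside the search.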

Greedy optimization: instead of considering the entire range of angles (65536), we can try to find a solution over fewer angles (e.g. 256)

### Algorithm outline:

```
Initialization:
    compute the shortest path tree
    establish GOAL, PRECISION (default = 16, recommended 8),
              MIN_ANGLE (default = 0), MAX_ANGLE (default = 65535)

MinTime(position, velocity):
    if position is within GOAL, return 0
    if position is within WALLS, return INFINITY
    best = INFINITY
    establish a list of angles to consider
    for each angle in that list:
        calculate the heuristic given that angle
        if the lower bound is greater than the current best time, skip
        else calculate MinTime(position + new_velocity, new_velocity) and compare with best
    return best + 1
```

The problem with this algorithm is that it does not allow us to treat the initial position as a variable which we could optimize. We could include this variable as the first "branch" of the algorithm, before optimizing velocity.

## Linear jump optimization

The simplest problem to optimize is the linear jump: no constraints, unlimited momentum, just a starting point and a goal.

For convenience, let's assume momentum threshold is not involved (it would only concern a small portion of jumps).

We can easily calculate the maximum speed the player can get with unlimited momentum, and then calculate the maximum jump distance.
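Under the assumed straight-line recurrence v(t) = M·v(t-1) + A (placeholder constants again, not the real movement values), the maximum speed is the fixed point of that recurrence, and the jump distance follows by summing velocities over the airborne ticks:

```python
# Placeholder constants standing in for the real movement formulas.
M, A = 0.91, 0.1

def max_speed():
    """Fixed point of v = M*v + A, i.e. A / (1 - M): the speed the
    player converges to with unlimited momentum."""
    return A / (1.0 - M)

def max_jump_distance(ticks=12, v0=None):
    """Distance covered in `ticks` airborne ticks, starting from v0
    (defaults to the maximum speed, where the recurrence is stationary)."""
    v = max_speed() if v0 is None else v0
    d = 0.0
    for _ in range(ticks):
        v = M * v + A
        d += v
    return d
```

Starting at the fixed point, the speed never changes, so the distance is simply ticks × max_speed(); starting slower, the sum reflects the geometric approach to that limit.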

## Neo optimization

A good optimization principle for neos is that the player's rotation should be monotonic (i.e. the player only ever turns in one direction, e.g. left).

We can adapt the MinTime algorithm to reflect that.
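One simple way to encode the restriction (a sketch, assuming the 0–65535 angle convention used above) is to prune the candidate-angle list at each node so that only angles continuing the current turning direction are explored:

```python
def candidate_angles(prev_angle, step=256, max_angle=65535):
    """Under the monotone-rotation restriction, only consider angles that
    keep turning in the same direction (here: non-decreasing), sampled
    every `step` units to mirror the greedy angle reduction."""
    return range(prev_angle, max_angle + 1, step)
```

This shrinks the branching factor as the jump progresses, which compounds with the heuristic pruning already performed by the branch-and-bound search.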

## Parkour Bot

Our ultimate goal is to program a bot that can analyse and perform jumps, with minimum guidance from a human.

Jumps should be doable individually, and not require help from a previous jump (chained neos are out of the question).

The MinTime approach (with target time) definitely seems to be the way to go.