homework 6, version 0

👀 Reading hidden code
227 μs

Submission by: Jazzy Doe (jazz@mit.edu)

👀 Reading hidden code
12.6 ms


Homework 6: Epidemic modeling III

18.S191, fall 2020

This notebook contains built-in, live answer checks! In some exercises you will see a coloured box, which runs a test case on your code, and provides feedback based on the result. Simply edit the code, run it, and the check runs again.

For MIT students: there will also be some additional (secret) test cases that will be run as part of the grading process, and we will look at your notebook and write comments.

Feel free to ask questions!

👀 Reading hidden code
602 μs
👀 Reading hidden code
# edit the code below to set your name and kerberos ID (i.e. email without @mit.edu)

student = (name = "Jazzy Doe", kerberos_id = "jazz")

# you might need to wait until all other cells in this notebook have completed running.
# scroll around the page to see what's up
23.5 μs

Let's create a package environment:

👀 Reading hidden code
233 μs
begin
    using Plots
    gr()
    using PlutoUI
end
👀 Reading hidden code
66.2 s

We began this module with data on the COVID-19 epidemic, but then looked at mathematical models. How can we make the connection between data and models?

Models have parameters, such as the rate of recovery from infection. Where do the parameter values come from? Ideally we would like to extract them from data. The goal of this homework is to do this by fitting a model to data.

For simplicity we will use data that was generated from the spatial model in Homework 5, rather than real-world data, and we will fit the simplest SIR model. But the same ideas apply more generally.

There are many ways to fit a function to data, but all must involve some form of optimization, usually minimization of a particular function, a loss function; this is the basis of the vast field of machine learning.

The loss function is a function of the model parameters; it measures how far the model output is from the data, for the given values of the parameters.

We emphasise that this material is pedagogical; there is no suggestion that these specific techniques should be used for actual calculations; rather, it is the underlying ideas that are important.

👀 Reading hidden code
862 μs

Exercise 1: Calculus without calculus

👀 Reading hidden code
262 μs

Before we jump into simulating the SIR equations, let's experiment with a simple 1D function. In calculus, we learn techniques for differentiating and integrating symbolic equations, e.g. $\frac{d}{dx} x^n = n x^{n-1}$. But in real applications, it is often impossible to apply these techniques, either because the problem is too complicated to solve symbolically, or because our problem has no symbolic expression, like when working with experimental results.

Instead, we use ✨ computers ✨ to approximate derivatives and integrals. Instead of applying rules to symbolic expressions, we use much simpler strategies that only use the output values of our function.

As a first example, we will approximate the derivative of a function. Our method is inspired by the analytical definition of the derivative!

$$f'(a) := \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}.$$

The finite difference method simply fixes a small value for $h$, say $h = 10^{-3}$, and then approximates the derivative as:

$$f'(a) \approx \frac{f(a + h) - f(a)}{h}.$$
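For example, approximating the derivative of $\sin$ at $0$ (whose exact value is $\cos(0) = 1$) with $h = 10^{-3}$ looks like this; a standalone numeric sketch, separate from the exercise code below:

let
    h = 1e-3
    (sin(0 + h) - sin(0)) / h    # ≈ 0.99999983; the exact derivative is cos(0) = 1
end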

👀 Reading hidden code
562 μs

Exercise 1.1 - tangent line

👉 Write a function finite_difference_slope that takes a function f and numbers a and h. It returns the slope $f'(a)$, approximated using the finite difference formula above.

👀 Reading hidden code
323 μs
finite_difference_slope (generic function with 2 methods)
function finite_difference_slope(f::Function, a, h=1e-3)
    (f(a+h) - f(a)) / h
end
👀 Reading hidden code
758 μs
# function finite_difference_slope(f::Function, a, h=1e-3)
# return missing
# end
👀 Reading hidden code
16.8 μs
0.2
finite_difference_slope(sqrt, 4.0, 5.0)
👀 Reading hidden code
4.2 ms

Got it!

Well done!

👀 Reading hidden code
8.0 ms

👉 Write a function tangent_line that takes the same arguments f, a and h, but it returns a function. This function ($\mathbb{R} \to \mathbb{R}$) is the tangent line with slope $f'(a)$ (computed using finite_difference_slope) that passes through $(a, f(a))$.

👀 Reading hidden code
375 μs

Hint

Remember that functions are objects! For example, here is a function that returns the square root function:

function the_square_root_function()
	f = x -> sqrt(x)
	return f
end
👀 Reading hidden code
44.5 ms
tangent_line (generic function with 1 method)
function tangent_line(f, a, h)
    slope = finite_difference_slope(f, a, h)
    value = f(a)
    x -> (x - a)*slope + value
end
👀 Reading hidden code
1.1 ms
# function tangent_line(f, a, h)
# return missing
# end
👀 Reading hidden code
18.7 μs

Got it!

Splendid!

👀 Reading hidden code
3.8 ms
👀 Reading hidden code
43.9 ms
👀 Reading hidden code
881 ms
# this is our test function
wavy(x) = .1x^3 - 1.6x^2 + 7x - 3;
👀 Reading hidden code
625 μs

The slider below controls $h$ using a log scale. In the (mathematical) definition of the derivative, we take $\lim_{h \to 0}$. This corresponds to moving the slider to the left.

Notice that, as you decrease h, the tangent line gets more accurate, but what happens if you make h too small?
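The usual culprit is floating-point cancellation: when $h$ gets close to machine precision, the difference $f(a+h) - f(a)$ loses nearly all of its significant digits. A small standalone illustration (the exact digits may vary by machine):

let
    fd(h) = (sqrt(1 + h) - sqrt(1)) / h    # forward difference for sqrt at a = 1; the exact slope is 0.5
    (fd(1e-3), fd(1e-12), fd(1e-16))       # the last one is 0.0, because 1 + 1e-16 == 1 in Float64
end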

👀 Reading hidden code
340 μs
👀 Reading hidden code
355 ms
0.31622776601683794
👀 Reading hidden code
25.5 μs
👀 Reading hidden code
15.1 μs

Exercise 1.2 - antiderivative

In the finite differences method, we approximated the derivative of a function:

$$f'(a) \approx \frac{f(a + h) - f(a)}{h}$$

We can do something very similar to approximate the 'antiderivative' of a function. Finding the antiderivative means that we use the slope $f'$ to compute $f$ numerically!

This antiderivative problem is illustrated below. The only information that we have is the slope $f'(a)$ at any point $a \in \mathbb{R}$, and we have one initial value, $f(1)$.

👀 Reading hidden code
757 μs
# in this exercise, only the derivative is given
wavy_deriv(x) = .3x^2 - 3.2x + 7;
👀 Reading hidden code
510 μs
👀 Reading hidden code
742 μs
👀 Reading hidden code
186 ms

Using only this information, we want to reconstruct f.

By rearranging the equation above, we get the Euler method:

$$f(a + h) \approx h f'(a) + f(a)$$

Using this formula, we only need to know the value $f(a)$ and the slope $f'(a)$ of a function at $a$ to get the value at $a + h$. Doing this repeatedly can give us the value at $a + 2h$, at $a + 3h$, etc., all from one initial value $f(a)$.
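As a quick sanity check of this formula (a standalone numeric sketch, separate from the function you will write below), take $f'(x) = 2x$ with $f(0) = 0$ and $h = 0.5$:

let
    h = 0.5
    f0 = 0.0                # the initial value f(0) = 0
    f1 = f0 + h * 2*0.0     # approximates f(0.5); the exact value is 0.25
    f2 = f1 + h * 2*0.5     # approximates f(1.0); the exact value is 1.0
    (f1, f2)                # (0.0, 0.5): coarse, but it improves as h shrinks
end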

👉 Write a function euler_integrate_step that applies this formula to a known derivative function $f'$ at $a$, with step size $h$ and the initial value $f(a)$. It returns the next value, $f(a + h)$.

👀 Reading hidden code
636 μs
euler_integrate_step (generic function with 1 method)
function euler_integrate_step(fprime::Function, fa::Number,
                              a::Number, h::Number)
    fa + h*fprime(a + h)
end
👀 Reading hidden code
473 μs
# function euler_integrate_step(fprime::Function, fa::Number,
# a::Number, h::Number)
# return missing
# end
👀 Reading hidden code
15.4 μs

Got it!

Fantastic!

👀 Reading hidden code
33.3 ms

👉 Write a function euler_integrate that takes a known derivative function $f'$, the initial value $f(a)$ and a range T with a == first(T) and h == step(T). It applies the function euler_integrate_step repeatedly, once per entry in T, to produce the sequence of values $f(a + h)$, $f(a + 2h)$, etc.

👀 Reading hidden code
310 μs
euler_integrate (generic function with 1 method)
function euler_integrate(fprime::Function, fa::Number,
                         T::AbstractRange)
    a0 = T[1]
    h = step(T)
    accumulate(T, init=fa) do prev, a
        euler_integrate_step(fprime, prev, a, h)
    end
end
👀 Reading hidden code
1.2 ms
# function euler_integrate(fprime::Function, fa::Number,
# T::AbstractRange)
# a0 = T[1]
# h = step(T)
# return missing
# end
👀 Reading hidden code
30.1 μs

Let's try it out on $f'(x) = 3x^2$ and T ranging from $0$ to $10$.

We already know the analytical solution $f(x) = x^3$, so the result should be an array going from (approximately) 0.0 to 1000.0.

👀 Reading hidden code
299 μs
euler_test = let
    fprime(x) = 3x^2
    T = 0 : 0.1 : 10
    euler_integrate(fprime, 0, T)
end
👀 Reading hidden code
113 ms

Got it!

Fantastic!

👀 Reading hidden code
14.6 ms
👀 Reading hidden code
46.2 ms
👀 Reading hidden code
424 ms

You see that our numerical antiderivative is not very accurate, but we can get a smaller error by choosing a smaller step size. Try it out!

There are also alternative integration methods that are more accurate with the same step size. Some methods also use the second derivative, other methods use multiple steps at once, etc.! This is the study of Numerical Methods.
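For example, a midpoint-rule variant evaluates the slope halfway through each step, which is typically more accurate than the forward Euler step for the same $h$. The sketch below only illustrates that idea; the name midpoint_integrate_step is ours and not part of the assignment:

function midpoint_integrate_step(fprime::Function, fa::Number, a::Number, h::Number)
    # evaluate the slope at the midpoint of the interval instead of at an endpoint
    fa + h * fprime(a + h/2)
end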

👀 Reading hidden code
272 μs

Exercise 2: Simulating the SIR differential equations

Recall from the lectures that the ordinary differential equations (ODEs) for the SIR model are as follows:

$$\begin{aligned}
\dot{s} &= -\beta s i \\
\dot{i} &= +\beta s i - \gamma i \\
\dot{r} &= +\gamma i
\end{aligned}$$

where $\dot{s} := \frac{ds}{dt}$ is the derivative of $s$ with respect to time. Recall that $s$ denotes the proportion (fraction) of the population that is susceptible, a number between 0 and 1.

We will use the simplest possible method to simulate these, namely the Euler method. The Euler method is not always a good method to solve ODEs accurately, but for our purposes it is good enough.

In the previous exercise, we introduced the Euler method for a 1D function, which you can see as an ODE that only depends on time. For the SIR equations, we have an ODE that only depends on the previous value, not on time, and we have 3 equations instead of 1.

The solution is quite simple: we apply the Euler method to each of the differential equations within a single time step to get new values for each of $s$, $i$ and $r$ at the end of the time step, in terms of the values at the start of the time step. The Euler-discretised equations are:

$$\begin{aligned}
s(t+h) &= s(t) - h\,\beta s(t)\, i(t) \\
i(t+h) &= i(t) + h\,(\beta s(t)\, i(t) - \gamma i(t)) \\
r(t+h) &= r(t) + h\,\gamma i(t)
\end{aligned}$$
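To make the update concrete, here is one step written out by hand (a standalone sketch using β = 0.3, γ = 0.15, h = 0.1 and the same starting values used later in this notebook):

let
    β, γ, h = 0.3, 0.15, 0.1
    s, i, r = 0.99, 0.01, 0.00
    s_new = s - h * β*s*i            # 0.99 - 0.1 * 0.00297  ≈ 0.989703
    i_new = i + h * (β*s*i - γ*i)    # 0.01 + 0.1 * 0.00147  ≈ 0.010147
    r_new = r + h * γ*i              # 0.00 + 0.1 * 0.0015   = 0.00015
    (s_new, i_new, r_new)
end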

👉 Implement a function euler_SIR_step(β, γ, sir_0, h) that performs a single Euler step for these equations with the given parameter values and initial values, with a step size h.

sir_0 is a 3-element vector, and you should return a new 3-element vector with the values after the timestep.

👀 Reading hidden code
948 μs
euler_SIR_step (generic function with 1 method)
function euler_SIR_step(β, γ, sir_0::Vector, h::Number)
    s, i, r = sir_0
    return [
        s - h * β*s*i,
        i + h * (β*s*i - γ*i),
        r + h * γ*i,
    ]
end
👀 Reading hidden code
975 μs
# function euler_SIR_step(β, γ, sir_0::Vector, h::Number)
# s, i, r = sir_0
# return [
# missing,
# missing,
# missing,
# ]
# end
👀 Reading hidden code
15.6 μs
euler_SIR_step(0.1, 0.05,
[0.99, 0.01, 0.00],
0.1)
👀 Reading hidden code
16.3 μs

👉 Implement a function euler_SIR(β, γ, sir_0, T) that applies the previously defined function over a time range T.

You should return a vector of vectors: a 3-element vector for each point in time.

👀 Reading hidden code
266 μs
euler_SIR (generic function with 1 method)
function euler_SIR(β, γ, sir_0::Vector, T::AbstractRange)
    # T is a range, you get the step size and number of steps like so:
    h = step(T)
    return accumulate(T; init=sir_0) do sir, _
        euler_SIR_step(β, γ, sir, h)
    end
end
👀 Reading hidden code
1.2 ms
# function euler_SIR(β, γ, sir_0::Vector, T::AbstractRange)
# # T is a range, you get the step size and number of steps like so:
# h = step(T)
# num_steps = length(T)
# return missing
# end
👀 Reading hidden code
15.9 μs
0.0:0.1:60.0
sir_T = 0 : 0.1 : 60.0
👀 Reading hidden code
16.4 μs
sir_results = euler_SIR(0.3, 0.15,
[0.99, 0.01, 0.00],
sir_T)
👀 Reading hidden code
40.2 ms

Let's plot s, i and r as a function of time.

👀 Reading hidden code
225 μs
👀 Reading hidden code
2.8 s
plot_sir! (generic function with 1 method)
👀 Reading hidden code
2.3 ms

👉 Do you see an epidemic outbreak (i.e. a rapid growth in number of infected individuals, followed by a decline)? What happens after a long time? Does everybody get infected?

👀 Reading hidden code
221 μs

blabla

default_SIR_parameters_observation = md"""
blabla
"""
👀 Reading hidden code
234 μs

👉 Make an interactive visualization in which you vary β and γ. What relation should β and γ have for an epidemic outbreak to occur?
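One possible way to set this up (a sketch only: the names interactive_β and interactive_γ are ours, and in Pluto each @bind goes in its own cell):

@bind interactive_β Slider(0.0:0.01:1.0; default=0.3, show_value=true)

@bind interactive_γ Slider(0.0:0.01:1.0; default=0.15, show_value=true)

let
    # re-run the simulation with the slider values and plot s, i, r over time
    results = euler_SIR(interactive_β, interactive_γ, [0.99, 0.01, 0.00], sir_T)
    plot(sir_T,
        hcat(getindex.(results, 1), getindex.(results, 2), getindex.(results, 3));
        label = ["s" "i" "r"], xlabel = "time")
end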

👀 Reading hidden code
233 μs

👀 Reading hidden code
81.0 μs

Exercise 3: Numerical gradient

For fitting we need optimization, and for optimization we will use derivatives (rates of change). In Exercise 1, we wrote a function finite_difference_slope(f, a) to approximate $f'(a)$. In this exercise we will write a function to compute partial derivatives.

👀 Reading hidden code
426 μs

Exercise 3.1

👉 Write functions ∂x(f, a, b) and ∂y(f, a, b) that calculate the partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ at $(a, b)$ of a function $f: \mathbb{R}^2 \to \mathbb{R}$ (i.e. a function that takes two real numbers and returns one real).

Recall that $\frac{\partial f}{\partial x}$ is the derivative of the single-variable function $g(x) := f(x, b)$ obtained by fixing the value of $y$ to $b$.

You should use anonymous functions for this. These have the form x -> x^2, meaning "the function that sends $x$ to $x^2$".

👀 Reading hidden code
514 μs
∂x (generic function with 1 method)
function ∂x(f::Function, a, b)
    directional = h -> f(a + h, b)
    finite_difference_slope(directional, 0)
end
👀 Reading hidden code
958 μs
# function ∂x(f::Function, a, b)
# return missing
# end
👀 Reading hidden code
15.3 μs
42.00699999999813
∂x(
(x, y) -> 7x^2 + y,
3, 7
)
👀 Reading hidden code
12.7 ms

Got it!

Awesome!

👀 Reading hidden code
188 μs
∂y (generic function with 1 method)
function ∂y(f::Function, a, b)
    directional = h -> f(a, b + h)
    finite_difference_slope(directional, 0)
end
👀 Reading hidden code
979 μs
# function ∂y(f::Function, a, b)
# return missing
# end
👀 Reading hidden code
15.8 μs
1.0000000000047748
∂y(
(x, y) -> 7x^2 + y,
3, 7
)
👀 Reading hidden code
5.3 ms

Got it!

Great! 🎉

👀 Reading hidden code
149 μs

Exercise 3.2

👉 Write a function gradient(f, a, b) that calculates the gradient of a function f at the point $(a, b)$, given by the vector $\nabla f(a, b) := \left( \frac{\partial f}{\partial x}(a, b), \frac{\partial f}{\partial y}(a, b) \right)$.

👀 Reading hidden code
344 μs
gradient (generic function with 1 method)
function gradient(f::Function, a, b)
    [∂x(f, a, b), ∂y(f, a, b)]
end
👀 Reading hidden code
469 μs
# function gradient(f::Function, a, b)
# return missing
# end
👀 Reading hidden code
14.4 μs
gradient(
(x, y) -> 7x^2 + y,
3, 7
)
👀 Reading hidden code
17.6 ms

Got it!

Good job!

👀 Reading hidden code
165 μs

Exercise 4: Minimisation using gradient descent

In this exercise we will use gradient descent to find local minima of (smooth enough) functions.

To do so we will think of a function as a hill. To find a minimum we should "roll down the hill".

Exercise 4.1

We want to minimize a 1D function, i.e. a function $f: \mathbb{R} \to \mathbb{R}$. To do so we notice that the derivative tells us the direction in which the function increases. Positive slope means that the minimum is to the left, negative slope means it is to the right. So our gradient descent method is to take steps in the opposite direction, of a small size $\eta \cdot f'(x_0)$.
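For example, with $f(x) = x^2$, $x_0 = 5$ and $\eta = 0.01$ (the default step size used below), the finite-difference slope is approximately $10$, so a single step moves to roughly $5 - 0.01 \cdot 10 = 4.9$; this matches the test output a few cells further down.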

👉 Write a function gradient_descent_1d_step(f, x0) that performs a single gradient descent step, from the point x0 and using your function finite_difference_slope to approximate the derivative. The result should be the next guess for $x$.

👀 Reading hidden code
772 μs
gradient_descent_1d_step (generic function with 1 method)
function gradient_descent_1d_step(f, x0; η=0.01)
    slope = finite_difference_slope(f, x0)
    x0 - η*slope
end
👀 Reading hidden code
1.4 ms
# function gradient_descent_1d_step(f, x0; η=0.01)
# return missing
# end
👀 Reading hidden code
15.2 μs
4.899989999999974
let
    f = x -> x^2
    # the minimum is at 0, so we should take a small step to the left
    gradient_descent_1d_step(f, 5)
end
👀 Reading hidden code
12.6 ms

Got it!

You got the right answer!

👀 Reading hidden code
26.1 ms
👀 Reading hidden code
636 μs
👀 Reading hidden code
141 ms

x0= -1.0

👀 Reading hidden code
32.0 ms
gradient_1d_viz (generic function with 1 method)
👀 Reading hidden code
2.9 ms

👉 Write a function gradient_descent_1d(f, x0) that repeatedly applies the previous function (N_steps times), starting from the point x0, like in the visualisation above. The result should be the final guess for $x$.

👀 Reading hidden code
244 μs
gradient_descent_1d (generic function with 1 method)
function gradient_descent_1d(f, x0; η=0.01, N_steps=1000)
    reduce(1:N_steps; init=x0) do old, _
        gradient_descent_1d_step(f, old; η=η)
    end
end
👀 Reading hidden code
2.2 ms
# function gradient_descent_1d(f, x0; η=0.01, N_steps=1000)
# return missing
# end
👀 Reading hidden code
15.9 μs
4.999499991586009
let
    f = x -> (x - 5)^2 - 3
    # minimum should be at x = 5
    gradient_descent_1d(f, 0.0)
end
👀 Reading hidden code
26.3 ms

Got it!

Splendid!

👀 Reading hidden code
46.9 ms

Right now we take a fixed number of steps, even if the minimum is found quickly. What would be a better way to decide when to end the function?

👀 Reading hidden code
205 μs

blabla

better_stopping_idea = md"""
blabla
"""
👀 Reading hidden code
234 μs

Exercise 4.2

Multivariable calculus tells us that the gradient $\nabla f(a, b)$ at a point $(a, b)$ is the direction in which the function increases the fastest. So again we should take a small step in the opposite direction. Note that the gradient is a vector which tells us which direction to move in the plane $(a, b)$. We multiply this vector by the scalar $\eta$ to control the step size.

👉 Write functions gradient_descent_2d_step(f, x0, y0) and gradient_descent_2d(f, x0, y0) that do the same for functions f(x,y) of two variables.

👀 Reading hidden code
464 μs
gradient_descent_2d_step (generic function with 1 method)
function gradient_descent_2d_step(f, x0, y0; η=0.01)
    antidirection = gradient(f, x0, y0)

    [x0, y0] .- η * antidirection
end
👀 Reading hidden code
1.6 ms
gradient_descent_2d (generic function with 1 method)
function gradient_descent_2d(f, x0, y0; η=0.01, N_steps=1000)
    reduce(1:N_steps; init=(x0, y0)) do old, _
        gradient_descent_2d_step(f, old...; η=η)
    end
end
👀 Reading hidden code
2.2 ms
# function gradient_descent_2d_step(f, x0, y0; η=0.01)
# return missing
# end
👀 Reading hidden code
17.4 μs
# function gradient_descent_2d(f, x0, y0; η=0.01)
# return missing
# end
👀 Reading hidden code
15.0 μs
gradient_descent_2d(himmelbau, 0, 0)
👀 Reading hidden code
354 ms
@bind N_gradient_2d Slider(0:20)
👀 Reading hidden code
632 μs
👀 Reading hidden code
949 ms

x0= 0.0

👀 Reading hidden code
1.1 ms

y0= 0.0

👀 Reading hidden code
1.0 ms
himmelbau (generic function with 1 method)
himmelbau(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
👀 Reading hidden code
658 μs

We also prepared a 3D visualisation if you like! It's a bit slow...

👀 Reading hidden code
207 μs
false
run_3d_visualisation = false
👀 Reading hidden code
11.2 μs
👀 Reading hidden code
46.9 μs
gradient_2d_viz_3d (generic function with 1 method)
👀 Reading hidden code
2.8 ms
gradient_2d_viz_2d (generic function with 1 method)
👀 Reading hidden code
2.1 ms

👉 Can you find different minima?

👀 Reading hidden code
198 μs

👀 Reading hidden code
75.6 μs

Exercise 5: Learning parameter values

In this exercise we will apply gradient descent to fit a simple function $y = f_{\alpha, \beta}(x)$ to some data given as pairs $(x_i, y_i)$. Here $\alpha$ and $\beta$ are parameters that appear in the form of the function $f$. We want to find the parameters that provide the best fit, i.e. the version $f_{\alpha, \beta}$ of the function that is closest to the data when we vary $\alpha$ and $\beta$.

To do so we need to define what "best" means. We will define a measure of the distance between the function and the data, given by a loss function, which itself depends on the values of α and β. Then we will minimize the loss function over α and β to find those values that minimize this distance, and hence are "best" in this precise sense.

The iterative procedure by which we gradually adjust the parameter values to improve the loss function is often called machine learning or just learning, since the computer is "discovering" information in a gradual way, which is supposed to remind us of how humans learn. [Hint: This is not how humans learn.]

Exercise 5.1 - 🎲 frequencies

We generate a small dataset by throwing 10 dice, and counting the sum. We repeat this experiment many times, giving us a frequency distribution in a familiar shape.
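The actual dice_frequencies implementation lives in a hidden cell; a rough reconstruction of what such a function might look like (our own sketch, not necessarily the notebook's code) is:

function dice_frequencies_sketch(n_dice, n_experiments)
    # simulate n_experiments throws of n_dice dice and record the sums
    sums = [sum(rand(1:6, n_dice)) for _ in 1:n_experiments]
    xs = n_dice:6*n_dice                                   # every possible sum
    ys = [count(==(x), sums) / n_experiments for x in xs]  # relative frequency of each sum
    xs, ys
end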

👀 Reading hidden code
897 μs
import Statistics
👀 Reading hidden code
151 μs
dice_frequencies (generic function with 1 method)
👀 Reading hidden code
2.1 ms
dice_x, dice_y = dice_frequencies(10, 20_000)
👀 Reading hidden code
2.1 ms
👀 Reading hidden code
127 ms

Let's try to fit a Gaussian (normal) distribution. Its PDF with mean $\mu$ and standard deviation $\sigma$ is

$$f_{\mu, \sigma}(x) := \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ - \frac{(x - \mu)^2}{2\sigma^2} \right]$$

👉 (Not graded) Manually fit a Gaussian distribution to our data by adjusting μ and σ until you find a good fit.

👀 Reading hidden code
323 μs

μ = 24.0

👀 Reading hidden code
17.2 ms

σ = 12.0

👀 Reading hidden code
1.3 ms

Show manual fit:

👀 Reading hidden code
42.7 ms
gauss (generic function with 1 method)
function gauss(x, μ, σ)
    (1 / (sqrt(2π) * σ)) * exp(-(x-μ)^2 / σ^2 / 2)
end
👀 Reading hidden code
662 μs

What we just did was adjust the function parameters until we found the best possible fit. Let's automate this process! To do so, we need to quantify how good or bad a fit is.

👉 Define a loss function to measure the "distance" between the actual data and the function. It will depend on the values of μ and σ that you choose:

$$\mathcal{L}(\mu, \sigma) := \sum_i \left[ f_{\mu, \sigma}(x_i) - y_i \right]^2$$

👀 Reading hidden code
364 μs
loss_dice (generic function with 1 method)
function loss_dice(μ, σ)
    fit = gauss.(dice_x, [μ], [σ])
    sum((fit - dice_y) .^ 2)
end
👀 Reading hidden code
617 μs
# function loss_dice(μ, σ)
# return missing
# end
👀 Reading hidden code
18.2 μs
false
loss_dice(guess_μ + 3, guess_σ) >
loss_dice(guess_μ, guess_σ)
👀 Reading hidden code
212 ms

👉 Use your gradient_descent_2d function to find a local minimum of $\mathcal{L}$, starting with initial values $\mu = 30$ and $\sigma = 1$. Call the found parameters found_μ and found_σ.

👀 Reading hidden code
266 μs
found_μ, found_σ = let
    gradient_descent_2d(loss_dice, 30, 1, η=10)
end
👀 Reading hidden code
498 ms
# found_μ, found_σ = let
# # your code here
# missing, missing
# end
👀 Reading hidden code
15.1 μs

Go back to the graph to see your optimized Gaussian curve!

If your fit is close, then probability theory tells us that the found parameter $\mu$ should be close to the weighted mean of our data, and $\sigma$ should approximate the sample standard deviation. We have already computed these values, and we check how close they are:
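The reference values stats_μ and stats_σ are computed in hidden cells; assuming dice_y holds relative frequencies that sum to 1, one plausible way to compute them is the frequency-weighted mean and standard deviation (a sketch of the idea, not necessarily the hidden cells' exact code):

let
    μ = sum(dice_x .* dice_y)                      # frequency-weighted mean
    σ = sqrt(sum(dice_y .* (dice_x .- μ) .^ 2))    # frequency-weighted standard deviation
    (μ, σ)
end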

👀 Reading hidden code
372 μs
0.01867851401522813
abs(stats_μ - found_μ)
👀 Reading hidden code
40.2 μs
0.09295362937137064
abs(stats_σ - found_σ)
👀 Reading hidden code
13.8 μs

Got it!

Well done!

👀 Reading hidden code
3.2 ms
35.00255
👀 Reading hidden code
60.6 ms
5.43109045197188
👀 Reading hidden code
120 ms

Exercise 6: Putting it all together — fitting an SIR model to data

In this exercise we will fit the (non-spatial) SIR ODE model from Exercise 2 to some data generated from the spatial model in Problem Set 4. If we are able to find a good fit, that would suggest that the spatial aspect "does not matter" too much for the dynamics of these models. If the fit is not so good, perhaps there is an important effect of space. (As usual in statistics, and indeed in modelling in general, we should be very cautious of making claims of this nature.)

This fitting procedure will be different from that in Exercise 5, however: we no longer have an explicit form for the function that we are fitting – rather, it is the output of an ODE! So what should we do?

We will try to find the parameters β and γ for which the output of the ODEs when we simulate it with those parameters best matches the data!

Exercise 6.1

Below is the result from Homework 4, Exercise 3.2. These are the average S, I, R fractions from running 20 simulations. Click on it!

👀 Reading hidden code
640 μs
1:1000
👀 Reading hidden code
15.0 μs
👀 Reading hidden code
137 ms
👀 Reading hidden code
677 μs
👀 Reading hidden code
270 ms

👉 (Not graded) Manually fit the SIR curves to our data by adjusting β and γ until you find a good fit.

👀 Reading hidden code
265 μs

β = 0.05

👀 Reading hidden code
1.6 ms

γ = 0.005

👀 Reading hidden code
1.1 ms

Show manual fit:

👀 Reading hidden code
748 μs

👉 To do this automatically, we will again need to define a loss function $\mathcal{L}(\beta, \gamma)$. This will compare the solution of the SIR equations with parameters $\beta$ and $\gamma$ with your data.

This time, instead of comparing two vectors of numbers, we need to compare two vectors of vectors, the S, I, R values.
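Summing squared differences works the same way when each data point is itself a vector; here is a tiny standalone illustration with made-up numbers:

let
    a = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
    b = [[1.0, 2.5, 3.0], [4.0, 5.0, 5.0]]
    # for each pair of 3-element vectors, sum the squared componentwise differences
    sum(sum((x .- y) .^ 2) for (x, y) in zip(a, b))    # 0.25 + 1.0 = 1.25
end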

👀 Reading hidden code
355 μs
loss_sir (generic function with 1 method)
function loss_sir(β, γ)
    results = euler_SIR(β, γ,
        [0.99, 0.01, 0.00],
        hw4_T)
    sum(results .- hw4_results) do diff
        sum(diff .^ 2)
    end
end
👀 Reading hidden code
995 μs
# function loss_sir(β, γ)
# return missing
# end
👀 Reading hidden code
14.8 μs
350.1798977340281
loss_sir(guess_β, guess_γ)
👀 Reading hidden code
218 ms

👉 Use this loss function to find the optimal parameters β and γ.

👀 Reading hidden code
228 μs
found_β, found_γ = let
    gradient_descent_2d(loss_sir, guess_β, guess_γ; η=1e-8)
end
👀 Reading hidden code
1.0 s
# found_β, found_γ = let
# # your code here
# missing, missing
# end
👀 Reading hidden code
15.5 μs

Got it!

If you made it this far, congratulations – you have just taken your first step into the exciting world of scientific machine learning!

👀 Reading hidden code
1.2 ms

Before you submit

Remember to fill in your name and Kerberos ID at the top of this notebook.

👀 Reading hidden code
427 μs
"""
Return the argument, but force it to be shown as SVG.

This is an optimization for Plots.jl GR plots: it makes them less jittery and keeps the page DOM small.
"""
as_svg = as_mime(MIME"image/svg+xml"())
👀 Reading hidden code
---
as_mime (generic function with 1 method)
👀 Reading hidden code
862 μs

Function library

Just some helper functions used in the notebook.

👀 Reading hidden code
246 μs
hint (generic function with 1 method)
👀 Reading hidden code
468 μs
almost (generic function with 1 method)
👀 Reading hidden code
478 μs
still_missing (generic function with 2 methods)
👀 Reading hidden code
871 μs
keep_working (generic function with 2 methods)
👀 Reading hidden code
849 μs
👀 Reading hidden code
10.7 ms
correct (generic function with 2 methods)
👀 Reading hidden code
785 μs
not_defined (generic function with 1 method)
👀 Reading hidden code
746 μs
TODO
👀 Reading hidden code
33.2 μs