Lecture 1 of IRC, LIAM, Fields-CQAM MfPH Training Series
2020-03-09
@York University
We divide the second equation of (2.1) by the first to get
\[\frac{dI}{dS} = -1 + \frac{\rho}{S}, \tag{2.2}\]
where \(\rho = \frac{\gamma}{\beta}\) is the relative removal rate.
(2.2) has the following solution form:
\[I + S - \rho\ln S = C,\]
where the constant \(C = 1 - \rho\ln(S_0)\) since \(S_0 + I_0 = 1\); hence
\[I = 1 - S + \rho\ln(S/S_0). \tag{2.3}\]
From (2.3) it is clear that at \(S = \rho\) the infection achieves its peak:
\[I_{\max} = 1 - \rho + \rho\ln(\rho/S_0).\]
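As a quick check with the values used later in this lecture (\(\rho = 0.1\), \(I_0 = 0.02\), so \(S_0 = 0.98\)), the uncontrolled peak is roughly
\[I_{\max} = 1 - 0.1 + 0.1\ln(0.1/0.98) \approx 0.67.\]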
\(S(\infty)\) exists since \(S(t)\) is monotone and bounded.
\(I(\infty)=0\); hence (2.3) gives \(0 = 1 - S(\infty) + \rho\ln(S(\infty)/S_0)\), which can be solved for \(S(\infty)\).
\(R(\infty)\), the intensity of the epidemic, alone could indicate its severity, since \(I(\infty)=0\) and \(S(\infty)=1-R(\infty)\).
For \(S(\infty)\) and \(R(\infty)\), only \(\rho\) matters.
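The final-size relation can be solved numerically. Below is a minimal sketch, assuming the parameter values \(\rho = 0.1\) and \(I_0 = 0.02\) used later; the bisection routine and the name final_size are illustrative only, not from the paper.

import math

rho, I0 = 0.1, 0.02      # relative removal rate and initial infected fraction
S0 = 1 - I0

def final_size(rho, S0, tol=1e-12):
    # Solve 0 = 1 - S + rho*ln(S/S0) for the root S(inf) in (0, rho) by bisection;
    # g is increasing there, negative near S = 0 and positive at S = rho (the peak).
    g = lambda S: 1 - S + rho * math.log(S / S0)
    lo, hi = 1e-12, rho
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

S_inf = final_size(rho, S0)
print(S_inf, 1 - S_inf)  # S(inf) and the intensity R(inf) = 1 - S(inf)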
If we increase the removal rate \(\gamma\), e.g. by isolating some of the infected, then
the relative removal rate \(\rho\) increases,
the peak infection \(I_{\max}\) decreases since \(\rho < S_0\) (see the derivative worked out below),
\(S(\infty)\) increases, and
\(R(\infty)\) decreases, since \(R(\infty) + S(\infty) = 1\).
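The claim about the peak is a one-line check: differentiating the expression for \(I_{\max}\) above with respect to \(\rho\),
\[\frac{dI_{\max}}{d\rho} = \frac{d}{d\rho}\bigl(1 - \rho + \rho\ln(\rho/S_0)\bigr) = \ln(\rho/S_0) < 0 \quad \text{for } \rho < S_0,\]
so raising \(\rho\) lowers the peak.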
Qualitatively, vaccination is equivalent to jumping from one solution path in Fig. 1 to a lower solution path.
Consider a finite control period \(T = 15\Delta\) instead of \(\infty\), and a stepwise vaccination function instead of a continuous one.
The state transition \(\Gamma(\cdot)\) is given by the first two equations and the initial conditions in (3.1).
In the DP, \(\Gamma(\cdot)\) shall be rounded to integers after the Runge-Kutta step.
The terminal reward function involves only one forward step,
where \(C(M) = (M d \Delta)^2\) if feasible, else \(\infty\).
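For orientation, the backward recursion implemented below plausibly takes the standard Bellman form; the exact statements are (A.1)-(A.2) in Hethcote and Waltman, so the following is an assumed shape, not a quotation:
\[f_1(S, I) = \min_{M}\{\, C(M) : \text{the one forward step } \Gamma(S, I, M) \text{ is feasible} \,\}, \qquad f_k(S, I) = \min_{M}\bigl[\, C(M) + f_{k-1}(\Gamma(S, I, M)) \,\bigr].\]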
#setup, see Table 1
import functools

T = 14
rho = 0.1

@functools.lru_cache(maxsize=None)
def Gamma(S, I, M):  # state transition
    pass  # Runge-Kutta step of the first two equations in (3.1), then round onto the state grid

@functools.lru_cache(maxsize=None)
def f(k, S, I):  # cost table (optimal cost-to-go)
    if k == 1:
        pass  # terminal stage, one forward step: (A.1)
    else:
        pass  # backward recursion: (A.2)

I0 = 0.02
f(T, 1 - I0, I0)  # seek optimal schedule at I0 = 0.02
I0 = 0.04
f(T, 1 - I0, I0)  # seek optimal schedule at I0 = 0.04
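As a minimal sketch of what the Runge-Kutta placeholder inside Gamma might look like: a classical RK4 step for the S and I equations, with a constant vaccination rate v subtracted from S as an assumed form of the controlled system (3.1). The values of beta, gamma, Delta and the grid resolution N_GRID are illustrative only.

beta, gamma, Delta = 1.0, 0.1, 0.5   # illustrative parameters; rho = gamma/beta
N_GRID = 1000                        # illustrative state-grid resolution

def sir_rhs(S, I, v):
    # S and I equations with an assumed constant vaccination rate v removed from S.
    return -beta * S * I - v, beta * S * I - gamma * I

def rk4_step(S, I, v, h=Delta):
    # One classical fourth-order Runge-Kutta step of length h.
    k1 = sir_rhs(S, I, v)
    k2 = sir_rhs(S + 0.5 * h * k1[0], I + 0.5 * h * k1[1], v)
    k3 = sir_rhs(S + 0.5 * h * k2[0], I + 0.5 * h * k2[1], v)
    k4 = sir_rhs(S + h * k3[0], I + h * k3[1], v)
    S_next = S + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    I_next = I + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    # Round onto the DP state grid so that memoized states repeat.
    return round(S_next * N_GRID) / N_GRID, round(I_next * N_GRID) / N_GRID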
With the found control, we may re-calculate the dynamic system to see the state difference and check for violations of the path and point constraints. One might need to adaptively refine the grid and start over.
Implement the DP algorithm using memoization and lazy-evaluation techniques;
reproduce Table 1;
reproduce Figs. 3-8.
Due March 9 in class.
We will solve the problem as a continuous optimal control problem by collocation, single shooting and multiple shooting algorithms.
Herbert W. Hethcote and Paul Waltman, Optimal Vaccination Schedules in a Deterministic Epidemic Model, Mathematical Biosciences 18, 365-381 (1973).