
Doctoral thesis: Data assimilation in heat conduction




DOCUMENT INFORMATION

Basic information

Format
Pages: 113
Size: 3.38 MB

Content

MINISTRY OF EDUCATION AND TRAINING
VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY
INSTITUTE OF MATHEMATICS

NGUYEN THI NGOC OANH

DATA ASSIMILATION IN HEAT CONDUCTION

Speciality: Differential and Integral Equations
Speciality Code: 62 46 01 03

THESIS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN MATHEMATICS

Supervisor: PROF. DR. HABIL. ĐINH NHO HÀO

HANOI – 2017

Acknowledgments

I first learned about inverse and ill-posed problems when I met Professor Đinh Nho Hào in 2007, my final year of bachelor's study. I have been extremely fortunate to have the chance to study under his guidance since then. I am deeply indebted to him not only for his supervision, patience, encouragement and support in my research, but also for his precious advice in life. I would like to express my special appreciation to Professor Hà Tiến Ngoạn, Professor Nguyễn Minh Trí, Doctor Nguyễn Anh Tú, the other members of the seminar at the Department of Differential Equations, and all friends in Professor Đinh Nho Hào's group seminar for their valuable comments and suggestions on my thesis. I am very grateful to Doctor Nguyễn Trung Thành (Iowa State University) for his kind help with MATLAB programming. I would like to thank the Institute of Mathematics for providing me with such an excellent study environment. Furthermore, I would like to thank the leaders of the College of Sciences, Thai Nguyen University, the Dean board, as well as all of my colleagues at the Faculty of Mathematics and Informatics for their encouragement
and support throughout my PhD study. Last but not least, I could not have finished this work without the constant love and unconditional support of my parents, my parents-in-law, my husband, my little children and my dearest aunt. I would like to express my sincere gratitude to all of them.

Abstract

The problems of reconstructing the initial condition in parabolic equations from the observation at the final time, from interior integral observations, and from boundary observations are studied. We reformulate these inverse problems as variational problems of minimizing appropriate misfit functionals. We prove that these functionals are Fréchet differentiable and derive a formula for their gradient via adjoint problems. The direct problems are first discretized in the space variables by the finite difference method and the variational problems are correspondingly discretized. The convergence of the solution of the discretized variational problems to the solution of the continuous ones is proved. To solve the problems numerically, we further discretize them in time by the splitting method. It is proved that the completely discretized functionals are Fréchet differentiable and formulas for their gradient are derived via discrete adjoint problems. The problems are then solved by the conjugate gradient method and the numerical algorithms are tested on computer. As a by-product of the variational method, based on Lanczos' algorithm we suggest a simple method to demonstrate the ill-posedness of the problem.

Declaration

This work has been completed at the Institute of Mathematics, Vietnam Academy of Science and Technology, under the supervision of Prof. Dr. Habil. Đinh Nho Hào. I hereby declare that the results presented in it are new and have never been published elsewhere.

Author: Nguyen Thi Ngoc Oanh

List of Figures

2.1 Example 1: Singular values.
2.2 Example 2: Reconstruction results: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.3 Example 3: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.4 Example 4: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.5 Example 5: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
3.1 Example: Singular values: three observations and various time intervals of observations.
3.2 Example 2: Reconstruction results for (a) uniform observation points in (0, 0.5), error in L²-norm =
0.006116; (b) uniform observation points in (0.5, 1), error in L²-norm = 0.006133; (c) uniform observation points in (0.25, 0.75), error in L²-norm = 0.0060894; (d) uniform observation points in Ω, error in L²-norm = 0.0057764.
3.3 Reconstruction result of Example 3: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.4 Reconstruction result of Example 4: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.5 Reconstruction result of Example 5: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.6 Example: Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
3.7 Example: Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
3.8 Example: Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
4.1 Example 1: Singular values for the 1D problem.
4.2 Examples 2, 3, 4: 1D problem: Reconstruction results for smooth, continuous and discontinuous initial conditions.
4.3 Example 5: Exact initial condition (left) and its reconstruction (right).
4.4 Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
4.5 Example 6: Exact initial condition (left) and its reconstruction (right).
4.6 Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
4.7 Example 7: Exact initial condition (left) and its reconstruction (right).
4.8 Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).

List of Tables

3.1 Example 3: Behavior of the algorithm with different starting points of observation τ.
3.2 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 4).
3.3 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 9).

If the conditions (1.1)–(1.6) are satisfied, then Theorem 1.1.2 shows that there exists a unique solution u ∈ W(0, T; H¹(Ω)) to the Neumann problem (4.1)–(4.3). The solution to this problem is a function u ∈ W(0, T; H¹(Ω)) satisfying Definition 1.1.8.

Data assimilation: Reconstruct the initial condition v(x) in (4.1)–(4.3) from the observations of the solution u on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0, T). Our aim is to reconstruct the initial condition v from the imprecise measurement φ ∈ L²(Σ) of the solution u on Σ:

\[ \|u|_\Sigma - \varphi\|_{L^2(\Sigma)} \le \epsilon. \qquad (4.4) \]

From now on, as in the previous chapters, to emphasize the dependence of the solution u of (4.1)–(4.3) on the initial condition v, we denote it by u(v) or u(x, t; v). Denoting Cu(v) = u(v)|_Σ, we thus have to solve the operator equation

\[ Cu(v) = \varphi. \qquad (4.5) \]

Remark 4.1.1. To see the ill-posedness of the problem, note that if condition 2) of Theorem 1.1.2 is satisfied, then u ∈ H^{1,1}(Q). Hence the operator mapping v to u|_Σ is compact from L²(Ω) to L²(Σ). Thus, the problem of reconstructing v from u|_Σ is ill-posed.

To characterize the degree of ill-posedness of this problem, denote the solution to the problem (4.1)–(4.3) with v ≡ 0 by ů and the solution to the problem (4.1)–(4.3) with f ≡ 0, g ≡ 0 by u⁰. Then the operator

\[ Cv = Cu - C\mathring{u} \qquad (4.6) \]

is linear from L²(Ω) to L²(Σ), and the problem (4.5) is reduced to solving the linear equation Cv = φ − Ců. We thus have to analyze the singular values of C. In doing so, let us introduce the variational method for solving the problem (4.5). We reformulate the reconstruction problem as the least-squares problem of minimizing the functional

\[ J_0(v) = \tfrac12\|u(v)-\varphi\|^2_{L^2(\Sigma)} \qquad (4.7) \]

over L²(Ω). As
this minimization problem is unstable, we minimize its Tikhonov-regularized functional

\[ J_\gamma(v) = \frac{1}{2}\|u(v)-\varphi\|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\|v-v^*\|^2_{L^2(\Omega)} \qquad (4.8) \]

over L²(Ω), with γ > 0 the regularization parameter and v* an estimate of v, which can be set to zero. Now we prove that J_γ is Fréchet differentiable and derive a formula for its gradient. In doing so, we introduce the adjoint problem

\[
\begin{cases}
-\dfrac{\partial p}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\dfrac{\partial p}{\partial x_j}\Bigr) + b(x,t)\,p = 0 & \text{in } Q,\\[4pt]
\dfrac{\partial p}{\partial N} = \bigl(u(v)-\varphi\bigr)\chi_\Sigma & \text{on } S,\\[4pt]
p(x,T) = 0 & \text{in } \Omega,
\end{cases}
\qquad (4.9)
\]

where χ_Σ(ξ, t) = 1 if (ξ, t) ∈ Σ and zero otherwise. The solution of this problem is understood in the weak sense of §1.2. Since u(v)|_Σ − φ ∈ L²(Σ), there exists a unique solution in W(0, T; H¹(Ω)) to (4.9).

Lemma 4.1.1. The functional J_γ is Fréchet differentiable and

\[ \nabla J_\gamma(v) = p(x,0) + \gamma\bigl(v(x)-v^*(x)\bigr), \qquad (4.10) \]

where p(x, t) is the solution to the adjoint problem (4.9).

Proof. For a small variation δv of v, we have

\[
\begin{aligned}
J_0(v+\delta v) - J_0(v)
&= \tfrac12\|u(v+\delta v)-\varphi\|^2_{L^2(\Sigma)} - \tfrac12\|u(v)-\varphi\|^2_{L^2(\Sigma)}\\
&= \tfrac12\|\delta u(v)\|^2_{L^2(\Sigma)} + \langle\delta u(v),\,u(v)-\varphi\rangle_{L^2(\Sigma)}\\
&= \langle\delta u(v),\,u(v)-\varphi\rangle_{L^2(\Sigma)} + o(\|\delta v\|_{L^2(\Omega)})\\
&= \iint_\Sigma \delta u(v)\bigl(u(v)-\varphi\bigr)\,ds\,dt + o(\|\delta v\|_{L^2(\Omega)}),
\end{aligned}
\]

where δu is the solution to the problem

\[
\begin{cases}
\delta u_t - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\dfrac{\partial\,\delta u}{\partial x_j}\Bigr) + b(x,t)\,\delta u = 0 & \text{in } Q,\\[4pt]
\dfrac{\partial\,\delta u}{\partial N} = 0 & \text{on } S,\\[4pt]
\delta u(x,0) = \delta v & \text{in } \Omega.
\end{cases}
\qquad (4.11)
\]

Using Green's formula (1.25) (Theorem 1.2.2) for (4.9) and (4.11), we have

\[ \iint_\Sigma \delta u\,\bigl(u(v)-\varphi\bigr)\,ds\,dt = \int_\Omega \delta v\,p(x,0)\,dx. \]

Hence

\[ J_0(v+\delta v) - J_0(v) = \int_\Omega \delta v\,p(x,0)\,dx + o(\|\delta v\|) = \langle p(x,0),\,\delta v\rangle_{L^2(\Omega)} + o(\|\delta v\|_{L^2(\Omega)}). \]

Consequently, the functional J_0 is Fréchet differentiable and ∇J_0(v) = p(x, 0). From this result we see that the functional J_γ(v) is also Fréchet differentiable and its gradient ∇J_γ(v) has the form (4.10). The proof is complete.

Now we return to characterizing the degree of ill-posedness of the problem (4.5). Since

\[ J_0(v) = \tfrac12\|u(v)|_\Sigma - \varphi\|^2_{L^2(\Sigma)} = \tfrac12\|Cv - (\varphi - \mathring{u}|_\Sigma)\|^2_{L^2(\Sigma)}, \]

if in this formula we take φ = ů|_Σ, then J_0'(v) = C*Cv. Due to Proposition 4.1.1 we have

\[ C^*Cv = p^\dagger(x,0), \qquad (4.12) \]

where p† is the solution to the adjoint problem

\[
\begin{cases}
-\dfrac{\partial p^\dagger}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\dfrac{\partial p^\dagger}{\partial x_j}\Bigr) + b(x,t)\,p^\dagger = 0 & \text{in } Q,\\[4pt]
\dfrac{\partial p^\dagger}{\partial N} = u^0(v)\,\chi_\Sigma & \text{on } S,\\[4pt]
p^\dagger(x,T) = 0 & \text{in } \Omega.
\end{cases}
\qquad (4.13)
\]

Thus, for any v ∈ L²(Ω) we can evaluate C*Cv by solving the direct problem (4.1)–(4.3) and the adjoint problem (4.13). However, the explicit form of C*C is not available. As in the previous chapters, when we discretize the problem the finite-dimensional approximations C_h of C are matrices, so we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C_h*C_h from its values C_h*C_h v. We will present some numerical results in Section 4.4.

We note that when Σ ≡ S, Lions [50, pp. 216–219] suggested the following variational method. Consider the two boundary value problems

\[ \frac{\partial u^1}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\frac{\partial u^1}{\partial x_j}\Bigr) + b(x,t)u^1 = f \quad \text{in } Q, \qquad (4.14) \]
\[ u^1 = h \quad \text{on } S, \qquad (4.15) \]
\[ u^1|_{t=0} = v \quad \text{in } \Omega, \qquad (4.16) \]

and

\[ \frac{\partial u^2}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\frac{\partial u^2}{\partial x_j}\Bigr) + b(x,t)u^2 = f \quad \text{in } Q, \qquad (4.17) \]
\[ \frac{\partial u^2}{\partial N} = g \quad \text{on } S, \qquad (4.18) \]
\[ u^2|_{t=0} = v \quad \text{in } \Omega, \qquad (4.19) \]

then minimize the functional

\[ J_{L0}(v) = \tfrac12\|u^1(v) - u^2(v)\|^2_{L^2(Q)} \qquad (4.20) \]

over L²(Ω). As this variational problem inherits the ill-posed nature of the original problem, we regularize it by minimizing the Tikhonov functional

\[ J_{L\gamma}(v) = J_{L0}(v) + \frac{\gamma}{2}\|v - v^*\|^2_{L^2(\Omega)}. \qquad (4.21) \]

In this setting, the solution of the Dirichlet problem (4.14)–(4.16) is understood in the common sense: choose a function Φ ∈ H^{1,1}(Q) such that Φ|_S = h; then ũ¹ = u¹ − Φ satisfies a new homogeneous Dirichlet problem with a new right-hand side f̃ and initial condition ṽ. The function ũ¹ ∈ W(0, T; H¹₀(Ω)) is said to be a weak solution to this homogeneous Dirichlet problem if

\[ \int_0^T (\tilde u^1_t, \eta)_{H^{-1}(\Omega),H^1_0(\Omega)}\,dt + \iint_Q \Bigl(\sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial\tilde u^1}{\partial x_i}\frac{\partial\eta}{\partial x_j} + b(x,t)\tilde u^1\eta\Bigr)dx\,dt = \iint_Q \tilde f\eta\,dx\,dt \quad \forall\eta \in L^2(0,T;H^1_0(\Omega)) \qquad (4.22) \]

and ũ¹|_{t=0} = ṽ in Ω. If h is regular enough, there exists a unique solution ũ¹ to the homogeneous Dirichlet problem, and thus there exists a unique solution u¹ ∈ W(0, T; H¹(Ω)) to (4.14)–(4.16).

Since u¹ and u² belong to W(0, T; H¹(Ω)), we can modify Lions' method as follows: minimize the functional

\[ MJ_{L0}(v) = \frac{\lambda_1}{2}\|u^1(v)-u^2(v)\|^2_{L^2(Q)} + \frac{\lambda_2}{2}\iint_Q \sum_{i,j=1}^{n} a_{ij}\,\bigl(u^1(v)-u^2(v)\bigr)_{x_i}\bigl(u^1(v)-u^2(v)\bigr)_{x_j}\,dx\,dt, \qquad (4.23) \]

with λ₁ and λ₂ non-negative and λ₁ + λ₂ > 0. The functional MJ_{Lγ} is Fréchet differentiable and its gradient can be represented via the two adjoint problems

\[
\begin{cases}
-\dfrac{\partial p^1}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\dfrac{\partial p^1}{\partial x_j}\Bigr) + b(x,t)p^1 = \lambda_1\bigl(u^1(v)-u^2(v)\bigr) + \lambda_2\displaystyle\sum_{i,j=1}^{n}\Bigl(a_{ij}\bigl(u^1(v)-u^2(v)\bigr)_{x_j}\Bigr)_{x_i} & \text{in } Q,\\
p^1 = 0 & \text{on } S,\\
p^1(x,T) = 0 & \text{in } \Omega,
\end{cases}
\qquad (4.24)
\]

and

\[
\begin{cases}
-\dfrac{\partial p^2}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Bigl(a_{ij}(x,t)\dfrac{\partial p^2}{\partial x_j}\Bigr) + b(x,t)p^2 = \lambda_1\bigl(u^1(v)-u^2(v)\bigr) + \lambda_2\displaystyle\sum_{i,j=1}^{n}\Bigl(a_{ij}\bigl(u^1(v)-u^2(v)\bigr)_{x_j}\Bigr)_{x_i} & \text{in } Q,\\
\dfrac{\partial p^2}{\partial N} = \lambda_2\dfrac{\partial\bigl(u^1(v)-u^2(v)\bigr)}{\partial N} & \text{on } S,\\
p^2(x,T) = 0 & \text{in } \Omega.
\end{cases}
\qquad (4.25)
\]

Lemma 4.1.2. The functional MJ_{L0} is Fréchet differentiable and its gradient has the form

\[ MJ_{L0}'(v) = p^1(x,0) - p^2(x,0). \qquad (4.26) \]

Lions' method is the subject of another, independent line of research; we therefore do not pursue it in this thesis.

4.2 Discretization of the variational method in space variables

We now turn to approximating the minimization problem (4.8). By the previous Section 4.1,

\[ J_\gamma(v) = \tfrac12\|Cv - (\varphi - \mathring{u}|_\Sigma)\|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\|v - v^*\|^2_{L^2(\Omega)} \]

and

\[ J_\gamma'(v) = C^*\bigl(Cv - (\varphi - \mathring{u}|_\Sigma)\bigr) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*), \]

where Cv = u⁰(v)|_Σ and p is the solution to the adjoint problem (4.13). Thus, the optimality condition is

\[ C^*\bigl(Cv - (\varphi - \mathring{u}|_\Sigma)\bigr) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*) = 0. \qquad (4.27) \]

Define C_h v = û⁰_h|_Σ; then ‖Cv − C_h v‖_{L²(Σ)} tends to zero as h tends to zero. The discrete version of the functional (4.8) is

\[ J_{\gamma h}(v) = \tfrac12\|C_h v - (\hat\varphi_h - \hat{\mathring{u}}_h|_\Sigma)\|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\|\hat v_h - \hat v_h^*\|^2_{L^2(\Omega)}, \]

for which we have the first-order optimality condition

\[ C_h^*\bigl(C_h v - (\hat\varphi_h - \hat{\mathring{u}}_h|_\Sigma)\bigr) + \gamma(\hat v_h - \hat v_h^*) = 0. \qquad (4.28) \]

We note that to evaluate C_h* we have to solve the corresponding discretized adjoint problem, but the Neumann condition in the adjoint
problem (4.13) does not belong to H^{1,1}(S); therefore p is not in H^{1,1}(Q), and hence we do not have the strong convergence of C_h* z to C* z in L²(Ω). However, when we discretize (4.13) we mollify the Neumann data by convolution with Steklov's kernel [40], which gives new approximate data in H^{1,1}(S). Since the solution of the adjoint problem (4.13) is stable with respect to the data, the solution p̄ of the adjoint problem with mollified data approximates the solution p of (4.13). Now we apply the above finite difference scheme to the adjoint problem with mollified data to get its multi-linear interpolation p̂̄_h such that p̂̄_h → p̄ in L²([0, T]; L²(Ω)) and p̂̄_h(t) → p̄(t) weakly in H¹(Ω) for all t ∈ [0, T]. Thus, in this way, instead of the adjoint operator C_h*, we have defined an approximation Ĉ_h* of C* for which ‖C*z − Ĉ_h* z‖_{L²(Ω)} tends to zero for all z that are multi-linear interpolations on Ω_h. Let v̂_h^γ be the solution of the variational problem

\[ \hat C_h^*\bigl(C_h v - (\hat\varphi_h - \hat{\mathring{u}}_h|_\Sigma)\bigr) + \gamma(\hat v_h - \hat v_h^*) = 0. \qquad (4.29) \]

Following Section 1.5, we can prove the following result.

Proposition 4.2.1. Let v^γ be the solution of the variational problem (4.27) and γ > 0. Then v̂_h^γ converges to v^γ in L²(Ω) as h tends to zero.

4.3 Full discretization of the variational problem and the conjugate gradient method

In this section we consider the problem of estimating the discrete initial condition v̄ from the discrete measurement of the solution on the boundary of the domain. The fully discretized version of J_γ has the form

\[ J_0^{h,\Delta t}(\bar v) := \frac12\sum_{m=1}^{M}\sum_{k\in\Gamma_h}\bigl[u^{k,m}(\bar v) - \varphi^{k,m}\bigr]^2. \qquad (4.30) \]

To minimize (4.30) by the conjugate gradient method, we first calculate the gradient of the objective function J_0^{h,Δt}(v̄); it is given by the following theorem.

Theorem 4.3.1. The gradient of J_0^{h,Δt} at v̄ is given by

\[ \nabla J_0^{h,\Delta t}(\bar v) = (A^0)^*\eta^0, \qquad (4.31) \]

where η = (η⁰, …, η^M) satisfies the adjoint problem

\[
\begin{cases}
\eta^m = (A^{m+1})^*\eta^{m+1} + \psi^{m+1}, & m = M-2, M-3, \dots, 0,\\
\eta^{M-1} = \psi^M,\\
\eta^M = 0,
\end{cases}
\qquad (4.32)
\]

with ψ^m = {ψ^{k,m} := u^{k,m}(v̄) − φ^{k,m}, k ∈ Γ_h}, m = 0, 1, …, M, and the matrices (A^m)* and (B^m)* given by

\[
\begin{aligned}
(A^m)^* &= \Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1}\cdots\Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\\
&\quad\times\Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\cdots\Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1},\\
(B^m)^* &= \Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\cdots\Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1}.
\end{aligned}
\qquad (4.33)
\]

Proof. For a small variation δv̄ of v̄, we have from (4.30) that

\[
\begin{aligned}
J_0^{h,\Delta t}(\bar v+\delta\bar v) - J_0^{h,\Delta t}(\bar v)
&= \frac12\sum_{k\in\Gamma_h}\sum_{m=1}^{M}\bigl[u^{k,m}(\bar v+\delta\bar v)-\varphi^{k,m}\bigr]^2 - \frac12\sum_{k\in\Gamma_h}\sum_{m=1}^{M}\bigl[u^{k,m}(\bar v)-\varphi^{k,m}\bigr]^2\\
&= \frac12\sum_{k\in\Gamma_h}\sum_{m=1}^{M}\bigl(w^{k,m}\bigr)^2 + \sum_{k\in\Gamma_h}\sum_{m=1}^{M} w^{k,m}\bigl(u^{k,m}(\bar v)-\varphi^{k,m}\bigr)\\
&= \frac12\sum_{k\in\Gamma_h}\sum_{m=1}^{M}\bigl(w^{k,m}\bigr)^2 + \sum_{m=1}^{M}\langle w^m, \psi^m\rangle,
\end{aligned}
\qquad (4.34)
\]

where w^m = {w^{k,m} := u^{k,m}(v̄+δv̄) − u^{k,m}(v̄), k ∈ Γ_h} and ψ^m = {ψ^{k,m} := u^{k,m}(v̄) − φ^{k,m}, k ∈ Γ_h}, m = 0, 1, …, M. It follows from (1.111) that w is the solution to the problem

\[ w^{m+1} = A^m w^m,\quad m = 0, \dots, M-1, \qquad w^0 = \delta\bar v. \qquad (4.35) \]

Taking the inner product of both sides of the m-th equation of (4.35) with an arbitrary vector η^m ∈ R^{N_1×⋯×N_n} and summing over m = 0, …, M−1, we obtain

\[ \sum_{m=0}^{M-1}\langle w^{m+1}, \eta^m\rangle = \sum_{m=0}^{M-1}\langle A^m w^m, \eta^m\rangle = \sum_{m=0}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle. \qquad (4.36) \]

Here ⟨·,·⟩ is the inner product in R^{N_1×⋯×N_n} and (A^m)* is the adjoint matrix of A^m. Taking the inner product of both sides of the first equation of (4.32) with an arbitrary vector w^{m+1} and summing over m = 0, …, M−2, we have

\[ \sum_{m=0}^{M-2}\langle w^{m+1}, \eta^m\rangle = \sum_{m=0}^{M-2}\langle w^{m+1}, (A^{m+1})^*\eta^{m+1}\rangle + \sum_{m=0}^{M-2}\langle w^{m+1}, \psi^{m+1}\rangle = \sum_{m=1}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle + \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle. \qquad (4.37) \]

Taking the inner product of both sides of the second equation of (4.32) with an arbitrary vector w^M, we have

\[ \langle w^M, \eta^{M-1}\rangle = \langle w^M, \psi^M\rangle. \qquad (4.38) \]

It follows from (4.37) and (4.38) that

\[ \sum_{m=0}^{M-2}\langle w^{m+1}, \eta^m\rangle + \langle w^M, \eta^{M-1}\rangle = \sum_{m=1}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle + \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle + \langle w^M, \psi^M\rangle. \qquad (4.39) \]

From (4.36) and (4.39) we obtain

\[ \langle w^0, (A^0)^*\eta^0\rangle = \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle + \langle w^M, \psi^M\rangle. \]

Equivalently,

\[ \langle \delta\bar v, (A^0)^*\eta^0\rangle = \sum_{m=1}^{M}\langle w^m, \psi^m\rangle. \qquad (4.40) \]

On the other hand, one can prove that ½Σ_{k∈Γ_h}Σ_{m=1}^{M}(w^{k,m})² = o(‖δv̄‖). Hence it follows from (4.34) and (4.40) that

\[ J_0^{h,\Delta t}(\bar v+\delta\bar v) - J_0^{h,\Delta t}(\bar v) = \langle \delta\bar v, (A^0)^*\eta^0\rangle + o(\|\delta\bar v\|). \qquad (4.41) \]

Consequently, J_0^{h,Δt} is differentiable and its gradient has the form (4.31).

Note that, since the matrices Λ_i, i = 1, …, n, are symmetric, we have for m = 0, …, M−1

\[ (A^m)^* = \Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1}\cdots\Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\cdots\Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1}. \qquad (4.42) \]

Similarly,

\[ (B^m)^* = \Bigl(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)\Bigl(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\Bigr)^{-1}\cdots\Bigl(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)\Bigl(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\Bigr)^{-1}. \qquad (4.43) \]

The conjugate gradient method for the discretized functional (4.30) consists of the following steps.

Step 1. Choose an initial approximation v⁰ and calculate the residual r̂⁰ = u(v⁰)|_Σ − φ by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v⁰; set k = 0.

Step 2. Calculate the gradient r⁰ = −∇J_γ(v⁰) given in (4.31) by solving the adjoint problem (4.32). Then set d⁰ = r⁰.

Step 3. Calculate

\[ \alpha^0 = \frac{\|r^0\|^2}{\|u(d^0)|_\Sigma\|^2 + \gamma\|d^0\|^2}, \]

where u(d⁰) is calculated from the splitting scheme (1.111) with v̄ replaced by d⁰ and g = 0, F = 0. Then set v¹ = v⁰ + α⁰d⁰.

Step 4. For k = 1, 2, …, calculate r^k = −∇J_0(v^k) and

\[ d^k = r^k + \beta^k d^{k-1}, \qquad \beta^k = \frac{\|r^k\|^2}{\|r^{k-1}\|^2}. \]

Step 5. Calculate

\[ \alpha^k = \frac{\|r^k\|^2}{\|u(d^k)|_\Sigma\|^2 + \gamma\|d^k\|^2}, \]

where u(d^k) is calculated from the splitting scheme (1.111) with v̄ replaced by d^k and g = 0, F = 0. Then set v^{k+1} = v^k + α^k d^k.

The simulation of this algorithm on computer will be given in the next section.

4.4 Numerical examples

In this section we present our numerical simulations for one- and two-dimensional problems. As in the previous chapters, we test different kinds of initial conditions: 1) very smooth; 2) continuous but not smooth (the hat function); 3) discontinuous (step functions). The degree of difficulty increases from test 1) to test 3). In the one-dimensional case we also present our numerical calculation of the singular values by the method described in Section 4.1.

4.4.1 Numerical examples in the one-dimensional case

Set Ω = (0, 1) and T = 1. Consider the system

\[
\begin{cases}
u_t - (a u_x)_x = f & \text{in } Q,\\
-a u_x(0,t) = \varphi_1(t), \quad a u_x(1,t) = \varphi_2(t) & \text{in } (0,T],\\
u|_{t=0} = v & \text{in } \Omega,
\end{cases}
\]

where the coefficient a = 2xt + x²t + 1. The noise level is 10⁻². The observations are taken at x = 0 and x = 1.

Example 1. We approximate the singular values for the cases when the coefficient a is increased by factors a₀ = 1, a₀ = 5 and a₀ = 10. It appears that the larger the coefficient of the equation, the smaller the singular values. This can be seen in Figure 4.1, which shows the singular values evaluated by the method presented in Section 4.1.

Figure 4.1: Example 1: Singular values for the 1D problem (curves for a₀ = 1, a₀ = 5 and a₀ = 10).

Now we present numerical results for the different initial conditions explained above.

Example 2. Smooth initial condition: v = sin(2πx).

Example 3. Continuous but not smooth initial condition:

\[ v = \begin{cases} 2x, & x \le 0.5,\\ 2(1-x), & \text{otherwise}. \end{cases} \]

Example 4. Discontinuous initial condition:

\[ v = \begin{cases} 1, & 0.25 \le x \le 0.75,\\ 0, & \text{otherwise}. \end{cases} \]

Figure 4.2: Examples 2, 3, 4: 1D problem: Reconstruction results for smooth, continuous and discontinuous initial conditions (each panel: exact solution and reconstructions at noise levels 1% and 10%).

4.4.2 Numerical example in the multi-dimensional case

Consider the equation u_t − (a₁u_{x₁})_{x₁} − (a₂u_{x₂})_{x₂} = f. As in the one-dimensional case, we choose the initial condition v and let u = v × (1 − t); putting u into the equation yields the boundary data and the right-hand side f. The observation
is taken on the whole boundary S, where Ω := (0, 1) × (0, 1) and T = 1, and the noise level is set to 10⁻². In all examples we take

\[ a_1(x_1,x_2,t) = a_2(x_1,x_2,t) = 10^{-1}\bigl(1 + 10^{-2}\cos(\pi x_1 t)\cos(\pi x_2)\bigr). \]

Example 5. Smooth initial condition: v = sin(πx₁) sin(πx₂).

Figure 4.3: Example 5: Exact initial condition (left) and its reconstruction with noise level 1% (right).

Figure 4.4: Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right; noise levels 1% and 10%).

Example 6. Continuous initial condition:

\[ v = \begin{cases}
2x_2, & x_2 \le 0.5,\ x_2 \le x_1 \text{ and } x_1 \le 1-x_2,\\
2(1-x_2), & x_2 \ge 0.5,\ x_2 \ge x_1 \text{ and } x_1 \ge 1-x_2,\\
2x_1, & x_1 \le 0.5,\ x_1 \le x_2 \text{ and } x_2 \le 1-x_1,\\
2(1-x_1), & \text{otherwise}.
\end{cases} \]

Figure 4.5: Example 6: Exact initial condition (left) and its reconstruction with noise level 1% (right).

Figure 4.6: Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).

Example 7. Discontinuous initial condition:

\[ v = \begin{cases} 1, & 0.25 \le x_1 \le 0.75 \text{ and } 0.25 \le x_2 \le 0.75,\\ 0, & \text{otherwise}. \end{cases} \]

Figure 4.7: Example 7: Exact initial condition (left) and its reconstruction with noise level 1% (right).

Figure 4.8: Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).

In all examples we see that the numerical reconstructions are quite good. However, when the coefficients are large, the ill-posedness of the problem is more severe and the method is less effective.

This chapter is written on the basis of the paper [26]: Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220.

Conclusion

In this thesis we study data assimilation in heat conduction: the reconstruction of the initial condition in a heat transfer process from either 1) the observation of the temperature at the final time moment, 2) interior integral observations, which are regarded as interior measurements, or 3) boundary observations. The first problem is new in the sense that the coefficients of the equation describing the heat transfer process depend on time, and up to now there have been very few studies devoted to it. The second problem is a new setting for this kind of problem in data assimilation: interior observations are important, but related studies have been devoted to the case of pointwise observations, which are not realistic in practice; the use of integral observations is more practical. The third problem is very hard, as the observation is taken only on the boundary, and up to now there have been very few studies of this case. We reformulate these problems as variational problems aiming at minimizing a misfit functional in the least-squares sense. We prove that the functional is Fréchet differentiable and derive a formula for its gradient via an adjoint problem, and as a by-product of the method we propose a very natural and easy method for
estimating the degree of ill-posedness of the reconstruction problem. For numerically solving the problems, we discretize the direct and adjoint problems by the splitting finite difference method to obtain the gradient of the discretized variational problems, and we then apply the conjugate gradient method to solve them. We note that, since the solutions in the thesis are understood in the weak sense, the finite difference method for them is not trivial. With respect to the discretization in space variables we prove convergence results for the discretization methods. We test our method on computer on various numerical examples to show the efficiency of our approach.

The author's publications related to the thesis

[1] Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65 (2013), 17–28.

[2] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from integral observations, Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.

[3] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220.
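The overall pipeline of the thesis — discretize the direct problem, obtain the gradient through a discrete adjoint, and minimize a Tikhonov-regularized least-squares functional by conjugate gradients, with singular values revealing the degree of ill-posedness — can be illustrated on a deliberately simplified model. The sketch below is illustrative only: it uses the final-time observation setting (the simplest of the three studied) rather than boundary observation, backward Euler in place of the splitting scheme, an explicit SVD in place of the Lanczos-based estimate, and arbitrary values of N, M, a, dt and gamma that are not taken from the thesis.

```python
import numpy as np

# Backward-Euler discretization of u_t = a*u_xx on (0,1) with zero Dirichlet BCs.
N, M = 50, 20          # interior grid points, number of time steps (illustrative)
a, dt = 0.1, 0.005     # diffusion coefficient and time step (illustrative)
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)

# Stiffness matrix K, one implicit time step S, and the forward map F = S^M,
# which sends the initial condition to the final-time state.
K = (a / h**2) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
S = np.linalg.inv(np.eye(N) + dt * K)
F = np.linalg.matrix_power(S, M)

# Rapidly decaying singular values of F exhibit the ill-posedness
# (the role played by Lanczos' algorithm in the thesis).
sigma = np.linalg.svd(F, compute_uv=False)

# Synthetic noise-free data from a known initial condition.
v_true = np.sin(2 * np.pi * x)
phi = F @ v_true

# Conjugate gradient on the Tikhonov normal equations (F^T F + gamma*I) v = F^T phi.
# Multiplying by F^T corresponds to solving the adjoint problem backward in time.
gamma = 1e-6

def apply_A(v):
    return F.T @ (F @ v) + gamma * v

v = np.zeros(N)
b = F.T @ phi
r = b - apply_A(v)
d = r.copy()
for _ in range(60):
    Ad = apply_A(d)
    alpha = (r @ r) / (d @ Ad)
    v = v + alpha * d
    r_new = r - alpha * Ad
    if np.linalg.norm(r_new) < 1e-12 * np.linalg.norm(b):
        break
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new

rel_err = np.linalg.norm(v - v_true) / np.linalg.norm(v_true)
```

With exact data and a smooth initial condition the iteration recovers v essentially to the Tikhonov bias; with noisy data, gamma and the number of iterations act as regularization parameters and must be chosen accordingly (for instance by a discrepancy-type rule).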

Posted: 22/06/2023, 15:23

