MA4081 PENGANTAR PROSES STOKASTIK (Introduction to Stochastic Processes)
Special Topic: AR and INAR Models (Smart and Stochastic)
After Markov chains and the exponential distribution, what comes next?
The Bernoulli process, the Poisson process, other stochastic processes?
Students with 2 CORRECT answers: 037, 039, 053, 089, 094
Students with 3 CORRECT answers: 073
Students with 4 CORRECT answers: 107
Syllabus: autoregressive (AR) processes, integer-valued AR processes, stationarity, conditional and unconditional moment structure, the maximum likelihood method, the least squares method, parameter estimation.
Objectives:
1. Study the autoregressive (AR) process
2. Study the integer-valued AR (INAR) process
3. Determine the stationarity region
4. Derive the conditional and unconditional moment structures
5. Study parameter estimation methods
Introductory Lecture
Group assignment: 4 students per group (randomly assigned)
Presentations: 15/11, 17/11, 22/11, 24/11 (the presentation schedule will be set from 10/11 onwards)
About the assignment:
- Discuss 1-2 sections of a scientific article
- The article should be about the AR or INAR model and its variants (confirm your chosen article first)
- The study may be theoretical or computational
Presentation assessment:
- Carried out by the lecturer and 2 other randomly selected groups
- The assessing groups must ask questions
- The lecturer may ask questions, or may ask a student/group to ask and answer
Assessment covers:
- Material and accuracy
- Level of difficulty
- Presentation style
Score: 0-10 (weight 15%)
Example.
Forms:
- Group list
- Presentation schedule
- Assessment
Random Walk Model
Consider a Markov chain with states $0, \pm 1, \pm 2, \ldots$ and transition probabilities
$$P_{i,i+1} = p = 1 - P_{i,i-1},$$
where $0 < p < 1$.
Restricted to the states $(-2, -1, 0, 1, 2)$, we can write the transition probability matrix as
$$P = \begin{pmatrix}
0 & p & 0 & 0 & 0 \\
1-p & 0 & p & 0 & 0 \\
0 & 1-p & 0 & p & 0 \\
0 & 0 & 1-p & 0 & p \\
0 & 0 & 0 & 1-p & 0
\end{pmatrix}.$$
The model above is called a random walk.
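As a quick illustration (a minimal sketch, not part of the course material; the function name is my own), the random walk above can be simulated by stepping up with probability p and down with probability 1 − p:

```python
import random

def simulate_random_walk(n_steps, p, start=0, seed=None):
    """Simulate a simple random walk: from state i, move to i+1
    with probability p and to i-1 with probability 1-p."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        step = 1 if rng.random() < p else -1
        path.append(path[-1] + step)
    return path

path = simulate_random_walk(10, p=0.5, seed=42)
print(path)  # 11 states, each exactly one step from the previous
```

Each increment is an independent Bernoulli step, which is exactly what the tridiagonal transition matrix above encodes.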
AR Model
Consider a stationary zero-mean Gaussian first-order autoregressive process $\{Y_t\}$ satisfying
$$Y_t = \rho Y_{t-1} + \varepsilon_t,$$
where $|\rho| < 1$ and the $\varepsilon_t$ are independent and identically $N(0, v)$ distributed. Let $\theta = (\rho, v)$. Suppose that the data are $Y_1, \ldots, Y_n$ and that we wish to find a prediction interval for $Y_{n+1}$.
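A stationary Gaussian AR(1) path can be simulated directly from this definition. The sketch below (parameter values are my own, for illustration) draws $Y_1$ from the stationary distribution $N(0, v/(1-\rho^2))$ so that the whole series is stationary:

```python
import math
import random

def simulate_ar1(n, rho, v, seed=None):
    """Simulate a stationary zero-mean Gaussian AR(1) process
    Y_t = rho * Y_{t-1} + eps_t with eps_t ~ N(0, v)."""
    assert abs(rho) < 1, "stationarity requires |rho| < 1"
    rng = random.Random(seed)
    # The stationary distribution of Y_t is N(0, v / (1 - rho^2)).
    y = [rng.gauss(0.0, math.sqrt(v / (1 - rho**2)))]
    for _ in range(n - 1):
        y.append(rho * y[-1] + rng.gauss(0.0, math.sqrt(v)))
    return y

series = simulate_ar1(200, rho=0.6, v=1.0, seed=1)
```

Starting from the stationary distribution avoids a burn-in period; starting from an arbitrary value would make the early observations non-stationary.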
We employ the following estimators of $\theta$.
(i) The estimator $\tilde{\theta} = (\tilde{\rho}, \tilde{V})$, where
$$\tilde{\rho} = \frac{\sum_{t=1}^{n-1} Y_t Y_{t+1}}{\sum_{t=1}^{n-1} Y_{t+1}^2}
\quad \text{and} \quad
\tilde{V} = \frac{1}{n-1} \sum_{t=1}^{n-1} \left( Y_t - \tilde{\rho}\, Y_{t+1} \right)^2.$$
These estimators are obtained by least squares from the backward representation of the process $\{Y_t\}$.
(ii) The estimator $\hat{\theta} = (\hat{\rho}, \hat{V})$, where
$$\hat{\rho} = \frac{\sum_{t=2}^{n} Y_t Y_{t-1}}{\sum_{t=2}^{n} Y_{t-1}^2}
\quad \text{and} \quad
\hat{V} = \frac{1}{n-1} \sum_{t=2}^{n} \left( Y_t - \hat{\rho}\, Y_{t-1} \right)^2.$$
These estimators are obtained by maximizing the log-likelihood function conditional on $Y_1 = y_1$.
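Both estimator pairs are plug-in formulas and translate directly into code. A sketch (function names are mine; note that Python lists are 0-based, so y[0] plays the role of $Y_1$):

```python
import math
import random

def backward_ls(y):
    """Backward least-squares estimators (rho-tilde, V-tilde)."""
    n = len(y)
    rho = (sum(y[t] * y[t + 1] for t in range(n - 1))
           / sum(y[t + 1] ** 2 for t in range(n - 1)))
    v = sum((y[t] - rho * y[t + 1]) ** 2 for t in range(n - 1)) / (n - 1)
    return rho, v

def conditional_ml(y):
    """Conditional maximum-likelihood estimators (rho-hat, V-hat)."""
    n = len(y)
    rho = (sum(y[t] * y[t - 1] for t in range(1, n))
           / sum(y[t - 1] ** 2 for t in range(1, n)))
    v = sum((y[t] - rho * y[t - 1]) ** 2 for t in range(1, n)) / (n - 1)
    return rho, v

# Try both on a simulated stationary AR(1) path (parameters illustrative).
rng = random.Random(7)
rho_true, v_true, n = 0.5, 1.0, 2000
y = [rng.gauss(0.0, math.sqrt(v_true / (1 - rho_true**2)))]
for _ in range(n - 1):
    y.append(rho_true * y[-1] + rng.gauss(0.0, math.sqrt(v_true)))

rho_tilde, v_tilde = backward_ls(y)
rho_hat, v_hat = conditional_ml(y)
```

On a long series the two slope estimates are nearly identical, consistent with the remark below that they differ only by a small amount.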
The estimators $\tilde{\rho}$ and $\hat{\rho}$ differ by only a small amount. Yet their asymptotic biases conditional on $Y_n = y_n$ are quite different. These asymptotic conditional biases are described as follows:
$$E\left( \tilde{\rho} - \rho \mid Y_n = y_n \right) = \frac{-2\rho}{n-1} + \cdots$$
$$E\left( \hat{\rho} - \rho \mid Y_n = y_n \right) = \frac{y_n^2 (1-\rho^2)\rho\,(\sigma^2)^{-1} - 3\rho}{n-1} + \cdots$$
where $\sigma^2 = v$ is the innovation variance. Note that the conditional bias of $\tilde{\rho}$ does not depend on $y_n$, while that of $\hat{\rho}$ does.
INAR Model
Suppose that $\{Y_t\}$ is a discrete-time stationary non-negative INAR(1) process satisfying
$$Y_t = \sum_{i=1}^{Y_{t-1}} V_{ti} + \varepsilon_t, \quad t \geq 1, \qquad (1)$$
where the $V_{ti}$ are i.i.d. random variables following a certain (discrete) distribution and the $\varepsilon_t$ are uncorrelated non-negative integer-valued random variables. The first term on the r.h.s. may be written as $\theta \circ Y_{t-1}$, where $\circ$ is the thinning operator; here $\theta$ is the success probability of the random variables $V_{ti}$.
Consider the INAR(1) process as in (1). We assume that the $V_{ti}$ are Bernoulli random variables with success probability $\theta$, i.e. $P(V_{ti} = 1) = 1 - P(V_{ti} = 0) = \theta$, and that $\varepsilon_t$ follows a Poisson distribution with parameter $(1-\theta)\lambda$. Then $Y_t$ has a Poisson distribution with parameter $\lambda$. The process (1) is known as a Poisson INAR(1) process.
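The Poisson INAR(1) recursion is straightforward to simulate: binomially thin the previous count, then add a Poisson innovation. A minimal sketch (names and parameter values are my own, not from the text):

```python
import math
import random

def simulate_poisson_inar1(n, theta, lam, seed=None):
    """Simulate Y_t = theta o Y_{t-1} + eps_t with
    eps_t ~ Poisson((1 - theta) * lam), so that Y_t ~ Poisson(lam)."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's multiplication method for Poisson sampling (fine for small mu).
        limit, k, prod = math.exp(-mu), 0, rng.random()
        while prod > limit:
            k += 1
            prod *= rng.random()
        return k

    y = [poisson(lam)]  # start from the stationary marginal Poisson(lam)
    for _ in range(n - 1):
        # theta o Y_{t-1}: binomial thinning, Binomial(Y_{t-1}, theta)
        thinned = sum(1 for _ in range(y[-1]) if rng.random() < theta)
        y.append(thinned + poisson((1 - theta) * lam))
    return y

ys = simulate_poisson_inar1(500, theta=0.4, lam=3.0, seed=2)
```

Unlike the Gaussian AR(1), every value stays a non-negative integer, and the sample mean should be close to $\lambda$.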
The fact that the distribution of $\varepsilon_t$ determines the distribution of $Y_t$ shows that $\varepsilon_t$ plays the same role here as the innovation does in the usual stationary Gaussian AR(1) process.
Conditional on $Y_n = y_n$, the probability mass function (pmf) of $Z = Y_{n+1}$ is given by
$$p(z \mid y_n; \theta, \lambda) = P\left( Z = z \mid Y_n = y_n \right)
= \sum_{k=0}^{\min(z, y_n)} \binom{y_n}{k} \theta^k (1-\theta)^{y_n - k}\,
\frac{e^{-(1-\theta)\lambda} \{(1-\theta)\lambda\}^{z-k}}{(z-k)!}$$
for $z = 0, 1, 2, \ldots$.
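This pmf is the convolution of a Binomial$(y_n, \theta)$ term (the survivors of thinning) and a Poisson$((1-\theta)\lambda)$ term (the new arrivals). A direct sketch of the formula, with hypothetical argument values:

```python
from math import comb, exp, factorial

def inar1_conditional_pmf(z, y_n, theta, lam):
    """P(Z = z | Y_n = y_n) for the Poisson INAR(1) process:
    the convolution of Binomial(y_n, theta) and Poisson((1 - theta) * lam)."""
    mu = (1 - theta) * lam
    return sum(
        comb(y_n, k) * theta**k * (1 - theta) ** (y_n - k)
        * exp(-mu) * mu ** (z - k) / factorial(z - k)
        for k in range(min(z, y_n) + 1)
    )

# Sanity check: the probabilities should sum to 1 over z = 0, 1, 2, ...
pmf_vals = [inar1_conditional_pmf(z, y_n=4, theta=0.3, lam=2.5) for z in range(60)]
total = sum(pmf_vals)
```

Truncating the sum at a moderate z is harmless here because the Poisson tail decays faster than geometrically.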
We have used the following Yule-Walker estimators
$$\hat{\theta} = \frac{\sum_{t=1}^{n-1} (Y_t - \bar{Y})(Y_{t+1} - \bar{Y})}{\sum_{t=1}^{n} (Y_t - \bar{Y})^2}
\quad \text{and} \quad
\hat{\lambda} = \frac{1}{n-1} \sum_{t=2}^{n} \left( Y_t - \hat{\theta}\, Y_{t-1} \right)$$
to estimate $\theta$ and $\lambda$, respectively.
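These formulas also translate directly into code. The sketch below applies them to a short hypothetical count series (the data and function name are mine, for illustration only):

```python
def yule_walker_inar1(y):
    """Yule-Walker-type estimators for the INAR(1) parameters,
    following the formulas above (0-based Python indexing)."""
    n = len(y)
    ybar = sum(y) / n
    # Lag-1 sample autocovariance over full-sample variance.
    theta = (sum((y[t] - ybar) * (y[t + 1] - ybar) for t in range(n - 1))
             / sum((y[t] - ybar) ** 2 for t in range(n)))
    # Sample mean of the residuals Y_t - theta-hat * Y_{t-1}.
    lam = sum(y[t] - theta * y[t - 1] for t in range(1, n)) / (n - 1)
    return theta, lam

# A hypothetical small count series, for illustration only.
y = [2, 3, 1, 2, 2, 4, 3, 2, 1, 2, 3, 2, 4, 3, 2]
theta_hat, lam_hat = yule_walker_inar1(y)
```

By the Cauchy-Schwarz inequality the ratio defining $\hat{\theta}$ always lies in $[-1, 1]$, as a lag-1 autocorrelation estimate should.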