Learning objectives
The aim of the course is to provide the student with the ability to understand
and apply the basic rules of decision and estimation theory, and in particular:
- statistical tests for deciding among competing hypotheses
- the structure of the optimal detector in the context of digital transmission
- the most commonly used estimators
- the structure of optimal filters in the context of digital transmission.
The resulting abilities to apply the knowledge listed above are,
in particular:
- to design and analyze the performance of the decision block in receivers for digital transmission
- to design and analyze the performance of the signal-parameter estimation blocks in receivers for digital transmission.
Prerequisites
Entry-level courses in probability theory and Fourier analysis of stochastic processes, such as those normally offered in the corresponding three-year Laurea program, are necessary prerequisites for this course.
Course contents
1. Detection Theory
1.1 Bayes, Minimax, and Neyman-Pearson tests
1.2 Multiple hypothesis testing: MAP and ML tests
1.3 Sufficient statistics
Factorization, Irrelevance, Reversibility theorems
1.4 MAP Test with Gaussian signals. Additive Gaussian noise channel
1.5 Optimal detection of continuous-time signals: discrete representation.
Orthonormal bases and signal coordinates. Gram-Schmidt procedure.
Projection Theorem. Complete bases
1.6 Discrete representation of a stochastic process. Karhunen-Loève (KL) basis
1.7 Optimal MAP receiver in AWGN
1.8 Techniques to evaluate error probability
1.9 Composite hypothesis testing: partially known signals in AWGN.
Optimal incoherent MAP receiver structure
1.10 Detection in additive colored Gaussian noise: whitening, Cholesky decomposition
1.11 Detection with stochastic Gaussian signals: Radiometer
2. Estimation theory
2.1 Fisherian estimation
2.1.1 Minimum Variance Unbiased Estimation
2.1.2 Cramér-Rao Lower Bound
2.1.3 Maximum Likelihood estimation
2.2 Bayesian estimation
2.2.1 Minimum Mean Square Error estimation
2.2.2 MAP estimation
2.2.3 Linear MMSE estimation
2.2.4 Spectral Factorization and Wiener Filtering
Extended program
Syllabus (each class = 2 hours)
CLASS 1:
First hour: course organization, objectives, textbooks, exam details. Sneak preview of the course, motivations, applications. Second hour: basic probability theory refresher: total probability, Bayes rule in discrete/continuous/mixed versions, double conditioning. A first elementary exercise on binary hypothesis testing.
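For reference, the binary MAP decision rule that this refresher leads up to can be written as a likelihood-ratio test (standard form, notation mine):

\[
\hat{H} = \arg\max_{i \in \{0,1\}} P(H_i \mid y)
\quad\Longleftrightarrow\quad
\Lambda(y) = \frac{f(y \mid H_1)}{f(y \mid H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\;
\frac{P(H_0)}{P(H_1)} .
\]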
CLASS 2:
First hour: completion of the proposed exercise. Second hour: Bayes tests.
CLASS 3:
First hour: exercise on the Bayes test (Laplacian distributions). Second hour: Minimax test.
CLASS 4:
First hour: exercise on Minimax. Second hour: Neyman-Pearson test, with example.
CLASS 5:
First hour: ROC properties. NP test with discrete RVs: randomization. Second hour: exercise on Bayes, Minimax, Neyman-Pearson tests.
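A toy ROC illustration (my own example, not an exercise from the course): for H1: y ~ N(1,1) vs H0: y ~ N(0,1), the likelihood-ratio test reduces to thresholding y itself, so the ROC is traced by Pfa = Q(t) and Pd = Q(t - 1) as the threshold t sweeps the real line (Q = Gaussian tail function).

```python
import numpy as np
from scipy.stats import norm

# Sweep the threshold and print (Pfa, Pd) points on the ROC.
for t in np.linspace(-1.0, 2.0, 7):
    pfa, pd = norm.sf(t), norm.sf(t - 1.0)   # Q(t) under H0, Q(t-1) under H1
    print(f"t={t:5.2f}  Pfa={pfa:.3f}  Pd={pd:.3f}")
```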
CLASS 6:
First hour: multiple hypothesis testing, Bayesian approach. MAP and ML tests. Decision regions, boundaries among regions: examples in R^1 and R^2. Second hour: exercise: 3 equally likely signal "hypotheses" -A, 0, A in AWGN: Bayes rule (ML) based on the sample mean (sufficient statistic).
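A minimal Monte Carlo sketch of this exercise, under assumptions of my own (A = 1, n = 10 samples, unit-variance noise): with equal priors, the ML rule picks the mean nearest to the sample mean, i.e., it compares the sample mean against the thresholds -A/2 and A/2.

```python
import numpy as np

rng = np.random.default_rng(0)
A, n, trials = 1.0, 10, 100_000
means = rng.choice([-A, 0.0, A], size=trials)          # true hypothesis per trial
y_bar = means + rng.normal(0, 1, (n, trials)).mean(0)  # sample mean (sufficient statistic)
decisions = np.where(y_bar < -A / 2, -A, np.where(y_bar > A / 2, A, 0.0))
print("symbol error rate:", np.mean(decisions != means))
```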
CLASS 7:
First hour: Minimax in multiple hypotheses. Sufficient statistics: introduction. Second hour: Factorization theorem, irrelevance theorem. Reversibility theorem. Gaussian vectors refresher: joint PDF, MGF/CF.
CLASS 8:
First hour: summary of the main known results on Gaussian random vectors: Gaussian MGF, 4th-order statistics from the moment theorem, MGF-based proof of the Gaussianity of linear transformations. Examples of Gaussian vectors: fading channel. Second hour: A: MAP test with Gaussian signals. B: Additive Gaussian noise channel. Decision regions are hyperplanes.
CLASS 9:
First hour: examples of decision regions. Optimal detection of continuous-time signals: motivation for their discrete representation. Second hour: Discrete signal representation: definitions. Inner product, norm, distance, linear independence. Orthonormal bases and signal coordinates.
CLASS 10:
Gram-Schmidt orthonormalization. Detailed example. Operations on signals, and dual operations on signal images.
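A compact numerical sketch of the procedure (toy signals of my own choosing, assumed linearly independent):

```python
import numpy as np

def gram_schmidt(signals):
    """Return an orthonormal basis for the rows of `signals` (classical form)."""
    basis = []
    for s in signals:
        # subtract projections onto the basis vectors found so far
        v = s - sum(np.dot(s, b) * b for b in basis)
        norm = np.linalg.norm(v)
        if norm > 1e-12:                   # skip linearly dependent signals
            basis.append(v / norm)
    return np.array(basis)

S = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
B = gram_schmidt(S)
print(np.round(B @ B.T, 6))                # identity matrix => orthonormal basis
```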
CLASS 11:
Unitary matrices in change of basis. Orthogonal matrices: rotations and reflections. Orthogonality principle. Projection theorem. Interpretation of the Gram-Schmidt procedure as repeated projections. Complete ON bases: motivations and definition.
CLASS 12:
First hour: exercises: 1) the product of unitary matrices is unitary; 2) a unitary matrix preserves the norm of a vector. Projection matrices, eigenvectors, eigenvalues, spectral decomposition. Properties. Second hour: examples of complete bases in L2: the space of band-limited functions, evaluation of series coefficients, sampling theorem, ON check. More examples of complete bases: Legendre, Hermite, Laguerre.
CLASS 13:
Discrete representation of a stochastic process. Mean and covariance of the process coefficients. Properties of covariance matrices of finite random vectors: Hermitian symmetry and related properties. Whitening. Karhunen-Loève (KL) theorem for whitening of the discrete process representation (proof sketch). Statement of Mercer's theorem. KL bases.
CLASS 14:
Summary of useful matrices: normal matrices and their subclasses: unitary, Hermitian, skew-Hermitian. If the noise process is white, any complete ON basis is a KL basis. Digital modulation. Example: QPSK. Digital demodulation with a correlator bank or a matched-filter bank.
CLASS 15:
First hour: matched-filter properties. Maximum SNR, physical reason for the peak at T. Second hour: back to M-ary hypothesis testing with continuous-time signals: receiver structure. With white noise, irrelevance of the noise components outside the signal basis. Optimal MAP receiver in AWGN. Basis detector. Signal detector.
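A small numerical check of the max-SNR property (pulse of my own choosing, not from the notes): among unit-energy filters, the one matched to the pulse maximizes the output sample at t = T, where it equals the pulse norm.

```python
import numpy as np

s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])         # known pulse
h_matched = s[::-1] / np.linalg.norm(s)          # matched filter, unit energy
h_other = np.ones(5) / np.sqrt(5)                # some other unit-energy filter
peak = lambda h: np.convolve(s, h)[len(s) - 1]   # filter output sampled at t = T
print("matched peak   :", peak(h_matched))       # = ||s||, the maximum
print("mismatched peak:", peak(h_other))         # strictly smaller
```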
CLASS 16:
Examples of MAP RX and evaluation of symbol error probability Pe. First hour: MAP RX for QPSK signals and its Pe. Second hour: MAP RX for generic binary signals, basis detector, reduced complexity signal detector. Evaluation of Pe.
CLASS 17:
First hour: techniques to evaluate Pe: rotational invariance in AWGN and signal-image shifts. Center of gravity for minimum energy. Second hour: Pe evaluation for binary signaling. Comparisons between antipodal and orthogonal signals. Calculation of Pe for 16-QAM (begin).
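For reference, the standard closed forms that this comparison arrives at, in the usual E_b/N_0 normalization, are

\[
P_e^{\text{antipodal}} = Q\!\left(\sqrt{\tfrac{2E_b}{N_0}}\right),
\qquad
P_e^{\text{orthogonal}} = Q\!\left(\sqrt{\tfrac{E_b}{N_0}}\right),
\qquad
Q(x) = \int_x^{\infty}\frac{e^{-u^2/2}}{\sqrt{2\pi}}\,du ,
\]

so antipodal signaling enjoys a 3 dB energy advantage over orthogonal signaling.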
CLASS 18:
First hour: Calculation of Pe for 16-QAM (end). Second hour: Calculation of Pe for M-ary orthogonal signaling. Begin calculation of Bit error rate (BER).
CLASS 19:
Completion of BER evaluation in M-ary orthogonal signaling. Example: M-FSK. Occupied bandwidth. Limit as M->infinity and connection with Shannon channel capacity. Notes on Simplex constellation. BER evaluation for QPSK: natural vs. Gray mapping.
CLASS 20:
Further notes on Gray mapping. Approximate BER calculation: union upper bound, minimum-distance bound, nearest-neighbor bound. Lower bounds. Example: M-PSK. Review of the Cartesian (X,Y) to polar (R,Q) probability transformation. For zero-mean normal (X,Y), (R,Q) are independent with Rayleigh and uniform marginals.
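As a reminder, the union upper bound on the conditional symbol error probability in AWGN with per-dimension noise variance \(\sigma^2 = N_0/2\) reads (standard result, notation mine):

\[
P(e \mid s_i) \;\le\; \sum_{j \neq i} Q\!\left(\frac{d_{ij}}{2\sigma}\right),
\qquad d_{ij} = \lVert s_i - s_j \rVert .
\]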
CLASS 21:
For non-zero-mean normal (X,Y), (R,Q) are dependent, with Rice and Bennet marginals. Properties of Rayleigh, Rice, Bennet PDFs. Use of Bennet PDF in the exact evaluation of Pe in M-PSK.
Composite hypothesis testing: introduction. Bayesian approach: Example of partially known signals in AWGN.
CLASS 22:
Partially known signals in AWGN: Bayesian MAP decision rule. Application to incoherent reception of passband signals. Optimal incoherent MAP receiver structure.
CLASS 23:
An alternative, more compact derivation of the incoherent MAP receiver for passband signals using complex envelopes. Incoherent OOK receiver and its BER evaluation.
CLASS 24:
Detection in additive colored Gaussian noise. Karhunen-Loève formulation. Hints about the analog whitening filter. Reversibility theorem and whitening of the discretized signal sample. Example 1: whitening by a unitary transformation that aligns the orthonormal eigenvectors of the noise covariance matrix with the canonical basis. Example 2: Cholesky decomposition of the covariance matrix and noise whitening. Example of calculation of a Cholesky decomposition.
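A minimal numerical sketch of Example 2, with a toy covariance of my own choosing: if K = L L^T is the Cholesky factorization, then applying L^{-1} to the colored noise makes its covariance the identity.

```python
import numpy as np

K = np.array([[2.0, 0.5], [0.5, 1.0]])   # colored-noise covariance
L = np.linalg.cholesky(K)                # lower-triangular factor, K = L @ L.T
rng = np.random.default_rng(2)
n = rng.multivariate_normal([0, 0], K, size=100_000).T  # colored noise samples
w = np.linalg.solve(L, n)                # whitened noise: w = L^{-1} n
print(np.round(np.cov(w), 2))            # ~ identity matrix
```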
CLASS 25:
Exercise: whitening and Pe evaluation for sampled signals in colored Gaussian noise.
Detection with stochastic signals: the case of Gaussian signals. Binary hypothesis testing: Radiometer. BER evaluation.
CLASS 26:
Estimation theory: introduction. Classical (Fisherian) estimation. MSE cost. The bias-variance tradeoff. Example and motivation for unbiased estimators.
CLASS 27:
Asymptotically unbiased and consistent estimators. MVUE. Cramér-Rao Lower Bound: motivation, theorem statement, example: signals in AWGN (both discrete- and continuous-time). Amplitude estimation.
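In its scalar form the bound reads (standard statement, notation mine): for any unbiased estimator of \(\theta\) from data \(\mathbf{y}\),

\[
\operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2 \ln f(\mathbf{y};\theta)}{\partial \theta^2}\right],
\]

where \(I(\theta)\) is the Fisher information.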
CLASS 28:
Phase estimation. Proof of CRLB. Extension of CRLB to vector parameters: theorem statement and examples. ML estimation, introduction. If an efficient estimator exists, it is ML.
CLASS 29:
ML: asymptotic properties and invariance. Examples: 1) Gaussian observations with unknown (constant) mean and variance. 2) Linear Gaussian model and comparison with least-squares solution. 3) Phase estimation of passband signals (begin)
CLASS 30:
ML: Phase estimation of passband signals (end). Bayesian Estimation: 1) MMSE estimator and minimum error. Orthogonality principle. Unbiasedness. Note on regression curve. Gaussian example. Exercise: both observations and parameter are negative exponentials.
CLASS 31:
Bayesian estimation: MAP estimator. Example. ML criterion as a particular MAP case. Example: linear Gaussian model (homework, with solution). Extension to vector parameters. Gaussian multivariate regression. MMSE linear Bayesian estimates. Optimal filter coefficients through the orthogonality principle. Yule-Walker equations. LMMSE optimal estimator and minimal MSE.
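A small numerical sketch of the LMMSE normal equations, under statistics made up for illustration (two noisy looks at a unit-variance parameter): the optimal weights solve R h = r, obtained from the orthogonality principle.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)                               # parameter, var = 1
y = np.vstack([x + rng.normal(scale=1.0, size=x.size),     # look 1, noise var 1
               x + rng.normal(scale=2.0, size=x.size)])    # look 2, noise var 4
R = y @ y.T / x.size                  # observation correlation matrix
r = y @ x / x.size                    # cross-correlation with the parameter
h = np.linalg.solve(R, r)             # optimal LMMSE coefficients
mmse = 1.0 - h @ r                    # minimum MSE = var(x) - h^T r
print("weights:", np.round(h, 3), " MMSE:", round(mmse, 3))   # ~ [4/9, 1/9], 4/9
```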
CLASS 32:
Review of optimal scalar LMMSE estimator and minimum MSE. Extension to vector estimator. Wiener Filter: problem statement, objectives. A) Smoothing, optimal non-causal filter, MMSE error, case of additive noise channel. Alternative evaluation of MMSE with error filter.
CLASS 33:
B) Causal Wiener filter: problem setting in two steps: whitening and innovations estimation. Whitening: 1) review of the two-sided Z-transform and its ROC; 2) review: Z-transform of the PSD of the output of a linear system; 3) statement of the Spectral Factorization (SF) theorem.
CLASS 34:
SF theorem: key steps of the proof. Calculation of the innovations filter L(z) for real processes through the SF. Classification of regular processes with L(z) a rational function. AR, MA, ARMA processes. Example: AR(1).
CLASS 35:
Causal Wiener filter, formula in z. Example. r-step predictor: form of the filter in z. Error formula.
CLASS 36:
Predictor example: prediction of AR(p) processes. r-step filtering and prediction: formula in z. General error formula for additive noise channels. Example.
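A minimal prediction sketch for the AR(1) special case (coefficient a = 0.8 assumed for illustration): the optimal one-step predictor is simply a*x[n], and the prediction error is the innovations sequence, whose variance is the minimum MSE.

```python
import numpy as np

rng = np.random.default_rng(4)
a, N = 0.8, 200_000
w = rng.normal(size=N)                # unit-variance driving white noise
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]        # AR(1) recursion
err = x[1:] - a * x[:-1]              # one-step prediction error = innovations
print("empirical MSE:", err.var())    # ~ var(w) = 1, the theoretical minimum
```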
Bibliography
Part I: Detection
[1] J. Cioffi, "Signal Processing and Detection," Ch. 1, http://www.stanford.edu/~cioffi
[2] B. Rimoldi, "Principles of Digital Communications," EPFL, Lausanne, Ch. 1-4.
[3] A. Lapidoth, "A Foundation in Digital Communication," ETH, Zurich.
[4] R. Raheli, G. Colavolpe, "Trasmissione numerica," Monte Università Parma Ed., Ch. 1-5. In Italian.
Part II: Estimation
[5] S. M. Kay, "Fundamentals of Statistical Signal Processing," Vol. I (Estimation), Prentice-Hall, 1998.
Teaching methods
Lectures and exercise sessions, for a total of 72 hours.
All classes will be held online on Teams in the Virtual Classroom: https://teams.microsoft.com/l/team/19%3a981ae34d5ba643149c86a018c7e252ad%40thread.tacv2/conversations?groupId=f221d7cd-ffa8-4fbb-800b-8e9930b93183&tenantId=bb064bc5-b7a8-41ec-babe-d7beb3faeb1c
The Team name of the course is "DETECTION AND ESTIMATION-DIA-LM-1-COMMUNICATION ENGINEERING". The Team registration code is 5nxymwq (for those accessing with a guest.unipr.it email).
Classes will be held in the virtual classroom on:
Wednesday 10:00-12:00; Thursday 11:00-13:00; Friday 11:00-13:00
Teaching will work as follows. BEFORE every class, it is YOUR RESPONSIBILITY to watch on your own the corresponding videolecture (http://www.tlc.unipr.it/bononi/didattica/video/videoclasses_Detection_and_Estimation/2018/) and use the corresponding class notes (http://www.tlc.unipr.it/bononi/didattica/TSD/class_notes/).
The ID and password to access the videos/slides will be communicated to you in class at the first lecture.
At the scheduled Teams video meeting the instructor will go over the main concepts of the videolecture, provide extra examples, and answer the questions you will hopefully have collected, along with your doubts and curiosities.
Assessment methods
Exams:
Oral only, to be scheduled on an individual basis. When ready, please contact the instructor by email at alberto.bononi[AT]unipr.it, specifying the requested date.
The exam consists of solving some proposed exercises and explaining theoretical details connected with them, for a total time of about 1 hour.
The exam may be split into two distinct parts and scheduled on different days at the student's request: Part 1 Detection; Part 2 Estimation.
At the exam, you can bring your own summary of important formulas on an A4 sheet to consult if you so wish. Some sample exercises can be found on the course website.
COVID emergency: The exam will take place online on Teams.
ONLINE EXAM RULES
We will be in video connection on Microsoft Teams. Once we agree on a time/date, you will get an invitation: to connect, click on the link at the bottom of the invitation email on the day of the exam.
At the beginning, you are requested to show the room where you are taking the exam with a 360-degree pan. You must be alone in the room, with only paper and pen and your allowed A4 summary sheet.
How the exam proceeds depends on whether you have just pen and paper (and a cell phone), or an electronic device you can write on by hand (e.g., a tablet).
CASE A) paper and pen:
I will show you the text of the exam on my shared screen; you can copy it onto your paper sheet.
You are requested to look into the camera when you reply to me, or at your paper sheet when you write.
You are requested to tilt your PC camera so that the sheet of paper on which you write is clearly visible and readable by me. This will allow me to follow your work and guide you towards the solution.
Sometimes you will be asked to take pictures of your work and send them by email or through the Teams chat. Please make sure to reduce the resolution of your camera so that the pictures are more compressed, or download the Adobe Scan app to take compressed pictures, so that you can transmit them easily even over a low-bandwidth connection.
CASE B) you have a tablet and can write on it:
In that case I will send you a link to an electronic whiteboard (I use https://whiteboardfox.com/), which we will use as a shared paper sheet to work out your exam, exactly as we used to do in my office.
IMPORTANT NOTICE: getting help from others on the internet in solving your problem is considered cheating and may lead to your withdrawal from the exam and to possible further sanctions.
Other information
1) Office hours
Monday 15:00-17:00 (Scientific Complex, Building 2, Floor 2, Room 2/19T)
COVID emergency: meet me in the "LMCE 20-21" Team Virtual Classroom (https://teams.microsoft.com/l/meetup-join/19%3a23c74fa4d0d24f0fb36e5c0bf836a3ab%40thread.tacv2/1598392101920?context=%7b%22Tid%22%3a%22bb064bc5-b7a8-41ec-babe-d7beb3faeb1c%22%2c%22Oid%22%3a%22bad45013-46b7-43f0-8e0f-9e227ab845a8%22%7d), the same one where Lecture 0 and the prep-course reviews took place. Please send an email so that we can schedule an appointment.
2) Course website:
www.tlc.unipr.it/bononi/didattica/TSD/TSD.html
To get userid and password, please send an email to alberto.bononi[AT]unipr.it from your account nome@studenti.unipr.it.
Agenda 2030 goals for sustainable development
- - -