Distinguished Lecturer and Tutorial Program
All AESS Chapters and IEEE Sections are encouraged to take advantage of the AESS Distinguished Lecturer and Tutorial Program for their regular or special meetings. The program allows them to select from an outstanding list of truly distinguished speakers who are experts in the technical fields of the Society. The AES Society will pay reasonable speaker's expenses for economy-class travel, lodging, and meals, with the inviting IEEE organization expected to cover 50% of the speaker's expenses. As a general guideline, speaker's expenses involving travel wholly within North America or within the European Union can be approved for coverage up to $1,000 USD; expenses involving extensive international travel can be approved up to $2,000 USD. The Society encourages arrangements whereby more than one lecture is presented in a single trip, and costs in such situations will be considered on a case-by-case basis.
Non-IEEE entities (such as universities, research organizations, and companies) are also eligible to contact speakers directly. If a speaker agrees to give a non-IEEE lecture on a particular date and at a particular location, the inviting organization is required to pay 100% of the speaker's expenses as mutually agreed between the speaker and the organization. While the AESS has no responsibility for any arrangements or costs regarding such a lecture, speakers should keep the VP for Education advised of their Distinguished Lectures.
The procedure for obtaining a speaker is as follows. If a Chapter or Section is interested in inviting one of the speakers, it should first contact the speaker directly to obtain his or her agreement to give the lecture on a particular date. After this is accomplished, the Chapter or Section must notify the AESS VP for Education by sending in a DL Request Form. If financial support from the AESS is required for the speaker's expenses, the speaker must submit an estimate to the AESS VP for Education before actually incurring any expenses. This estimate must be provided at least 45 days before the planned meeting to allow time for feedback from the VP for Education and for changes if needed. The VP for Education must provide written authorization to proceed.
Distinguished Lecturers and Tutorial speakers are ambassadors of the AESS who serve as an important demonstration of the value of membership in IEEE, and in AESS in particular. A short presentation on the benefits of Society membership is available and is included in each Distinguished Lecture presentation. Speakers should contact Judy Scharmann well in advance of each lecture to arrange for shipping of AESS and IEEE membership brochures and copies of Society publications to hand out.
Following the lecture, the speaker and/or host are asked to prepare a short report suitable for publication and posting on the AESS web site. Pictures taken at the meeting are highly desirable.
Target Tracking and Data Fusion: How to Get the Most Out of Your Sensors
This talk describes the evolution of the technology for tracking objects of interest (targets) in a cluttered environment using remote sensors. Approaches for handling target maneuvers (unpredictable motion) and false measurements (clutter) are discussed. Advanced ("intelligent") techniques with moderate complexity are described. The emphasis is on algorithms that model the environment and the scenarios of interest in a realistic manner and have the ability to track low observable (LO) targets. The various architectures of information processing for multi-sensor data fusion are discussed. Applications are presented from Air Traffic Control (data fusion from 5 FAA radars covering 800 targets) and from underwater surveillance of an LO target.
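As a taste of the clutter problem the talk addresses, the gating step common to most trackers can be sketched in a few lines: a measurement is accepted only if its normalized innovation falls inside a chi-square gate, and nearest-neighbor association then picks the closest survivor. The scalar setup and all numbers below are illustrative, not taken from the lecture.

```python
def gate_and_select(z_pred, s_innov, measurements, gamma=9.0):
    """Chi-square gating plus nearest-neighbor association (scalar case).

    z_pred: predicted measurement, s_innov: innovation variance,
    gamma: gate threshold (9 is roughly a 3-sigma gate in one dimension).
    """
    in_gate = [z for z in measurements if (z - z_pred) ** 2 / s_innov <= gamma]
    if not in_gate:
        return None  # coast the track: everything looked like clutter
    return min(in_gate, key=lambda z: abs(z - z_pred))

# One plausible return near the prediction, plus clutter points far away.
chosen = gate_and_select(z_pred=10.0, s_innov=1.0,
                         measurements=[9.5, 25.0, 10.8, -3.0])
```

The advanced ("intelligent") techniques in the lecture, such as probabilistic data association, replace the hard nearest-neighbor choice with a weighted combination of the in-gate measurements.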
Overview of High-Level Information Fusion Theory, Models, and Representations
The High-Level Information Fusion (HLIF) lecture describes developments over the past decade in concepts, papers, needs, and grand challenges for practical system designs. The lecture brings together contemporary concepts, models, and definitions to give the attendee a summary of the state of the art in HLIF research (e.g., situation awareness and interface design between man-machine information fusion systems). Analogies from low-level information fusion (LLIF), object tracking and identification, are extended to the HLIF concepts of situation/impact assessment and process/user refinement. HLIF theories (operational, functional, formal, cognitive) are mapped to representations (semantics, ontologies, axiomatics, and agents), along with contemporary issues of modeling, testbeds, evaluation, and human-machine interfaces. Discussions with examples from search and rescue, cyber analysis, and surveillance are presented. The attendee will gain an appreciation of HLIF through a topic organization drawing on the perspectives of numerous authors, practitioners, and developers of information fusion systems. The lecture is organized as per the recent text (E. P. Blasch, E. Bosse, and D. A. Lambert, Information Fusion Theory and Representations, Artech House, April 2012) into (1) HLIF theories, (2) HLIF representations in information fusion testbeds, and (3) HLIF supporting elements of human-system interaction, scenario-based design, and HLIF evaluation.
Phased-Arrays and Radar: Advances, Breakthroughs and Future Trends
ABSTRACT:
• SYSTEMS: Patriot now has a GaN AESA providing 360° coverage; the S-band AMDR provides 30 times the sensitivity and number of tracks of SPY-1D(V); 3-, 4-, and 6-face "Aegis" systems have been developed by China, Japan, Australia, the Netherlands, and the USA.
• LOW-COST PACKAGING: Raytheon, MIT Lincoln Laboratory/MACOM, Rockwell Collins, and South Korea are developing low-cost S- and X-band flat-panel arrays using COTS PCBs and commercial packaging.
• EXTREME MMIC: Whole 64- and 256-element T/R phased arrays on a single chip at 60 GHz; the RF circuitry of a car radar put on one chip; these chips are expected to cost only a few dollars in the future.
• DIGITAL BEAM FORMING (DBF): Israeli, Thales, Lockheed Martin (LM), and Australian AESAs have an A/D for every receive channel (LM has 172K A/Ds for its Space Fence radar); Raytheon is developing an element-level, mixer-less, direct-RF A/D having >400 MHz instantaneous bandwidth, reconfigurable between S- and X-band; Lincoln Laboratory increases the spurious-free dynamic range of the receiver plus A/D by 40 dB.
• MOORE'S LAW: Slowing down, but expect an increase in chip density of ~50x in the next 30 years and a reduction in power consumption by a factor of ~75 per transistor. Potential continuations of Moore's Law: (1) spintronics, which could revolutionize computer architecture away from the von Neumann model; (2) the memristor, which potentially allows one to do what a mouse's brain does in a shoebox instead of a computer the size of a small city requiring several nuclear power plants; (3) graphene, which has the potential for THz-clock-speed transistors; and/or (4) quantum computing, which has the potential of orders-of-magnitude advances in computational power every two years.
• METAMATERIALS: 2-D electronically steered antennas at 20 and 30 GHz with a cost goal of only $1K, explained in simple terms, to be used in the worldwide-coverage satellite internet systems of the future; companies like Google and Qualcomm are working on it. Echodyne and PARC (a Xerox company) have developed radar electronic scanning arrays. Stealthing by absorption: simulation shows 10 dB of absorption over the 2-20 GHz band using a <1 mm thick fractal coating, and 20 dB over 10-15 GHz, with good results for all incidence angles and polarizations. Stealthing by cloaking: a target made invisible over a 50% bandwidth at L-band using fractals. An Army 250-505 MHz conformal antenna with a λ/20 thickness. Focusing beyond the diffraction limit: 6x at 0.38 μm. Isolation between antennas having 2.5 cm separation equivalent to a 1 m separation, with potential use for phased-array WAIM. A negative index of refraction for n-doped graphene, the first such material found in nature.
• VERY LOW COST SYSTEMS: Valeo Raytheon (now Valeo Radar) developed a low-cost ($100s) 25 GHz seven-beam phased-array car radar; about 2 million have been sold already, more than all the radars ever built up to a very few years ago.
• WIDEBAND LOW-PROFILE ANTENNA: The tightly coupled dipole antenna (TCDA) provides a 20:1 bandwidth with a λ/40 thickness at the lowest frequency.
• MIMO (MULTIPLE INPUT MULTIPLE OUTPUT): Explained in simple physical terms instead of with heavy math. It is claimed that MIMO array radars can provide 1, 2, or 3 orders of magnitude better resolution and accuracy than conventional array radars; it is shown how conventional arrays can do as well. MIMO does not let us use fewer elements than a conventional array; it does not provide better barrage-noise-jammer, repeater-jammer, or hot-clutter rejection than conventional array radars; it should not provide better GMTI than a conventional radar. Where it makes sense to use MIMO is discussed.
• LOW-COST PRINTED ELECTRONICS: 1.6 GHz printed diodes achieved (goal 2.4 GHz).
• ELECTRICAL AND OPTICAL SIGNALS ON THE SAME CHIP: Will allow data transfer at the speed of light; silicon is transparent at IR.
• BIODEGRADABLE ARRAYS OF TRANSISTORS OR LEDs: Embedded under the skin for detecting cancer or low glucose.
• QUANTUM RADAR: Has the potential to defeat stealth targets.
• NEW POLARIZATIONS: OAM (orbital angular momentum): unlimited data rate over a finite band using new polarizations?
Around the World In 60 Minutes – Exotic Places with a Radar Twist
MIMO Demystified and Its Conventional Equivalents
This talk is given in tutorial form, using a simple explanation that starts from the basics of phased arrays and how they work. Physical insight into MIMO is then given; no heavy math is used. It has been shown in the literature that MIMO thin/full array radars can provide orders of magnitude better resolution and accuracy than conventional radars. The thin/full array consists of two collocated parallel linear arrays of N elements each: a thin array with spacing Nλ/2 used for transmit, and a full array with spacing λ/2 used for receive. We show how to use the same thin/full arrays in conventional radars to do as well, and without grating lobes (GLs). Also covered are full/thin array equivalents, in which the full array is used for transmit and the thin array for receive. Detailed are the monostatic MIMO full-array radar, its Skolnik ubiquitous equivalent, and a machine-gunning equivalent. The performance of these systems against jammers is summarized. A simple physical explanation is given of why the Cramér-Rao bound provides a √2 better angle accuracy for the MIMO monostatic full array than for its conventional ubiquitous-array equivalent, and why this result does not apply to the machine-gunning equivalent. It has also been shown in the literature that a MIMO thin/full array airborne GMTI radar can provide a better minimum detectable velocity (MDV) than a conventional one. We show how the same thin/full array can be used in a conventional GMTI radar system to provide the same advantages as the MIMO system with respect to coherent dwell time and aperture size, and thus should provide the same MDV. The operation, waveforms, detection sensitivities, use of maximum-likelihood estimation (MLE) for angle estimation and detection, and resolutions of these systems are detailed. We show that the signal-processing load for the MIMO radar system can typically be much larger than for its conventional equivalents. There is also usually a more difficult waveform-design problem with MIMO. Also covered are practical issues such as the effects of the different mutual coupling between the elements of the thin and full arrays, and how to deal with them.
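The thin/full geometry is easy to verify numerically: the transmit/receive combination of an N-element thin array (spacing Nλ/2) with an N-element full array (spacing λ/2) yields N² two-way phase centers forming a uniform λ/2 virtual aperture, which is where the claimed resolution comes from. A minimal sketch with an illustrative N = 4 (positions in wavelengths):

```python
N, lam = 4, 1.0
tx = [n * N * lam / 2 for n in range(N)]  # thin transmit array, spacing N*lam/2
rx = [n * lam / 2 for n in range(N)]      # full receive array, spacing lam/2

# Each transmit/receive pair contributes a two-way phase center at tx + rx.
virtual = sorted(t + r for t in tx for r in rx)

# Check uniformity of the resulting virtual aperture.
gaps = [b - a for a, b in zip(virtual, virtual[1:])]
```

For N = 4 this gives 16 distinct phase centers at λ/2 spacing spanning 7.5λ; the conventional-equivalent schemes in the talk exploit the same 16 effective phase centers.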
It has been claimed that MIMO radars perform better than conventional radars against repeater and hot-clutter jammers (jammer signals reflected from the ground into the radar). It is shown here that conventional radars can perform as well as, if not better than, MIMO radars against these jammers, as well as against barrage-noise jammers. The results are again presented in tutorial form without heavy math; instead, physical explanations are given. Applied here to reject the barrage jammer and hot clutter is the Adaptive-Adaptive Array Processing (AAAP) technique, which makes use of the information available as to where the jammers are, rather than assuming their location is unknown as is done in the classical sample matrix inversion (SMI) method. This is reminiscent of the KA-STAP technique used by DARPA. The method reduces the transient time (the number of time samples needed to calculate the interference covariance matrix) by orders of magnitude. The size of the interference covariance matrix, and in turn the computation of its inverse, is also reduced by orders of magnitude. Finally, this method reduces the sidelobe degradation that usually results from the SMI method. The AAAP technique lends itself well to both MIMO and conventional array systems when digital beamforming is used.
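The core idea of exploiting knowledge of where a jammer is, instead of estimating a full N x N covariance matrix, can be caricatured with a single auxiliary beam steered at the known jammer direction and subtracted so that the jammer response cancels. This deterministic two-beam sketch (illustrative 8-element half-wavelength array, jammer at 25 degrees) is only a simplified illustration of the principle, not the full AAAP algorithm:

```python
import cmath
import math

def steer(n_elem, theta):
    """Array response of a half-wavelength-spaced ULA for angle theta."""
    return [cmath.exp(1j * math.pi * n * math.sin(theta)) for n in range(n_elem)]

def response(weights, direction):
    """Beam output for a unit plane wave arriving from `direction`."""
    return sum(w * d for w, d in zip(weights, direction))

N = 8
a_jam = steer(N, math.radians(25.0))              # known jammer direction
w_main = [x.conjugate() for x in steer(N, 0.0)]   # quiescent beam at broadside
w_aux = [x.conjugate() for x in a_jam]            # auxiliary beam at the jammer

# One complex coefficient suffices to null the jammer in the adapted beam.
c = response(w_main, a_jam) / response(w_aux, a_jam)
w_adapted = [m - c * a for m, a in zip(w_main, w_aux)]

jam_out = abs(response(w_adapted, a_jam))          # ~0: jammer cancelled
main_out = abs(response(w_adapted, steer(N, 0.0))) # main-beam gain nearly intact
```

With J known jammers, the adaptation runs over roughly J + 1 beams instead of N elements, which is the covariance-size and transient-time reduction described above.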
Finally, we present potential practical applications of the MIMO radar concept: (1) for conventional radars, combining to get better power-aperture for search and power-aperture-gain for track (respectively 6 dB and 9 dB for two radars); (2) for OTH radars; and (3) for automobile radars.
Metamaterial Advances for Radar and Communications
Metamaterials have gained much interest in recent years because they offer the potential to provide phased arrays and antennas having better performance at lower cost. Metamaterials are man-made materials in which an array of structures smaller than a wavelength is embedded. These materials have properties not found in nature, such as a negative index of refraction. Much progress has been made using metamaterials. Kymeta demonstrated transmission to satellites and back using 20 and 30 GHz antennas that use metamaterial resonators in a very novel way to realize phase steering. Echodyne and Xerox's PARC have developed metamaterial arrays for radar. The Army Research Laboratory funded the development of a metamaterial 250 to 505 MHz low-profile antenna with a λ/20 thickness to replace the very tall whip antennas on HMMWVs, thus providing greater survivability. Complementing this, a conventional tightly coupled dipole antenna (TCDA) has been developed which provides a 20:1 bandwidth with a λ/40 thickness. Target cloaking (invisibility) has been demonstrated at microwave frequencies over a narrow bandwidth using metamaterials, and over a 50% bandwidth at L-band using fractals. Stealthing by absorption using a thin, flexible, and stretchable metamaterial sheet has been shown to provide 6 dB of absorption over an 8 to 10 GHz band, with greater absorption over a narrower band. Using fractal sheets less than 1 mm thick, simulation has shown 20 dB of absorption over the 10-15 GHz band and 10 dB over 2-20 GHz, with good absorption for all incidence angles and polarizations. Metamaterial has been used in cell phones to provide antennas that are 5x smaller (1/10th λ) with a 700 MHz to 2.7 GHz bandwidth. It has also provided isolation between antennas with 2.5 cm separation equivalent to a 1 m separation, and it has potential for use in phased arrays for wide-angle impedance matching (WAIM). Using metamaterials one can focus 6x beyond the diffraction limit at 0.38 μm (Moore's Law marches on), and 40x beyond the diffraction limit (λ/80) at 375 MHz.
National Missile Defense
The Bush Administration made major changes to the National Missile Defense (NMD) system that had been developed earlier by the Clinton Administration and established a limited system in Alaska to counter threats from North Korea. But even with the new emphasis on anti-terrorism and closer relations with Russia, NMD remained a very controversial topic, as seen with the U.S. proposal to install parts of the missile defense system in Europe for protection against Iran. The European proposal had negative impacts on U.S.-Russia relations during the later years of the Bush Administration. The Obama Administration is trying to mend relations with Russia by taking a new look at the system proposed for Europe.
The NMD program will continue to be a key technical, political, and legislative issue facing the U.S. and the rest of the world. The Bush Administration focused more on testing and developing new equipment for the NMD system and also investigated a wider variety of sensors (such as space-based and sea-based systems) to detect and track incoming missiles. The upgrade to the existing Early Warning Radars was one of the few features that did not change from the Clinton plan. The Obama Administration is still finalizing its approach to NMD.
This talk will provide background information on the political issues facing NMD. It will also provide technical information on some of the major systems including upgrades to the Early Warning Radars. The talk will also provide system engineering details on the proposed elements of the system that could be installed in Europe.
Nonlinear filters with particle flow
We have invented a new particle filter, which improves accuracy by several orders of magnitude compared with the extended Kalman filter for difficult nonlinear problems. Our filter runs many orders of magnitude faster than standard particle filters for problems with dimension higher than four. We do not resample particles, and we do not use any proposal density, which is a radical departure from other particle filters. We show very interesting movies of particle flow and many numerical results. The key idea is to compute Bayes' rule using a flow of particles rather than as a pointwise multiplication; this solves the well-known problem of "particle degeneracy". Our derivation is based on freshman calculus and physics. This talk is for normal engineers who do not have log-homotopy for breakfast.
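The "particle degeneracy" in question is easy to reproduce: reweight particles drawn from a broad prior by a sharp likelihood (Bayes' rule as pointwise multiplication) and almost all the weight collapses onto a handful of particles. A small self-contained demonstration with illustrative numbers:

```python
import math
import random

random.seed(0)
n = 1000
particles = [random.gauss(0.0, 5.0) for _ in range(n)]  # broad prior
z, sigma_z = 0.0, 0.1                                   # sharp measurement

# Bayes' rule as pointwise multiplication: weight each particle by its likelihood.
w = [math.exp(-0.5 * ((z - p) / sigma_z) ** 2) for p in particles]
total = sum(w)
w = [wi / total for wi in w]

# Effective sample size: how many particles still matter after the update.
ess = 1.0 / sum(wi * wi for wi in w)
```

Here the effective sample size drops to a small fraction of the 1000 particles. Moving the samples themselves, as the particle-flow filter does, avoids this collapse without resampling.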
Never trust a simulation without a simple back-of-the-envelope calculation that explains it
Simulations are a crucial tool for systems engineers, and I have coded, developed, analyzed, tested, debugged and debunked many such simulations. However, they cannot be trusted. All too often system engineers come a cropper due to believing the results of simulations without making sure that the results are correct and relevant. Significant errors can occur for many reasons: bugs, bugs, bugs, incorrect parameters, incorrect physical models, incorrect application of perfectly fine code, incorrect interpretation of accurate results, etc. I was deeply shaped by a system engineering culture that valued simple back-of-the-envelope calculations to provide insight into what was going on. Moreover, I am appalled when I see system engineers blindly believe the results of simulations. My talk will give five or ten examples of system engineering blunders caused by faulty simulations or erroneous physical experiments, as well as two surprising twists.
MIMO radar: snake oil or good idea?
MIMO (multiple input multiple output) communication is theoretically superior to conventional communication under certain conditions, and MIMO communication also appears to be practical and cost-effective in the real world for some applications. It is natural to suppose that the same is true for MIMO radar, but the situation is not so clear. Researchers claim many advantages of MIMO radar relative to boring old phased-array radars (SIMO radar). We will evaluate such assertions from a radar system engineering viewpoint. It is very rare to see a paper on MIMO radar with a correct quantitative apples-to-apples comparison including cost, complexity, risk, and all relevant real-world physical effects. Moreover, MIMO radar researchers often use boring old phased arrays in a highly suboptimal way, whereas the MIMO radar is used optimally. Hardboiled radar system engineers view such comparisons with skepticism.
Is there a royal road to robustness?
There is much confusion and misinformation about robustness among engineers. For example, many smart, hard-working, and well-educated engineers believe that there are decision rules and estimation algorithms that are more robust than Bayesian algorithms. In particular, some engineers think that fuzzy logic or Dempster-Shafer methods are more robust than Bayesian methods. We discuss a long list of standard methods to improve robustness, as well as a little-known fact about the robustness of Bayesian algorithms.
Real World data fusion
Fusion of data from multiple sensors promises substantial improvement in system performance for many important applications. However, several practical issues must be addressed to achieve such improvement: (1) residual bias errors between sensors; (2) dense multiple-target environments; (3) unresolved data; (4) errors in data association between sensors; (5) sensor errors that are not fixed in time or space but are not white noise either. We describe state-of-the-art algorithms that attempt to mitigate such problems. We show simple back-of-the-envelope formulas that quantify the situation, as well as one well-known formula that is extremely pessimistic.
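The best-known back-of-the-envelope formula of this kind applies to two independent, unbiased sensors: the fused variance is the "parallel resistor" combination 1/σ² = 1/σ₁² + 1/σ₂², and the fused estimate is the inverse-variance-weighted mean. A minimal sketch with illustrative numbers (residual bias between sensors, item (1) above, is precisely what breaks the independence assumption behind this formula):

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance-weighted fusion of two independent, unbiased estimates."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_est = fused_var * (z1 / var1 + z2 / var2)
    return fused_est, fused_var

# Two equally good sensors: the variance halves and the estimate is the average.
est, var = fuse(10.0, 4.0, 12.0, 4.0)
```

A sanity check the formula also provides: a second sensor that is much worse than the first contributes almost nothing to the fused accuracy.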
Ultra Wideband Surveillance Radar
Foliage Penetration (FOPEN) radar is a technical approach to finding and characterizing man-made objects under dense foliage, as well as characterizing the foliage itself. It has applications in both military surveillance and civilian geospatial imaging. This tutorial is divided into three parts.
• The early history of FOPEN radar: battlefield surveillance and the early experiments in foliage-penetration radar are covered. There were some very interesting developments in radar technology that enabled the detection of fixed and moving objects under dense foliage. The most important of these was the widespread adoption of coherent radar and the advent of digital processing. Almost as important was the quantification of radar propagation through foliage and of its scattering and loss effects.
• FOPEN synthetic aperture radar (SAR), with concentration on development results from several systems. These systems were developed for both military and commercial applications, during a time of rapidly growing awareness of the need and ability to operate in a dense signal environment. A brief description of each radar system will be provided, along with illustrations of its SAR imaging and fixed-object detection capability. This part will also quantify the benefits of polarization diversity in detecting and characterizing both man-made and natural objects; there is a clear benefit to using polarization for target characterization and false-alarm mitigation. Finally, the techniques developed for ultra-wideband and ultra-wide-angle image formation will be presented.
• New research in multi-mode ultra-wideband radar, covering the design of both SAR and moving target indication (MTI) FOPEN systems. Particular note will be taken of the benefits and difficulties in designing these ultra-wideband (UWB) systems and operating them in real-world electromagnetic environments. At common FOPEN frequencies, systems have generally been either SAR or MTI, owing to the difficulty of obtaining both the bandwidth and the aperture characteristics needed for efficient operation. This part of the tutorial will illustrate new technologies appearing in the literature that hold promise for future multimode operation: the detection of targets at low minimum discernible velocity, and the operation of bistatic SAR in concert with a stationary GMTI illumination waveform.
Robust Adaptive Array Processing for Radar
Adaptive array processing techniques are a key element for enhancing the performance and capabilities of multi-channel radar systems that must operate in demanding and complex disturbance environments, which in general include clutter, man-made interference, and naturally occurring noise. The first part of this lecture recalls some foundational adaptive processing principles and the main assumptions and conditions under which seminal theoretical results have been derived. The second part contrasts these assumptions and conditions with those actually encountered by a wide range of practical radar systems operating in real-world environments. In the presence of the environmental uncertainties, instrumental imperfections, and operational constraints ubiquitously faced by practical systems, robust adaptive techniques become an essential ingredient for effective and efficient operation. The third part discusses the design and application of robust adaptive array processing techniques in the dimensions of space, time, and space-time. Experimental results from OTH radar systems are illustrated to lend concreteness by way of example. This lecture is expected to benefit students, researchers, and practitioners with an interest in the effective and efficient application of advanced processing techniques to practical radar systems.
Over-The-Horizon Radar: Fundamental Principles, Adaptive Processing and Emerging Applications
Skywave over-the-horizon (OTH) radars operate in the high frequency (HF) band (3–30 MHz) and exploit signal reflection from the ionosphere to detect and track targets at ranges of 1000 to 3000 km. The long-standing interest in OTH radar technology stems from its ability to provide persistent and cost-effective early-warning surveillance over vast geographical areas (millions of square kilometres). Australia is recognized as a world-leader in the OTH radar field. Pioneering research and development covering every facet of this technology has resulted in the multi-billion-dollar Jindalee Operational Radar Network (JORN) of three state-of-the-art operational OTH radars in Australia.
The first part of the tutorial introduces the fundamental principles of OTH radar design and operation in the challenging HF environment, to motivate and explain the architecture and capabilities of modern OTH radar systems. The second part describes mathematical models characterizing the HF propagation channel and adaptive processing techniques for clutter and interference mitigation. The third part delves into emerging applications, including HF passive radar, blind signal separation, and multipath-driven geolocation. A highlight of the tutorial is the prolific inclusion of experimental results illustrating the application of robust signal-processing techniques to real-world OTH radar systems. The tutorial is expected to benefit students, researchers, and practitioners with limited prior knowledge of HF radar and an interest in the application of advanced processing techniques to practical systems.
Radar Adaptivity: Antenna Based Signal Processing Techniques
The lecture discusses the following topics:
• Introducing Radar: from its conception to recent industrial achievements,
• Operational needs requiring adaptivity,
• Side lobe blanking and cancellation techniques,
• Adaptive arrays of antennas,
• Some practical application examples of adaptivity,
• Conclusions and way ahead.
Each part is structured with some mathematical background, presentation of key processing algorithms, performance evaluation of the algorithms either in closed form or via Monte Carlo simulation, practical engineering implications related to the implementation of processing algorithms and, finally, examples of application potentials. A comprehensive set of technical references is also provided for further study and investigation.
Sea and land clutter statistical analysis and modeling
The modeling of clutter echoes is a central issue in the design and performance evaluation of radar systems. The main goal of this lecture is to describe state-of-the-art approaches to modeling and understanding land and sea clutter echoes, and their implications for performance prediction and signal-processor design.
The lecture first introduces radar sea and ground clutter phenomena, measurements, and measurement limitations, at high and low resolution and at high and low grazing angles, with particular attention to classical models for RCS prediction. Most of the lecture will be dedicated to modern statistical and spectral models for high-resolution sea and ground clutter, and to methods for their experimental validation using recorded data sets. Some comparisons between monostatic and bistatic sea clutter data will be provided, together with some results on non-stationarity analysis of high-resolution sea clutter.
Advanced Techniques of Radar Detection in Non-Gaussian Background
For several decades, the Gaussian assumption on disturbance modeling has been widely used to deal with detection problems in radar systems. In modern high-resolution radar systems, however, the disturbance can no longer be modelled as Gaussian distributed, and the classical detectors suffer high losses.
In this talk, after a brief description of modern statistical and spectral models for high-resolution clutter, coherent optimum and sub-optimum detectors, designed for such a background, will be presented and their performance analyzed against a non-Gaussian disturbance. Different interpretations of the various detectors are provided that highlight the relationships and the differences among them.
After this first part, some discussion will be dedicated to making the detectors adaptive by incorporating a proper estimate of the disturbance covariance matrix. Recent work on maximum-likelihood and robust covariance matrix estimation has proposed different approaches, such as the approximate ML (or fixed-point) estimator and the M-estimators. These techniques improve detection performance in terms of false-alarm regulation and detection gain in SNR.
Results with both simulated and real recorded data will be shown.
Sensor selection for multistatic radar networks
After an introduction to bistatic/multistatic radar systems, the talk will focus on multistatic passive radars. The characteristics of the systems with different sources of opportunity will be described.
The concept of the bistatic ambiguity function (BAF), often used to measure the global resolution and large-error properties of the target parameter estimates, will be introduced, and its relation to the Fisher Information Matrix (FIM) and Cramér-Rao Lower Bounds (CRLBs) highlighted. Examples will be provided concerning an active LFM radar and a passive radar using a UMTS or FM signal as the source of opportunity.
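The FIM/CRLB relation invoked here is a matrix inverse: CRLB = FIM⁻¹, so coupling between parameters (nonzero off-diagonal FIM terms) inflates each bound above the naive 1/FIM_ii value. A two-parameter sketch with made-up numbers for illustration:

```python
def crlb_2x2(fim):
    """Cramér-Rao lower bounds (variances) from a 2x2 Fisher information matrix."""
    (a, b), (c, d) = fim
    det = a * d - b * c
    # The CRLBs are the diagonal entries of the inverse FIM.
    return d / det, a / det

# Hypothetical coupled two-parameter FIM (illustrative values only).
v1, v2 = crlb_2x2([[100.0, 20.0], [20.0, 50.0]])
```

In the sensor-selection setting described above, the Tx-Rx pair or channel set minimizing these diagonal terms is the one a cognitive tracker would dynamically choose.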
The information gained by calculating the bistatic CRLBs can be used in a multistatic radar system for the dynamic choice of the optimum Tx-Rx pair, or set of bistatic channels, for target tracking in a multistatic scenario. Taking advantage of the knowledge of the CRLBs is a kind of “radar cognition” that, applied in realistic multistatic scenarios with both active and passive sensors, can improve the performance of the target tracker and reduce the computational load of surveillance operations. Results will be shown for both certain and uncertain radar measurements.
Bistatic & Multistatic Radar
Bistatic and multistatic radar systems have been studied and built since the earliest days of radar. As an early example, the Germans used the British Chain Home radars as illuminators for their Klein Heidelberg bistatic system. Bistatic radars have some obvious advantages. The receiving systems are passive, and hence undetectable. The receiving systems are also potentially simple and cheap. Bistatic radar may also have a counter-stealth capability, since target shaping to reduce target monostatic RCS will in general not reduce the bistatic RCS.
Furthermore, bistatic radar systems can utilize VHF and UHF broadcast and communications signals as 'illuminators of opportunity', at which frequencies target stealth treatment is likely to be less effective.
Bistatic systems have some disadvantages. The geometry is more complicated than that of monostatic systems. It is necessary to provide some form of synchronization between transmitter and receiver, in respect of transmitter azimuth angle, instant of pulse transmission, and (for coherent processing) transmit signal phase. Receivers which use transmitters which scan in azimuth will probably have to utilize 'pulse chasing' processing.
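The geometric complication is easy to picture: a single bistatic range measurement constrains the target to an ellipse with the transmitter and receiver at its foci, rather than to a circle as in the monostatic case. A quick numerical check (illustrative coordinates, in km):

```python
import math

def bistatic_range(tx, rx, tgt):
    """Transmitter-to-target plus target-to-receiver distance."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(tx, tgt) + dist(rx, tgt)

tx, rx = (0.0, 0.0), (100.0, 0.0)  # 100 km baseline
# Two very different target positions lying on the same iso-range ellipse.
r1 = bistatic_range(tx, rx, (120.0, 0.0))
r2 = bistatic_range(tx, rx, (50.0, math.sqrt(2400.0)))
```

Both positions return the same 140 km range sum, so localizing a target requires angle information or multiple bistatic pairs, which is part of the appeal of multistatic operation.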
Over the years a number of bistatic and multistatic radar systems have been built and evaluated. However, rather few have progressed beyond the 'technology demonstrator' phase. Willis, in his book Bistatic Radar, has remarked that interest in bistatic radar tends to vary on a period of approximately fifteen years, and that currently we are at a peak of that cycle. The purpose of this lecture is therefore to present a subjective review of the properties and current developments in the subject, with particular emphasis on 'passive coherent location' and to consider whether or not the present interest is just another peak in the cycle. It draws on material in the book Advances in Bistatic Radar, edited by Willis and Griffiths, and recently published by SciTech.
The Challenge of Waveform Diversity
Waveform Diversity is defined in IEEE Std 686-2008 as ‘Adaptivity of the radar waveform to dynamically optimize the radar performance for the particular scenario and tasks. May also exploit adaptivity in other domains, including the antenna radiation pattern (both on transmit and receive), time domain, frequency domain, coding domain and polarization domain’. In other words, modern digital technology now allows us to generate precise, wide-bandwidth radar waveforms, and to vary them adaptively – potentially even on a pulse-by-pulse basis.
This opens up many new possibilities, including ultra-low range sidelobe waveforms, orthogonally-coded waveforms for MIMO radar applications, waveforms with spectral nulls to allow co-existence with other transmissions without mutual interference, and so-called target-matched illumination, where a waveform may be matched to the impulse response of a specific target at a specific aspect angle. We may also learn from natural systems such as bats, whose acoustic signals are sophisticated and are used in an intelligent, cognitive manner.
The lecture will describe the design of these waveforms and their applications, and the prospects for the future.
Multistatic Exploration – Introduction to Modern Passive Radar and Multistatic Tracking & Data Fusion
Advanced distributed signal and data fusion for passive radar systems, where, for example, DVB television or GSM mobile phone base stations are used as sources for illuminating targets, is a topic of increasing interest. Even in remote regions of the world, transmitters of electromagnetic radiation become potential radar transmitter stations, enabling covert surveillance of air, sea, and ground scenarios. Analogous considerations are valid for sub-sea surveillance. Illustrated by examples and experimental results, the principles of passive radar as well as advanced multistatic tracking and de-ghosting techniques will be discussed.
Tracking and Sensor Data Fusion – Methodological Framework and Selected Applications
The tutorial covers material from the presenter's recently published book of the same title (Springer 2014, Mathematical Engineering Series, ISBN 978-3-642-39270-2) and thus provides a guided introduction to deeper reading. The starting point is the well-known JDL model of sensor data and information fusion, which provides general orientation within the world of fusion methodologies and their various applications, a dynamically evolving field of ever-increasing relevance. Using the JDL model as a guiding principle, the tutorial introduces advanced fusion technologies through practical examples taken from real-world applications.
Global Navigation Satellite System
Conventional satellite technology has three applications: communication, remote sensing, and scientific studies. The latest addition to this list is satellite-based navigation, also referred to as satellite navigation or the Global Positioning System, and lately termed the Global Navigation Satellite System (GNSS). With the technological advancement taking place in mobile communications, controls, automobiles, aviation, geodesy, geological survey, military operations, precision farming, town planning, banking, weather prediction, power grid synchronization and more, these otherwise separate domains share one common requirement for their future: precise Position, Velocity and Time (PVT) information, which can only be provided by GNSS.
The Global Navigation Satellite System (GNSS) is a vast system of systems, providing global positioning, navigation and timing information to scores of users on land, in the oceans, in the air and even in space. The lecture module traces the history of navigation, the evolution of navigation satellite systems, the three present constellations (GPS, GLONASS, Galileo) and the world scenario in this direction, including SBAS. The module will also touch upon the basics of position, velocity and time measurement, various GNSS-related aspects, their applications and the associated technologies.
Antenna Systems for Aerospace Vehicles
The lecture module covers the design requirements for various types of antenna systems, viz., omni-directional, directional, wide/shaped-beam, multi-beam and scanning (active/passive) antennas for space applications, to be used on launchers, satellites and inter-planetary probes at various frequencies of operation. The module starts with antenna basics, then covers the criteria for choosing antenna types, spacecraft body effects on design, and mounting considerations, along with measurement techniques and qualification procedures. The module will also cover RF link calculation procedures and G/T measurement/estimation techniques for spaceborne and ground stations.
Business Case for Systems Engineering - Is Systems Engineering Effective?
One of the oft-discussed questions in the field of Systems Engineering is how to justify the expenditure of program or project monies on systems engineering. In short, what is the payback, or business case, for doing systems engineering? Those who are somewhat knowledgeable in the field know what the value is, but what are the tangible results of doing SE on programs and projects? How do we convince our program and project managers that SE is needed, or even essential?
The Systems Engineering Division of the National Defense Industrial Association (NDIA), in conjunction with the Software Engineering Institute (SEI) of Carnegie Mellon University, initiated a comprehensive study in 2008 to determine the tangible benefits of performing SE in terms of program/project performance. The study consisted of a series of questions based on SE work products as defined in CMMI® (Capability Maturity Model Integration), the systems engineering process model currently in widespread adoption worldwide. The study concluded that there is indeed a positive correlation between the SE performed and program/project performance in terms of budget (cost), schedule and requirements.
The number of responses to this initial survey was small, on the order of 46 valid responses, all from the US defense industry. In order to validate the results with a larger response base including commercial as well as non-US organizations, in 2011 the NDIA and SEI partnered with the IEEE Aerospace & Electronic Systems Society to reach a broader audience, and the results of this updated survey, with over 180 valid responses, were completed and released in late 2012.
This lecture will present the results of the updated study of SE performed on programs/projects and program performance in terms of cost, schedule and requirements. It will show that programs with the greatest amount of SE performed demonstrate the best performance, while programs with less SE had a lower rate of success. Since the study correlates program success with specific SE activities, these results can be used within organizations to help establish systems engineering plans on programs and projects.
Compression Based Analysis of Image Artifacts: Application to Satellite Images
This work aims at the automatic detection of artifacts in optical satellite images, such as aliasing, A/D conversion problems, striping, and compression noise; in fact, all blemishes that are unusual in an undistorted image.
Artifact detection in Earth observation images becomes increasingly difficult as the resolution of the image improves. For images of low, medium or high resolution, the artifact signatures are sufficiently different from the useful signal to allow their characterization as distortions; however, when the resolution improves, the artifacts have, in terms of signal theory, a signature similar to that of the interesting objects in an image. Although it is more difficult to detect artifacts in very high resolution images, we need analysis tools that work properly without impeding the extraction of objects in an image. Furthermore, the detection should be as automatic as possible, given the ever-increasing volume of images, which makes any manual detection illusory. Finally, experience shows that artifacts are neither all predictable nor can they all be modeled as expected. Thus, any artifact detection should be as generic as possible, without requiring the modeling of their origin or their impact on an image.
Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing. This includes the evaluation of image quality, compression, watermarking, the detection of attacks, image tampering, photograph montage, steganalysis, etc. In general, the techniques used to address these problems are based on direct or indirect measurement of intrinsic information and mutual information. Therefore, this work aims to translate these approaches to artifact detection in Earth observation images, based particularly on the theories of Shannon and Kolmogorov, including approaches for measuring rate-distortion and pattern-recognition-based compression. The results from these theories are then used to detect overly low or overly high complexities, or redundant patterns. The test images used are from satellite instruments such as SPOT and MERIS.
We propose several methods for artifact detection. The first method uses the Rate-Distortion (RD) function, obtained by compressing an image with different compression factors, and examines how an artifact can result in a degree of regularity or irregularity that affects the attainable compression rate. The second method uses the Normalized Compression Distance (NCD) and examines whether artifacts have similar patterns. The third method uses different approaches to RD, such as the Kolmogorov Structure Function and Complexity-to-Error Migration (CEM), to examine how artifacts can be observed in compression-decompression error maps. Finally, we compare our proposed methods with an existing method based on image quality metrics. The results show that artifact detection depends on the artifact intensity and the type of surface cover contained in the satellite image.
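The NCD used in the second method admits a very small illustration. The sketch below is mine: it uses zlib as a practical stand-in for the ideal (uncomputable) Kolmogorov compressor, and the byte strings in the usage are synthetic placeholders rather than satellite data.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for closely related data,
    near 1 for unrelated data (zlib approximates an ideal compressor)."""
    c = lambda b: len(zlib.compress(b, 9))
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Patches with similar texture compress well jointly and yield a small NCD, while an artifact whose pattern differs from the surrounding cover stands out with a larger distance.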
Inertial System and GPS Technology Trends
This presentation gives a roadmap for the development of inertial sensors, the Global Positioning System (GPS), and integrated inertial navigation system (INS)/GPS technology. This roadmap will lead to better-than-1-m accuracy, low-cost, moving-platform navigation in the near future. Such accuracy will enable military and civilian applications that were unthought-of just a few years ago. After a historical perspective, a vision of the inertial sensor instrument field and inertial systems for the future is given. Accuracy and other planned improvements for GPS are explained. The trend from loosely-coupled to tightly-coupled INS/GPS systems to deeply-integrated INS/GPS is described, and the synergistic benefits are explored. Some examples of the effects of GPS interference and jamming are illustrated. Expected technology improvements to system robustness are also described. Applications that will be made possible by this new technology include personal navigation systems, robotic navigation, and autonomous systems with unprecedented low cost and accuracy.
Navigation Sensors and Systems in GNSS Degraded and Denied Environments (Or How I Learned to Stop Worrying About GPS)
Position, velocity, and timing (PVT) signals from the various Global Navigation Satellite Systems (GNSS) are used throughout the world, but the availability and reliability of these signals in all environments has become a subject of concern for both military and civilian applications. Most of the 16 critical infrastructure sectors of the US economy, security, and health are dependent on GPS signals. More than 90% of US military guided weapons use GPS. International news reports about a successful GPS spoofing attack on a civilian UAV in the USA have increased concerns over the planned use of UAVs in the national airspace and the safety of flight in general. Other examples of the effects of GPS interference and jamming are illustrated in this presentation. This is a particularly difficult problem that requires new and innovative ideas to fill the PVT gap when the data are degraded or unavailable. One solution is to use inertial and/or other sensors to bridge the gap in navigation information and maintain worldwide navigation capability. This presentation summarizes, with examples, four different methods for combining GPS and inertial systems to achieve mission success when GPS becomes unavailable. Some of the applications made possible by this advanced performance include personal navigation systems, robotic navigation, and autonomous systems with unprecedented low cost and accuracy.
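A toy one-dimensional illustration of the bridging idea (entirely my own sketch, not one of the four methods in the presentation): inertial dead reckoning carries the solution between GPS fixes, and each available fix resets the drifting inertial position, as in a crude loosely-coupled integration.

```python
def propagate(p, v, a, dt):
    """One Euler step of 1-D inertial mechanization."""
    v = v + a * dt
    return p + v * dt, v

def ins_gps(accels, fixes, dt):
    """Dead-reckon with the IMU each step; whenever a GPS fix is available
    (fixes maps step index -> position), replace the drifting inertial
    position with it.  fixes[0] is the initial position."""
    p, v = fixes[0], 0.0
    track = [p]
    for k, a in enumerate(accels, start=1):
        p, v = propagate(p, v, a, dt)
        if k in fixes:
            p = fixes[k]
        track.append(p)
    return track
```

A real tightly- or deeply-integrated system would instead fuse raw pseudoranges with the inertial solution in a Kalman filter, but the gap-bridging role of the IMU is the same.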
Passive Microwave Imaging Technology: Distributed Interferometric Array
Interferometric imaging technology has been developed for ground-based radio astronomical observation since the 1960s. In the 1980s, this technology was introduced into the area of remote sensing: first on airborne systems such as ESTAR, developed by MIRSL at the University of Massachusetts, and in recent years in space with the ESA SMOS mission. However, in order to reach higher spatial resolution, distributed arrays on separate spacecraft are more and more in demand, particularly with the fast development of CubeSat technology. In this lecture, the fundamental theory and principles of interferometric imaging technology for remote sensing are introduced. Then distributed array configurations are described, from the simplest, with only two elements, to the most complete. This fundamental knowledge will be useful for the engineering design of future practical passive microwave remote sensing systems.
Space Science Missions in China
Since China launched its first satellite in 1970, more than 100 satellites have been launched, but few of them have been space science missions. The situation has been changing since 2000, particularly in recent years. In this lecture, China's past, present and future space science missions will be introduced in detail: the first satellite, DFH-1, as a historical introduction; China's first scientific mission and cooperative mission with ESA, the Double Star Program; the first Chinese Mars exploration mission, YH-1, launched piggy-back with Russia's Phobos-Grunt but lost when the departure from LEO to the interplanetary orbit failed; and four missions from the recent CAS priority strategic program on space science: the Dark Matter Explorer, the microgravity and life science recoverable satellite mission, Quantum Experiments at Space Scale and the Hard X-ray Modulation Telescope. Some selected future missions will also be presented briefly. Throughout the presentation, systems engineering considerations will be emphasized.
Spacecraft Avionics and Scientific Instruments for Unmanned Space Missions
Developing advanced spacecraft avionics and scientific instruments for unmanned space missions is a particularly challenging endeavor that requires solutions accommodating many conflicting design constraints including:
- State-of-the-art technologies for data, signal, and image processing,
- High reliability hardware and software requirements,
- Long duration missions involving dormant and operational periods,
- Extreme physical, electromagnetic, and radiation environments,
- Size, weight, and power limitations,
- High Technology Readiness Level (TRL) designs, and
- Proven flight heritage.
The purpose of this lecture is to present a review of these design considerations, illustrated by several examples from past and current unmanned space missions including:
- New Horizons
- Magnetospheric Multiscale
- Mars Science Laboratory
- Lunar Reconnaissance Orbiter
- Deep Impact
Bridging the Valley of Death: Overcoming barriers to adopting disruptive technologies in aerospace applications
Opportunities to create incremental technological innovations are relatively easy to accomplish and adopt. Disruptive technologies significantly alter the ways that businesses operate, and therefore are often more difficult to adopt. Applied research and development plays an important role in technology transfer of disruptive innovations from the laboratory to industry. This talk will describe that role and provide examples from the aerospace industry.
Onboard Adjustable Learning Rates for Autonomous Space Vehicle Proximity Operations
The lack of adequate on-orbit verification protocols presents a formidable obstacle blocking the broader embrace of autonomy within many space missions. In this lecture, we review some recent advances in nonlinear stability theory and robust adaptive control that involve immersion and invariance approaches. These technical foundations are strongly motivated by growing numbers of aerospace engineering applications that are currently addressing the critical need for autonomous (and semiautonomous) control systems with agile maneuvering and robust perception inside dynamic, complex and uncertain environments. Specifically, these new state-estimation and control design tools involve the construction of auxiliary filters (state-space augmentation) for dynamically adjusting the “rate of adaptation”, ultimately leading to some strong convergence properties and robustness features. The lecture will focus upon space robot manipulators and proximity operation applications under the framework of large-scale model uncertainties and non-negligible time delays due to network-based control implementations. The lecture concludes with a brief discussion of broader astrodynamics applications that include spacecraft motion control within arbitrary potential fields.
A Primer on Various Approaches to Data Association
To thread measurements (well, many call them “hits” or “plots”) from radar, sonar or imaging observations into a credible, smooth and reportable trajectory requires a filter. We’ll discuss those – Kalman, Unscented, particle, etc. – briefly. But the main topic here arises because one cannot even begin to filter without knowing which hits come from which targets, and which hits are complete nonsense (clutter). When wrapped inside some scheme for such data association, a filter becomes a tracker. This talk is intended to explain, at a fairly high level, the intuition behind some of the popular tracking algorithms.
Maximum-Likelihood Methods in Target Tracking And Fundamental Results on Trackability
If a GLR (generalized likelihood ratio) test cannot make a good decision, then there is no good decision to be made. If the test is as to whether or not a VLO target is present in heavy clutter, the GLR should be the maximum-likelihood probabilistic data association (MLPDA) tracker. The MLPDA is very effective, but has several operational shortcomings that its close cousin, the maximum-likelihood probabilistic multi-hypothesis tracker (MLPMHT) avoids. We will discuss and compare both algorithms, plus show some fortuitous new MLPMHT developments. Perhaps most interesting, we are now able to set the MLPMHT threshold accurately and confidently, as would be a requirement for real-time operation. And since one cannot do better than ML, we are now able to make fundamental statements about which targets can be tracked and which cannot: these statements are essentially a bound, as opposed to algorithm-specific performance experience.
Distributed Detection and Data Fusion
The initial paper on the subject of distributed detection, by Tenney and Sandell, showed that under a fixed fusion rule, for two sensors with one-bit outputs, the optimal Bayes sensor decision rule is a likelihood ratio test. It has been shown that the optimal fusion rule for N sensors is a likelihood ratio test on the data received from the sensors. Reibman and Nolte and Hoballah and Varshney have generalized the results to N sensors with optimal fusion, again with the restriction of one-bit sensor outputs. Hoballah and Varshney have also investigated system optimization under the aforementioned conditions for a mutual information criterion. The restriction of the sensor outputs to single bits seems unduly harsh, since it implies either very rapid decision rates or extremely narrowband channels. In this paper the authors have derived some results concerning this more general case; using different techniques, Tsitsiklis has come to similar conclusions, as have, for example, Thomopoulos, Viswanathan, et al. As always when speaking of optimality, it is necessary to impose a criterion for judgment, and for detectors several such measures have been used. In this paper, we study the structure of optimal decentralized detectors for the Neyman-Pearson, Bayes, Ali-Silvey, and mutual (Shannon) information criteria. We assume that, conditioned on the actual hypothesis (state of nature), the random processes observed at any two different sensors are independent. Given this assumption, we show that for each criterion the optimal strategy is to quantize the local likelihood ratio at the sensors (to the maximum number of bits allowable) and transmit this result to the fusion center. The fusion center then performs a likelihood ratio test on the received data. That quantizing the likelihood ratio is the optimal thing to do is scarcely surprising; even prior to a proof of its optimality, it was used by a number of researchers.
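The quantize-then-fuse structure described above can be sketched as follows. This is a minimal illustration under stated assumptions of my own: Gaussian shift-in-mean observations, 2-bit sensor outputs, and bin edges and representative per-bin LLRs chosen crudely by hand rather than optimized as in the paper.

```python
import numpy as np

EDGES = np.array([-1.0, 0.0, 1.0])   # 2-bit quantizer: 4 LLR bins
BIN_LLR = [-1.5, -0.5, 0.5, 1.5]     # crude representative LLR per bin

def local_llr(x, mu=1.0, sigma=1.0):
    """Log-likelihood ratio for H1: N(mu, sigma^2) vs H0: N(0, sigma^2)."""
    return (mu / sigma**2) * (x - mu / 2.0)

def sensor_output(x):
    """Each sensor transmits only the 2-bit index of its LLR bin."""
    return int(np.digitize(local_llr(x), EDGES))

def fuse(indices, threshold=0.0):
    """Fusion center: with conditionally independent sensors, the optimal
    test sums the LLRs of the received bin indices and thresholds."""
    return sum(BIN_LLR[i] for i in indices) > threshold
```

With more quantization bits the per-bin representatives approach the true local LLR, recovering the result that finer quantization can only help.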
UAV/UAS/RPAS and Related Technologies
Unmanned Aircraft Systems (UAS) are an area of emerging technology that is gaining increasing worldwide attention, in both military and civilian contexts. While UAS can be an effective and efficient means of conducting particular operations for national security and social good, and are considered to have significant potential for a wide range of commercial applications, the strategic trends in UAS development and the implications of what the future will bring to their operation carry with them many issues – positive and negative – that need to be acknowledged and addressed.
In order to address the implications of the rapidly expanding use and sophistication of UAS (and their related technologies), and to achieve an effective and efficient commercialization of UAS applications, policies must be adopted that balance the rights and responsibilities of the individual with public sector capabilities and private sector growth. Those policies need to consider the different types of UAS currently being deployed or in development, the challenges and risks posed by the private and commercial use of UAS, safety regulations as applied to the manufacture and civil use of UAS, and how these impact on important national security and societal values.
In his talks, Philip examines these issues, their associated policy considerations and the legal and civil liberties challenges that have been identified.
UAV Traffic Management (UTM) Systems
High-profile incidents involving UAV/UAS/RPAS are becoming more frequent and present major issues for policy makers, regulators, law enforcement agencies and the public. Such incidents must be avoided if the emerging drone industry is to mature and thrive. Essential to the future of the drone industry, therefore, is the implementation of a safe and secure airspace management system (UTM) for civilian RPAS operations; a system that enables new and innovative RPAS applications while resolving the concerns of policy makers and meeting the needs of regulators.
Critical to an effective UTM system for low flying RPAS is the ability to identify in real-time an operating aircraft, its owner and pilot, and its precise location. Senior managers in several major national aviation authorities have acknowledged that it is not a question of IF, but WHEN, governments will regulate that all civilian RPAS must be fitted with a unique and secure digital identifier at manufacture. Similarly, it is also foreseeable that governments will regulate that RPAS must have an on-board means of identifying the owner and pilot while the aircraft is being operated.
In his talks, Philip discusses the aircraft/pilot identity issue and presents a UTM system now in operation that incorporates digital identification of the RPAS aircraft, owner and pilot as an integral part.
Feature Object Extraction: Evidence Accrual Applied to Information Assurance and Other Problems
Information assurance, also referred to as cybersecurity, is the process of protecting information from theft, destruction, or manipulation. Cyber threats can come from internal or external sources, and can be sudden or take time to develop, such as a slow denial-of-service (DoS) attack. Some techniques have been developed to behave as sensors that quickly assess elements of attacks, relying on a decision engine to fuse the information and estimate whether or not an attack is underway. Interpreting cybersecurity as a sensor fusion problem brings a number of additional techniques into the solution space. The concept of evidence accrual is to gather measurements over time from different sensors to provide estimates of what event is occurring. A classification fusion technique using feature extraction and fuzzy logic, known as Feature Object Extraction, is developed and applied to problems such as cybersecurity and GPS attacks. The feature-aided object extraction technique was developed for the classification problem to fuse different features and generate both a classification and a measure of the quality of the classification estimate. A primary advantage is that evidence is built for each possibility without excluding classes; thus, the evidence may point to multiple possibilities until evidence disproves a class. Most probabilistic techniques, by contrast, increase the probability of one class by lowering the probabilities of the other classes. Another difference is that evidence can be applied to individual classes rather than to all classes. Feature Object Extraction also allows for a level of evidence to recover from erroneous negative information that might normally cause elimination of a possibility. These design features of Feature Object Extraction are applied to the cybersecurity problem, where multiple attacks might be underway simultaneously.
Navigation: The Road to GPS and Getting Beyond It
Navigation can be viewed as merely determining position or direction, but more commonly it relies on knowledge of position or direction to control or monitor movement from one place to another. In this talk, the field of navigation is introduced, including the evolution of techniques up through modern electronic navigation using radio, radar, and satellites. The working of GPS, a navigation system based on a constellation of satellites in medium earth orbit that provides positioning information with global coverage, is explained. Since its launch in 1978, it has been in ever wider use for finding and keeping track of just about anything: people, animals, boats, trucks, planes, and more. Its initial military uses have expanded far into civilian applications, both for individuals and for large-scale commerce and transportation. The wide availability of first personal vehicle GPS navigation and later mobile phone-based navigation has changed how the world does business and how people and goods are moved around. As more and more vehicles and people rely upon it, any threats to GPS navigation become more dangerous, because many systems have become completely or primarily dependent on GPS for guidance and navigation. Simple jamming can render a system completely blind to its location, while more sophisticated attacks can spoof a GPS signal to take control of its navigation. Future trends and technologies to address the security issue and to move navigation forward are discussed.
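The positioning principle behind GPS can be sketched as a small least-squares problem. The satellite geometry and names below are illustrative assumptions of mine, not flight data: four or more pseudoranges ρᵢ = |sᵢ − x| + c·b are linearized and solved iteratively for the receiver position x and clock bias b.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def gps_fix(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solution of rho_i = |s_i - x| + c*b for receiver
    position x (metres, ECEF-like frame) and clock bias b (seconds)."""
    x = np.zeros(3)   # start at the earth's centre
    cb = 0.0          # clock bias expressed in metres
    for _ in range(iters):
        diff = sat_pos - x
        dist = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (dist + cb)
        # Jacobian: unit line-of-sight vectors plus a clock-bias column
        H = np.hstack([-diff / dist[:, None], np.ones((len(sat_pos), 1))])
        dx = np.linalg.lstsq(H, residual, rcond=None)[0]
        x, cb = x + dx[:3], cb + dx[3]
    return x, cb / C
```

With noise-free measurements the iteration converges to the exact position; with real pseudoranges, the same linearized system drives a weighted least-squares or Kalman solution.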
Radar Systems Prototyping
Radar Reverse Engineering
Cooperative Control and Resilient Operations of Unmanned Vehicles
Distributed Kalman Filter Design and Applications
Cooperative and Distributed Guidance
Knowledge Based Radar Signal and Data Processing
Radio Frequency Tomography
Space Time Adaptive Processing