Distinguished Lecturer Program
All AESS Chapters and IEEE Sections are encouraged to take advantage of the AESS Distinguished Lecturer and Tutorial Program for their regular or special meetings. The program allows them to select from an outstanding list of truly distinguished speakers who are experts in the technical fields of the Society. The AES Society will pay reasonable speaker's expenses for economy-class travel, lodging, and meals, with the inviting IEEE organization expected to cover 50% of the speaker's expenses. As a general guideline, speaker's expenses involving travel wholly within North America or within the European Union can be approved for coverage up to $1,000 USD. Expenses involving extensive international travel can be approved for coverage up to $2,000 USD. The Society encourages arrangements in which more than one lecture is presented in a single trip, and costs in such situations will be considered on a case-by-case basis.
Non-IEEE entities (such as universities, research organizations, and companies) are also eligible to contact speakers directly. If a speaker agrees to give the non-IEEE lecture on a particular date and location, the inviting organization is required to pay 100% of the speaker's expenses as mutually agreed between the speaker and the organization. While the AESS has no responsibilities for any arrangements or costs regarding the lecture, speakers should keep the VP Education advised of their Distinguished Lectures.
The procedure for obtaining a speaker is as follows: If a Chapter or Section has an interest in inviting one of the speakers, it should first contact the speaker directly in order to obtain his or her agreement to give the lecture on a particular date. After this is accomplished, the Chapter or Section must notify the AESS VP for Education by sending in a DL Request Form. If financial support from the AESS is required for the speaker's expenses, the speaker must submit an estimate to the AESS VP for Education before actually incurring any expenses. This estimate must be provided at least 45 days before the planned meeting to allow time for feedback from the VP for Education and for changes if needed. The VP for Education must provide written authorization to proceed.
Distinguished Lecturers and Tutorial speakers are ambassadors of the AESS who serve as an important demonstration of the value of membership in IEEE, and in AESS in particular. A short presentation on the benefits of Society membership is available and should be included in each Distinguished Lecture. Speakers should contact Judy Scharmann well in advance of each lecture to arrange shipping of AESS and IEEE membership brochures and copies of Society publications to hand out.
Following the lecture, the speaker and/or host are asked to prepare a short report suitable for publication and posting on the AESS web site. Pictures taken at the meeting are highly desirable.
List of Distinguished Lecturers:
Target Tracking and Data Fusion: How to Get the Most Out of Your Sensors
This talk describes the evolution of the technology of tracking objects of interest (targets) in a cluttered environment using remote sensors. Approaches for handling target maneuvers (unpredictable motion) and false measurements (clutter) are discussed. Advanced ("intelligent") techniques with moderate complexity are described. The emphasis is on algorithms which model the environment and the scenarios of interest in a realistic manner and have the ability to track low observable (LO) targets. The various architectures of information processing for multi-sensor data fusion are discussed. Applications are presented from Air Traffic Control (data fusion from 5 FAA radars for 800 targets) and underwater surveillance for a LO target.
Tracking Maneuvering Targets in a World of Netted Sensors
This lecture starts by motivating the need for multisensor/multitarget tracking and then develops the fundamental concepts of single target tracking with resource management, tracking in the presence of maneuvers, multiple-model tracking and finally multiple sensor tracking. An illustrative approach with minimal use of equations is taken in this lecture in order to reach a broad audience.
Systematic Filter Design for Tracking Maneuvering Targets: How to Get Guaranteed Performance out of Your Sensors
Although the Kalman filter has been widely applied to target tracking nearly since its introduction in the early 1960s, until recently no systematic design methodology has been available to predict maneuvering-target tracking performance or to optimize filter parameter selection. This lecture presents a rigorous procedure for selecting optimal process noise variances for the Kalman filter based on a particular sensor and target model. The design of nearly constant velocity (NCV) Kalman filters with discrete white noise acceleration and exponentially-correlated acceleration errors is addressed. Filter design for tracking maneuvering targets with linear frequency modulated waveforms is considered as well.
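The NCV filter structure the lecture builds on can be previewed in a few lines. The sketch below is our own minimal illustration, not the lecture's code: it uses the standard discrete white noise acceleration (DWNA) process noise model for a single position-measuring sensor, and all parameter names are hypothetical.

```python
import numpy as np

def ncv_kalman_step(x, P, z, T, q, r):
    """One predict/update cycle of a nearly constant velocity (NCV)
    Kalman filter with a discrete white noise acceleration (DWNA)
    process noise model. State x = [position, velocity]; z is a
    position measurement with variance r; q scales the process noise."""
    F = np.array([[1.0, T], [0.0, 1.0]])           # constant-velocity transition
    Q = q * np.array([[T**4 / 4, T**3 / 2],        # DWNA process noise covariance
                      [T**3 / 2, T**2]])
    H = np.array([[1.0, 0.0]])                     # position-only measurement
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + r                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain (S is scalar here)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The ratio of q to r is exactly the kind of design choice the lecture's systematic procedure addresses; here it is just set by hand.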
Multispectral Image Fusion and Night Vision Colorization
This course presents methods and applications of multispectral image fusion and night vision colorization organized into three areas: (1) image fusion methods, (2) evaluation, and (3) applications. Two primary multiscale fusion approaches, image pyramid and wavelet transform, will be emphasized. Image fusion comparisons include data, metrics, and analytics.
Fusion applications presented include off-focal images, medical images, night vision, and face recognition. Examples will be discussed of night-vision images rendered using channel-based color fusion, lookup-table color mapping, and segmentation-based colorization. These colorized images resemble natural color scenes and thus can improve the observer's performance. After taking this course you will know how to combine multiband images and how to render the result with colors in order to enhance computer vision and human vision, especially in low-light conditions.
Y. Zheng, E. Blasch, Z. Liu, Multispectral Image Fusion and Colorization, SPIE Press, 2018.
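To make the multiscale fusion idea concrete, here is a minimal image pyramid fusion sketch in NumPy. It is our own simplified stand-in, not material from the course or the book: the REDUCE/EXPAND steps use plain block averaging and nearest-neighbor upsampling instead of Gaussian filtering, and detail layers are fused by a simple select-max rule.

```python
import numpy as np

def downsample(img):
    # 2x2 block averaging (a crude stand-in for a Gaussian REDUCE step)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbor EXPAND step
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        small = downsample(img)
        pyr.append(img - upsample(small))   # band-pass detail layer
        img = small
    pyr.append(img)                          # low-pass residual
    return pyr

def fuse(img_a, img_b, levels=3):
    """Fuse two registered single-band images: keep the larger-magnitude
    detail coefficient at each level, average the low-pass residuals."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Reconstruct by collapsing the pyramid
    img = fused[-1]
    for detail in reversed(fused[:-1]):
        img = upsample(img) + detail
    return img
```

Fusing each band this way and then mapping the bands to color channels is, in spirit, the channel-based color fusion step described above; real systems use proper Gaussian pyramids or wavelets.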
Overview of High-Level Information Fusion Theory, Models, and Representations
Over the past decade, the ISIF community has put together special sessions, panel discussions, and concept papers to capture the methodologies, directions, needs, and grand challenges of high-level information fusion (HLIF) in practical system designs. This tutorial brings together the contemporary concepts, models, and definitions to give the attendee a summary of the state-of-the-art in HLIF. Analogies from low-level information fusion (LLIF) of object tracking and identification are extended to the HLIF concepts of situation/impact assessment and process/user refinement. HLIF theories (operational, functional, formal, cognitive) are mapped to representations (semantics, ontologies, axiomatics, and agents) with contemporary issues of modelling, testbeds, evaluation, and human-machine interfaces. Discussions with examples of search and rescue, cyber analysis, and battlefield awareness are presented. The attendee will gain an appreciation of HLIF through the topic organization from the perspectives of numerous authors, practitioners, and developers of information fusion systems.
The tutorial is organized as per the recent text:
E. P. Blasch, E. Bosse, and D. A. Lambert, High-Level Information Fusion Management and Systems Design, Artech House, April 2012.
Characterization and Mitigation of Multipath in GNSS
Multipath is the phenomenon whereby a transmitted signal arrives at a receiver via multiple paths due to reflection and diffraction. These non-direct-path signals distort the received signal and cause errors in GNSS pseudorange and carrier-phase measurements. Differential techniques do not eliminate multipath and thus multipath is a critical error source in high precision applications. The physical surroundings around the user’s antenna dictate the multipath environment and thus cause significant differences for land, marine, airborne, and spaceborne applications. This lecture describes the multipath environment and the impact of multipath on code and phase measurements. The influence of the type and rate of the broadcast code as well as the receiver architecture will be presented. Mitigation strategies will also be described along with multipath measurement techniques.
Fundamentals of Inertial Aiding
Navigation-grade inertial systems are characterized by so-called “free inertial” position error drift rates on the order of one nautical mile-per-hour of operation. Such performance implies a certain class of gyros and accelerometers and thus certain specifications on biases, scale factor errors and noise. For more than five decades, the Kalman filter has been the primary tool used to reduce inertial drift through the integration of various sensors. Specifically, the aiding sources (e.g., stellar, Doppler, GPS, etc) are used by the filter to estimate the errors in the free inertial processing. Thus, the heart of any aided-inertial Kalman filter is the inertial error model including, specifically, sensor errors. We will discuss these models and will proceed to explain how aiding source observations are then used by the filter, in conjunction with the models, to estimate the inertial errors. For example, a given aiding source may provide an independent measurement of position, yet somehow the filter is able to use this in order to estimate gyro biases in the inertial system. The daunting matrix mathematics involved in the full algorithm can be extremely intimidating to the newcomer. In this lecture, the basic concepts of estimation theory will be briefly reviewed and the Kalman Filter will be described first in terms of simple one-dimensional problems for which the full algorithm reduces to an approachable set of scalar equations. We will look at the performance of the filter in some simple case studies and by the end will have an intuitive feel for how the full filter operates. We will then apply the Kalman filter to the aiding of inertial systems. We will see how external sources of position and velocity (such as GPS) can be used first to measure inertial system error and then, with the aid of the Kalman filter, to estimate and correct inertial sensor error as well as system error.
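The one-dimensional reduction described above really does collapse to a handful of scalar equations. The sketch below is our own illustrative example, assuming a random-walk error state (such as a slowly drifting inertial bias) observed directly by an aiding source; the parameters are hypothetical.

```python
def scalar_kalman(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed directly.
    q: process noise variance per step (how fast the state can drift)
    r: measurement noise variance of the aiding source
    Returns the final state estimate x and its variance p."""
    x, p = x0, p0
    for z in measurements:
        p = p + q                 # predict: the state is a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the aiding measurement
        p = (1 - k) * p
    return x, p
```

The full aided-inertial filter replaces the scalars x, p, q, r with the state vector, covariance matrix, and the inertial error model, but the predict/gain/update rhythm is identical.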
Fundamentals of Inertial Navigation
Inertial navigation systems (INS) are modern technologically sophisticated implementations of the age-old concept of dead reckoning. The basic philosophy is to begin with a knowledge of initial position, keep track of speed and direction, and thus be able to determine position continually as time progresses. Perhaps surprisingly, the rise of GNSS has actually expanded the need for inertial-based systems. Accelerometers and gyroscopes are the basic sensors utilized and since INS are essentially self-contained, they do not suffer from interference or unavailability that can affect radio-based systems such as GNSS. Furthermore, INS are highly complementary to GNSS since they provide high data rates, low data latencies and attitude-determination along with position and velocity. This lecture will start by highlighting the basic principles of operation of an inertial navigation system. We will focus initially on the concepts underlying the algorithms used to determine position, velocity and attitude from inertial sensor measurements. Key error characteristics will then be described as well such as the Schuler oscillation and vertical channel instability. We will also consider the impact of various sensor errors on system performance.
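As one concrete example of the error characteristics mentioned above, the Schuler oscillation has a period fixed entirely by Earth's radius and gravity, which a one-line calculation recovers (the constants below are approximate round values):

```python
import math

R_EARTH = 6371e3      # mean Earth radius, m (approximate)
G = 9.81              # gravitational acceleration, m/s^2 (approximate)

# Schuler oscillation period: T = 2*pi*sqrt(R/g)
T_schuler_min = 2 * math.pi * math.sqrt(R_EARTH / G) / 60
print(f"Schuler period = {T_schuler_min:.1f} minutes")  # about 84.4 minutes
```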
Around the World in 60 Minutes: Exotic Places with a Twist
An informative and humorous adventure covering: (1) China: the Yangtze River, Three Gorges Dam, Chinese opera and acrobatics, their dynamic growth, their amazing capitalistic-communist run country with its very modern cities; (2) India: Palace on Wheels through Rajasthan, Varanasi; (3) Nepal: its friendly people, cremations, Mt. Everest; (4) Singapore: offers some of the most interesting things to see in the world, like the Indian Hindu Thaipusam Festival (with cheek, tongue, torso piercing) and fire walking, plus Chinese, Arab and Malaysian cultures; (5) Sumatra, Indonesia: get close to free roaming orangutans, very low cost touring; (6) Turkey: take a tour of Cappadocia in a hot air balloon and see the famous cave dwellings there, visit Istanbul; (7) Saudi Arabia: see the very modern and beautiful King Saud University in Riyadh, visit Jubail (Dhahran) and Jeddah; (8) Papua New Guinea: visit these colorful people who were first contacted in 1930, living in a pre-Iron Age culture; (9) Vietnam: their colorful ethnic minority tribe people, the Hanoi Hilton; (10) Also visit South Korea, Taiwan, Chile, Russia, Peru, Japan, Hong Kong, Mexico, Africa, Shemya (an island far out on the Aleutian Island Chain of Alaska), Austria, Malaysia, Canada, Israel, Australia, the Netherlands, Belgium, Union of South Africa, Egypt, New Zealand, Brazil, Philippines, Borneo, Bali, Iran, Dubai, Abu Dhabi.
The twist is an explanation that all can understand of how radars and phased arrays work and of some of the recent amazing breakthroughs in science, technology, radars and phased arrays. Covered will be the advance from 2-inch tubes in the 1940s, when I went to high school, to today's 2-inch memory stick, which does the job of 260 billion tubes (Moore's Law); the potential for invisibility via metamaterials (man-made materials); and the progression from 10-story-high phased array radars to a whole phased array on a single chip, with Google putting a whole radar on a smart watch. In the future we may have a phased array in our cell phones for 5G, and multiple radars in all our cars.
Cognitive Adaptive Array Processing (CAAP) for Radar
Adaptive array processing is usually presented with exotic matrix equations that are difficult to understand and give no physical feel for what is going on, so it is hard to see how to improve jammer suppression. This tutorial presents the subject from a physical point of view, which gives great insight into the best way to do jammer suppression. We show how to do Cognitive Adaptive Array Processing (CAAP) and demonstrate its immense power. Digital beam forming (DBF) makes CAAP the processing of the future.
Barrage Jammer: It is shown how CAAP reduces by orders of magnitude the computational complexity, the number of training samples needed, and the adapted antenna sidelobe degradation. For example, assume an N = 10,000 element array and J = 1 barrage noise jammer. With classical Sample Matrix Inversion (SMI) one needs K = 5N = 50,000 training samples to get a signal-to-interference ratio (SIR) within 1 dB of optimum. It also requires the inversion of an N×N = 10,000 × 10,000 interference matrix, which necessitates on the order of a billion operations (multiplies and divides), and it degrades many of the adapted antenna sidelobes by 30 dB for a 40 dB un-adapted sidelobe level pattern. In contrast, with CAAP it is shown that only K = 4 training samples are needed, only the two sidelobes next to the jammer are degraded (and only by about 2 dB), and no matrix inversion is needed: only 13 operations, a reduction of nearly 100 million operations. It is shown that CAAP lets us go from the complex SMI processor to a simple sidelobe canceller (SLC) having one aux antenna beam for the case of one jammer. Similar advantages are shown for the case of J jammers, where we go from an SMI canceller to an SLC having J high-gain aux beams pointing at the jammers. To estimate the number and location of the jammers, modern estimation and super-resolution methods can be used. It is shown that SMI actually does in the math what CAAP is doing, except that it does it very inefficiently, or with poor performance, or both. Like CAAP, SMI actually does SLC in the math, but transparently to the user: it points beams at the jammers and uses them to do SLC just as CAAP does. However, because SMI does not have a perfect estimate of the N×N interference covariance matrix M_E, it also points beams in N-J-1 directions where there are no jammers and as a result degrades the adapted antenna sidelobe level where there are no jammers. For SMI these jammer beams are called eigenbeams.
They result physically from the eigenvectors of M_E. Amazingly, Sidney Applebaum pointed this out in his seminal paper and report back in 1974. Covered will be the expression of the adapted antenna pattern in terms of its eigenbeams, known as the principal component formulation. It is pointed out that CAAP represents a very fruitful area for future study. One area is coping with different jammer combinations in the sidelobes and main lobe. For example, Gabriel showed in his original work that if you have two jammers of nearly equal strength close together in one sidelobe, the eigenbeams are a sum beam and a difference beam [9, p. 185]. Probably having two squinted main beams for the aux would do just as well in general. The advantages that may be gained if the J jammers are separated from each other by more than a beamwidth should be explored. A lot can be learned about cognitive adaptivity by doing simulations for different cases. This is now easy to do using MathWorks MATLAB and the Phased Array System Toolbox.
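To make the SMI baseline concrete, here is a small NumPy sketch of classical sample matrix inversion adaptive nulling. This is our own toy example with hypothetical parameters (a 16-element array and K = 10N snapshots), not the lecture's 10,000-element scenario, and the diagonal loading factor is an assumption added for numerical robustness.

```python
import numpy as np

def steering(n, u):
    # half-wavelength ULA steering vector, u = sin(theta)
    return np.exp(1j * np.pi * np.arange(n) * u)

def smi_weights(snapshots, s):
    """Classical Sample Matrix Inversion: estimate the interference
    covariance R from K training snapshots, then w ~ R^-1 s,
    normalized to unit gain in the look direction s."""
    K = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / K
    R += 1e-3 * (np.trace(R).real / R.shape[0]) * np.eye(R.shape[0])  # diagonal loading
    w = np.linalg.solve(R, s)
    return w / (s.conj() @ w)

rng = np.random.default_rng(0)
N, K = 16, 160                      # K = 10N training samples
s = steering(N, 0.0)                # look direction: broadside
a_j = steering(N, 0.3)              # barrage jammer direction
amp = 10.0 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
w = smi_weights(a_j[:, None] * amp + noise, s)
gain_look = abs(w.conj() @ s)       # exactly 1 by normalization
gain_jam = abs(w.conj() @ a_j)      # deep adapted null toward the jammer
```

Note how this brute-force path needs the full K-snapshot covariance estimate and an N×N solve; the CAAP/SLC argument above is that a single aux beam pointed at the jammer achieves the same null far more cheaply.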
Hot Clutter Jammer: We show how, with hot clutter present, CAAP can be used to cancel out the jammer even if it is coming into the main lobe where we are looking for a target. We show how with CAAP we cancel out the main-beam jammer without any loss of signal strength: main-lobe jammer cancellation without signal cancellation. Amazing.
Repeater Jammer: CAAP is applied to repeater jammers, using the locations of the repeaters to spoof them and lower the radar signals they can see.
Low Probability of Intercept: Apply CAAP to it.
MIMO vs Conventional Array Radars: These two are compared with respect to the above jammers. Both ubiquitous and machine-gunning conventional radars are examined and detailed. Results are applied to thin/full and full/thin array systems.
MIMO Radar: Mystery Taken Out of It
New MIMO (Multiple Input Multiple Output) Radar: Explained in simple physical terms instead of with heavy math. Covered are performance, waveforms, signal processing load, ability to handle jamming, and performance relative to conventional radars. Where it makes sense to use MIMO: OTH radar, automobile radars, and combining existing radars for increased performance. It is pointed out that for some car MIMO radar configurations, conventional radars having the same number of elements can perform better than MIMO radars with respect to antenna angle resolution and antenna sidelobe levels.
Radar, Phased-Arrays, Metamaterials (Invisible Man), Stealth, Anti-Stealth, Ultra-Wideband, Cognitive Adaptivity, MIMO, 5G: Advances & Breakthroughs
BASICS: Active Electronically Scanned Arrays (AESAs). RECENT SYSTEM DEVELOPMENTS: Patriot upgrade; new AMDR AESA with 30X the sensitivity of AEGIS; low cost AESAs using COTS. EXTREME MMIC: whole 256-element phased array on a chip at 60 GHz for future 5G and car radars. DIGITAL BEAM FORMING (DBF): mixer-less and reconfigurable. MOORE'S LAW: future continuation via 2-D transistors, spintronics, memristors, graphene, quantum computing. LOW COST RADARS: for cars, cell phones and smart wrist watches; ultra-wideband (UWB) radar.
METAMATERIALS: for low cost 2-D electronically steered antennas for satellites, cell towers, cars, and UAV radars; stealthing by absorption and by cloaking; Army conformal whip antenna replacement. WIDEBAND LOW PROFILE ANTENNA: 20:1 bandwidth.
MIMO (MULTIPLE INPUT MULTIPLE OUTPUT): Explained in simple physical terms instead of with heavy math: performance, waveforms, signal processing load, ability to handle jamming. Contrary to what is claimed, MIMO array radars do not provide orders of magnitude better resolution and accuracy than conventional array radars, and should not provide better GMTI. Applications are presented.
COGNITIVE ADAPTIVE ARRAY PROCESSING (CAAP): Applied to barrage, hot clutter and repeater jamming for conventional and MIMO systems; enabled by DBF. Tremendous advantages over classical methods. Results are derived through simple physical explanation rather than heavy math, which does not give one physical insight into adaptive nulling. We show how CAAP, relative to SMI, reduces by several orders of magnitude: 1) the number of training samples needed, 2) the size of the interference matrix to be inverted, and 3) the amount by which the adapted sidelobes are degraded. Using CAAP we show that MIMO does not provide better rejection of barrage, repeater or hot-clutter jammers than conventional array radars.
NEW TECHNOLOGY: HOLOGRAPHIC RADAR; QUANTUM RADAR; LOW COST PRINTED MICROWAVE ELECTRONICS; ELECTRICAL AND OPTICAL SIGNALS ON THE SAME CHIP. BIODEGRADABLE ARRAYS OF TRANSISTORS OR LEDs: imbedded under the skin. NEW POLARIZATIONS: OAMs (Orbital Angular Momentum).
Metamaterial for Low Cost Electronic Scanning, Wideband Conformal Antennas, Cloaking (The Invisible Man), Stealth & WAIM
Metamaterials have gained much interest in recent years because they offer the potential to provide low cost electronic scanning antennas and to provide target stealth. Metamaterials are man-made materials in which an array of structures smaller than a wavelength is embedded. These materials have properties not found in nature, such as a negative index of refraction. Much progress has been made using metamaterials.
For Cloaking and Stealthing of Targets: Target cloaking (making them invisible) has been demonstrated at microwaves over a narrow bandwidth using metamaterials. Cloaking has been demonstrated over a 50% bandwidth at L-band using fractal metamaterials. Work on cloaking at optical frequencies will be summarized. Stealthing by absorption using a thin, flexible and stretchable metamaterial sheet has been shown to provide 6 dB absorption over an 8 to 10 GHz band, with greater absorption over a narrower band. Using fractal sheets less than 1 mm thick, simulation has shown 20 dB absorption over a band from 10 to 15 GHz and 10 dB from 2 to 20 GHz. Good absorption was achieved for all incident angles and polarizations. It looks very promising for stealthing aircraft and other military targets over a wide band for all aspect angles and polarizations.
For Communication Antennas: Kymeta demonstrated transmission to satellites and back using Ku-band antennas which use metamaterial resonators in a very novel way to realize electronic steering at potentially low cost. Echodyne and Metawave (formerly Xerox's PARC) have developed metamaterial arrays for radar. How the Kymeta and Echodyne antennas work is explained. The Army Research Laboratory funded the development of a metamaterial 250 to 505 MHz low profile antenna with a lambda/20 thickness for replacement of the very visible tall whip antennas on HMMWVs, thus providing greater survivability. Complementing this, a conventional tightly coupled dipole antenna (TCDA) has been developed which provides a 20:1 bandwidth with a lambda/40 thickness. The two together could be employed in escort jammer aircraft like the USA Next Generation Jammer on the EA-18G Growler, covering the band from VHF to Ku band. They could serve as conformal or low profile antennas on the aircraft.
Other Applications: Metamaterial has been used in cell phones to provide antennas that are 5X smaller (1/10th lambda) with 700 MHz to 2.7 GHz bandwidth. Under Army funding, isolation equivalent to 1 m separation has been achieved for transmit and receive antennas with 2.5 cm separation, allowing simultaneous transmission and reception on a small relay platform. Metamaterial has the potential for use in phased arrays for wide angle impedance matching (WAIM) by placing metamaterial between the radiating elements to reduce mutual coupling. Using metamaterial one can focus 6X beyond the diffraction limit at 0.38 μm (Moore's Law marches on); focusing at 40X the diffraction limit, lambda/80, has been demonstrated at 375 MHz.
Cognitive DF for Airborne Systems
This lecture presents research into machine learning (ML) algorithms for improving airborne direction finding (DF) capability, including a brief overview of airborne DF and ML techniques and an example research application of ML to airborne DF.
Application of SOSA to Airborne EW
Discusses the challenges of applying the Sensor Open Systems Architecture (SOSA) standard in an Electronic Warfare (EW) context. Includes brief overview of SOSA, the evolution of EW within SOSA, and the future direction of SOSA.
MIMO radar: snake oil or good idea?
MIMO (multiple input multiple output) communication is theoretically superior to conventional comm. under certain conditions, and MIMO comm. also appears to be practical and cost effective in the real world for some applications. It is natural to suppose that the same is true for MIMO radar, but the situation is not so clear. Researchers claim many advantages of MIMO radar relative to boring old phased array radars (SIMO radar). We will evaluate such assertions from a radar system engineering viewpoint. It is very rare to see a paper on MIMO radar with a correct quantitative apples-to-apples comparison including cost, complexity, risk and all relevant real-world physical effects. Moreover, MIMO radar researchers often use boring old phased arrays in a highly suboptimal way, whereas the MIMO radar is used optimally. Hardboiled radar system engineers view such comparisons with skepticism.
Real World data fusion
Fusion of data from multiple sensors promises substantial improvement in system performance for many important applications. However, several practical issues must be addressed to achieve such improvement: (1) residual bias errors between sensors; (2) dense multiple target environments; (3) unresolved data; (4) errors in data association between sensors; (5) sensor errors that are not fixed in time or space but which are not white noise either. We describe state-of-the-art algorithms that attempt to mitigate such problems. We show simple back-of-the-envelope formulas which quantify the situation, as well as one well-known formula that is extremely pessimistic.
Never trust a simulation without a simple back-of-the-envelope calculation that explains it
Simulations are a crucial tool for systems engineers, and I have coded, developed, analyzed, tested, debugged and debunked many such simulations. However, they cannot be trusted. All too often system engineers come a cropper due to believing the results of simulations without making sure that the results are correct and relevant. Significant errors can occur for many reasons: bugs, bugs, bugs, incorrect parameters, incorrect physical models, incorrect application of perfectly fine code, incorrect interpretation of accurate results, etc. I was deeply shaped by a system engineering culture that valued simple back-of-the-envelope calculations to provide insight into what was going on. Moreover, I am appalled when I see system engineers blindly believe the results of simulations. My talk will give five or ten examples of system engineering blunders caused by faulty simulations or erroneous physical experiments, as well as two surprising twists.
Is there a royal road to robustness?
There is much confusion and misinformation about robustness among engineers. For example, many smart, hard-working and well-educated engineers believe that there are decision rules and estimation algorithms that are more robust than Bayesian algorithms. In particular, some engineers think that fuzzy logic or Dempster-Shafer methods are more robust than Bayesian methods. We discuss a long list of standard methods to improve robustness, as well as a little-known fact about the robustness of Bayesian algorithms.
Nonlinear filters with particle flow
We have invented a new particle filter which improves accuracy by several orders of magnitude compared with the extended Kalman filter for difficult nonlinear problems. Our filter runs many orders of magnitude faster than standard particle filters for problems with dimension higher than four. We do not resample particles, and we do not use any proposal density, which is a radical departure from other particle filters. We show very interesting movies of particle flow and many numerical results. The key idea is to compute Bayes' rule using a flow of particles rather than as a pointwise multiplication; this solves the well-known problem of "particle degeneracy". Our derivation is based on freshman calculus and physics. This talk is for normal engineers who do not have log-homotopy for breakfast.
Ultra Wideband Surveillance Radar
Foliage Penetration (FOPEN) radar is a technical approach to finding and characterizing man-made objects under dense foliage, as well as characterizing the foliage itself. It has applications in both military surveillance and civilian geospatial imaging. This tutorial is divided into three parts.
• The early history of FOPEN Radar: battlefield surveillance and the early experiments in foliage penetration radar are covered. There were some very interesting developments in radar technology that enabled our ability to detect fixed and moving objects under dense foliage. The most important part of that technology was the widespread awareness of the benefits of coherent radar and the advent of digital processing. Almost as important was the quantification of the radar propagation through foliage, and its scattering and loss effects.
• FOPEN synthetic aperture radar (SAR) with concentration on development results from several systems. These systems were developed for both military and commercial applications, and during a time of rapid awareness of the need and ability to operate in a dense signal environment. A brief description of each radar system will be provided along with illustrations of the SAR image and fixed object detection capability. The next section will quantify the benefits of polarization diversity in detecting and characterizing both man made and natural objects. There is a clear benefit for use of polarization in the target characterization and false alarm mitigation. Finally the techniques developed for ultra wideband and ultra wide angle image formation will be presented.
• New research in multi-mode ultra-wideband radar, with the design of both SAR and moving target indication (MTI) FOPEN systems. Particular note will be taken of the benefits and difficulties in designing these ultra-wideband (UWB) systems and of operation in real-world electromagnetic environments. At common FOPEN frequencies, the systems have generally been either SAR or MTI due to the difficulty of obtaining either the bandwidth or the aperture characteristics for efficient operation. The last two sections of the tutorial will illustrate new technologies appearing in the literature that hold promise for future multimode operation: the need to detect low minimum discernible velocity movement, and the operation of bistatic SAR in concert with a stationary GMTI illumination waveform.
Radar Detection, Performance, and CFAR Techniques
The objective of this lecture is to teach the theory of radar detection, detector performance analysis, and Constant False Alarm Rate (CFAR) techniques according to a rigorous academic style based on the use of statistical decision theory. It is organized into two main sections: a) Theory of Radar Detection and Performance Assessment; b) CFAR Techniques.
1. Theory of Radar Detection and Performance Assessment
1.1. Preliminaries on the Fast-Time Slow-Time Radar Data Matrix
1.2. Statistical Characterization of the Radar Observations
1.3. Coherent, Non-Coherent, and Binary Integration
1.4. Optimum Detection in White Gaussian Noise (WGN)
1.4.1. Optimum Coherent Detection in WGN and Related Performance
1.5. Non-Coherent Detection in WGN
1.5.1. Square-Law and Linear Integration
1.5.2. False Alarm and Detection Probability Analysis
1.5.3. Albersheim's Equation
1.6. The Case of Fluctuating Targets According to Swerling 1-4 Models
2. CFAR Techniques
2.1. The Constant False Alarm Rate (CFAR) Concept
2.2. Basic CFAR Architecture
184.108.40.206. Cell Averaging (CA-CFAR)
220.127.116.11. CFAR Loss
18.104.22.168. Masking Effects
22.214.171.124. Clutter Edges
126.96.36.199. Robust CFAR Processors
188.8.131.52.1.1. Greatest Of CFAR (GO-CFAR)
184.108.40.206.1.2. Smallest OF CFAR (SO-CFAR)
220.127.116.11.1.3. CFAR Techniques Based on Order Statistics (OS)
18.104.22.168.1.4. Trimmed-Mean and Censored CFAR, OS-CFAR.
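As a concrete illustration of the cell-averaging CFAR architecture outlined above, the sketch below thresholds each cell against a noise estimate from surrounding training cells. It assumes square-law-detected data with exponentially distributed noise; the parameter values are illustrative, not taken from the lecture.

```python
import numpy as np

def ca_cfar(x, num_train=16, num_guard=2, pfa=1e-4):
    """Cell-averaging CFAR: threshold each cell using the mean of the
    surrounding training cells (square-law-detected data assumed)."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    # Threshold multiplier giving the desired Pfa for exponential noise:
    # Pfa = (1 + alpha/N)^(-N)  =>  alpha = N (Pfa^(-1/N) - 1)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    half = num_train // 2
    for i in range(half + num_guard, n - half - num_guard):
        lead = x[i - num_guard - half : i - num_guard]   # leading training cells
        lag = x[i + num_guard + 1 : i + num_guard + half + 1]  # lagging cells
        noise_est = (lead.sum() + lag.sum()) / num_train
        detections[i] = x[i] > alpha * noise_est
    return detections
```

The same loop structure extends to the GO-, SO-, and OS-CFAR variants by replacing the mean of the training cells with a maximum, minimum, or order statistic.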
Optimization Theory in Radar Signal Processing
The objective of this lecture is to provide a systematic overview of innovative radar signal processing algorithms based on modern optimization theory according to a rigorous and academic style. Specifically, the theoretical basis to address constrained design problems is given, illustrating in the radar context some key/relevant results of modern optimization theory about convex and non-convex problems.
1. Introduction to convex optimization theory:
• Historical notes on the use of optimization theory in Radar;
• Preliminaries on the Constrained Optimization Problems;
• Convex Optimization;
• Convex sets and Radar Examples;
• Convex Functions and Radar Examples;
• Taxonomy of Convex Programming Problems.
2. Convex optimization problems in radar and their solution via CVX:
• Linear Programming (mismatched filter for real observations);
• Quadratic Problems (Capon filter, Knowledge-Based beamformer);
• Second Order Cone Programming, SOCP (Lp-norm minimization filter, robust beamformer);
• SemiDefinite Programming, SDP (MIMO Matrix Beamformer, MIMO Waveform Design in Tracking Applications);
• Max-Det (constrained precision matrix maximum likelihood estimate).
3. Non-convex design problems in radar and the implementation of effective algorithms for their solution:
• Hidden Convex Quadratic Problems based on Rank-One Decomposition (robust detection, waveform design with similarity constraint);
• NP-hard Quadratic Problems based on Relaxation & Randomization (waveform design with phase/PAR constraint);
• Fractional Quadratic Programming (robust detection, robust constrained Doppler filters).
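Some of the quadratic problems listed above admit closed-form solutions; the Capon filter is a classic example. A minimal numpy sketch (without CVX, which the lecture uses for the general case) of the minimum-variance filter with a unit-gain constraint:

```python
import numpy as np

def capon_weights(R, a):
    """Capon/MVDR filter: minimize w^H R w subject to w^H a = 1.
    Closed-form solution: w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)       # denominator is real for Hermitian R
```

For a disturbance covariance containing a strong interferer, the resulting weights pass the steering direction with unit gain while placing a deep null on the interference.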
Spacecraft Avionics and Scientific Instruments for Unmanned Space Missions
Developing advanced spacecraft avionics and scientific instruments for unmanned space missions is a particularly challenging endeavor that requires solutions accommodating many conflicting design constraints including:
- State-of-the-art technologies for data, signal, and image processing,
- High reliability hardware and software requirements,
- Long duration missions involving dormant and operational periods,
- Extreme physical, electromagnetic, and radiation environments,
- Size, weight, and power limitations,
- High Technology Readiness Level (TRL) designs, and
- Proven flight heritage.
The purpose of this lecture is to present a review of these design considerations, illustrated by several examples from past and current unmanned space missions including:
- New Horizons
- Magnetospheric Multiscale
- Mars Science Laboratory
- Lunar Reconnaissance Orbiter
- Deep Impact
Bridging the Valley of Death: Overcoming barriers to adopting disruptive technologies in aerospace applications
Opportunities to create incremental technological innovations are relatively easy to accomplish and adopt. Disruptive technologies significantly alter the ways that businesses operate, and therefore are often more difficult to adopt. Applied research and development plays an important role in technology transfer of disruptive innovations from the laboratory to industry. This talk will describe that role and provide examples from the aerospace industry.
Over-The-Horizon Radar: Fundamental Principles, Adaptive Processing and Emerging Applications.
Skywave over-the-horizon (OTH) radars operate in the high frequency (HF) band (3–30 MHz) and exploit signal reflection from the ionosphere to detect and track targets at ranges of 1000 to 3000 km. The long-standing interest in OTH radar technology stems from its ability to provide persistent and cost-effective early-warning surveillance over vast geographical areas (millions of square kilometres). Australia is recognized as a world-leader in the OTH radar field. Pioneering research and development covering every facet of this technology has resulted in the multi-billion-dollar Jindalee Operational Radar Network (JORN) of three state-of-the-art operational OTH radars in Australia.
The first part of the tutorial introduces the fundamental principles of OTH radar design and operation in the challenging HF environment to motivate and explain the architecture and capabilities of modern OTH radar systems. The second describes mathematical models characterizing the HF propagation channel and adaptive processing techniques for clutter and interference mitigation. The third delves into emerging applications, including HF passive radar, blind signal separation and multipath-driven geolocation. A highlight of the tutorial is the prolific inclusion of experimental results illustrating the application of robust signal processing techniques to real-world OTH radar systems. This is expected to benefit students, researchers and practitioners with limited prior knowledge of HF radar and with an interest in the application of advanced processing techniques to practical systems.
Robust Adaptive Array Processing for Radar
Adaptive array processing techniques represent a key element for enhancing the performance and capabilities of multi-channel radar systems that must operate in demanding and complex disturbance environments, which in general include clutter, man-made interference and naturally-occurring noise. The first part of this lecture recalls some foundational adaptive processing principles and the main assumptions and conditions under which seminal theoretical results have been derived. The second contrasts these assumptions and conditions with those actually encountered by a wide range of practical radar systems that operate in real-world environments. In the presence of environmental uncertainties, instrumental imperfections, and operational constraints, which are ubiquitously faced by practical systems, the implementation of robust adaptive techniques becomes an essential ingredient for effective and efficient operation. The third part of this lecture discusses the design and application of robust adaptive array processing techniques in the dimensions of space, time and space-time. Experimental results are illustrated for OTH radar systems to lend concreteness by way of example. This lecture is expected to benefit students, researchers and practitioners with an interest in the effective and efficient application of advanced processing techniques to practical radar systems.
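One widely used robustification against steering-vector mismatch and finite-sample covariance errors is diagonal loading; the sketch below shows the idea (it is a generic technique from the robust beamforming literature, not necessarily among the lecture's specific methods, and the loading level is an illustrative choice):

```python
import numpy as np

def robust_mvdr(R_hat, a_nominal, loading_db=10.0):
    """MVDR weights with diagonal loading: adding load*I to the sample
    covariance keeps the solution well-conditioned even when the number
    of snapshots is smaller than the number of channels."""
    n = R_hat.shape[0]
    sigma2 = np.real(np.trace(R_hat)) / n        # average per-channel power
    load = sigma2 * 10 ** (-loading_db / 10)     # loading relative to it
    Rl = R_hat + load * np.eye(n)
    w = np.linalg.solve(Rl, a_nominal)
    return w / (a_nominal.conj() @ w)            # enforce w^H a = 1
```

Without the loading term, a rank-deficient sample covariance (fewer snapshots than channels) would make the weight computation singular.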
Radar Adaptivity: Antenna Based Signal Processing Techniques
The lecture discusses the following topics:
• Introducing Radar: from its conception to recent industrial achievements,
• Operational needs requiring adaptivity,
• Side lobe blanking and cancellation techniques,
• Adaptive arrays of antennas,
• Some practical application examples of adaptivity,
• Conclusions and way ahead.
Each part is structured with some mathematical background, presentation of key processing algorithms, performance evaluation of the algorithms either in closed form or via Monte Carlo simulation, practical engineering implications related to the implementation of processing algorithms and, finally, examples of application potentials. A comprehensive set of technical references is also provided for further study and investigation.
Cooperative and Networked Navigation
Cooperative or networked navigation are terms used to describe an approach whereby members of a community (or network) exchange information to generate a navigation solution of higher quality than would have been possible if each had operated alone. There are many applications in aerospace where cooperative navigation holds the promise of enabling new capabilities or enhancing existing ones. Examples of capability enhancements in the operation of transport aircraft, small UAVs, and CubeSats on deep-space missions are described. As will be shown, many technological and algorithmic challenges must be addressed before networked navigation concepts become practical. One of the algorithmic challenges is the design of suitable decentralized estimation algorithms. Challenges associated with decentralized estimation are discussed and solutions to deal with them are proposed. Flight-test results from a community of small UAVs operating in a GNSS/GPS-denied environment are used to validate the performance of the proposed algorithms. Finally, the issue of integrity in a network setting is described. Potential approaches for detecting and isolating “bad actors” in a community or network of cooperating vehicles when using decentralized filters are discussed.
Signal of Opportunity Navigation for Small Spacecraft in Deep Space
Spacecraft navigation outside of geosynchronous orbit (GEO) presents an ongoing challenge. Current navigational techniques rely on Earth-based tracking, particularly through NASA's Deep Space Network (DSN). Navigation via the DSN is both fundamentally limited in terms of accuracy and practically limited in terms of availability. Navigation via naturally occurring signals of opportunity, such as those produced by pulsars, quasars, and gamma-ray bursts, is proposed as an alternative navigation technique that could augment or eventually replace navigation via the DSN. This technique involves making range measurements based on the time-difference of arrival (TDOA) of a signal at the user and at another location, usually either another cooperating user or a fixed reference point. Estimating the value of the TDOA is challenging, particularly because the signals in question are usually extremely weak.
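The TDOA principle above can be sketched generically: cross-correlate the signal received at two locations, locate the correlation peak, and scale the lag by the propagation speed to get a range difference. This is a toy illustration with a synthetic pulse, not the talk's actual processing for weak pulsar signals:

```python
import numpy as np

def estimate_tdoa(s1, s2):
    """Estimate the delay (in samples) of s2 relative to s1 by locating
    the peak of their cross-correlation."""
    corr = np.correlate(s2, s1, mode="full")
    return int(np.argmax(corr)) - (len(s1) - 1)
```

The range difference follows as `delta_r = lag / fs * c` for sample rate `fs` and propagation speed `c`; in practice, weak signals require long integration before the correlation peak emerges from the noise.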
In this talk we describe algorithms for generating a six-degree-of-freedom position, navigation, and timing solution in deep space by measuring the time and angle of arrival of x-rays from pulsars. We show that for pulsar navigation, the position and attitude determination problems are coupled. This is due in part to the small signal-to-noise ratio of pulsar signals and the fact that x-ray photons emanating from various pulsars have no unique identifier which can be used to associate them with their source. To address this challenge, a joint probabilistic data association filter is developed. The filter fuses angular rate measurements from a three-axis rate gyro with time and angle of arrival measurements from an x-ray detector. The performance of the filter is validated in simulation and the trade-offs associated with detector size and initial conditions are evaluated. Additional validation of the algorithms is performed by playback of data from x-ray detectors flown on the Suzaku and Chandra missions.
Design and Validation of Fault-Tolerant Integrated Navigation Systems for Small UAVs
Integrated navigation systems used in safety-critical flight control systems are required to be fault-tolerant. This means that they must be able to quickly detect the onset of system faults. Once detected, they must be able to isolate them or issue a timely alarm so that operators can make effective contingency and recovery plans. After faults have been detected and isolated, the navigation system must be reconfigured either for continued operation or for recovery of the vehicle. For systems used in manned aircraft, fault-tolerance is achieved, in part, by employing massive hardware redundancy whereby several replicas of components that are likely to fail are included in the system. This approach to fault-tolerance is untenable in many of the emerging safety-critical applications of small Unmanned Aerial Vehicles (SUAVs). The severe cost as well as size, weight, and power (SWAP) constraints encountered in SUAVs limit the level of hardware redundancy that can be used. This has led to the idea of analytical redundancy, whereby intelligent sensor fusion algorithm designs are used to make up for the lack (or complement a limited amount) of physical redundancy.
In this presentation, filtering approaches that can be used to incorporate analytical redundancy into the integrated navigation systems used for SUAV flight control are discussed. As a case study, we consider a synthetic air data system: an integrated navigation system which estimates angle of attack, sideslip, and airspeed without using the traditional pitot-static system, relying instead on information from inertial sensors, GNSS receivers, and the equations of motion of the SUAV. As will be shown, the algorithms used for achieving analytical redundancy can be complex and often rely on non-linear filters. This makes safety-validation of these algorithms very difficult using traditional approaches such as overbounding. Thus, some potentially promising approaches for validation which leverage ideas from Extreme Value Theory (EVT) are discussed.
Multi Sensor Fusion in Distributed Systems
The increasing trend towards connected sensors (“internet of things” and “ubiquitous computing”) drives a demand for powerful distributed estimation methodologies. In tracking applications, the “Distributed Kalman Filter” (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm which receives either all associated measurements or so-called “tracklets”. However, this scheme needs the result of each update step for the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on the “Accumulated State Densities” (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed. On the other hand, different flavors of Covariance Intersection (CI) have the advantage that the tracks produced by local sites need not be modified. Theoretical and practical implications of this approach will be presented.
The lecture will first explain the origin of the challenges in distributed tracking. Then, possible solutions are derived and discussed; in particular, algorithms will be provided for each presented solution. The list of topics includes: Short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross-covariances, Naive Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman Filter), Distributed Kalman Filter (DKF), Covariance Intersection, Distributed ASD Fusion, Augmented State Tracklet Fusion.
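Among the listed topics, Covariance Intersection admits a compact sketch: fuse two estimates with unknown cross-correlation by a convex combination of their information matrices, choosing the weight to minimize the fused covariance trace. The grid search below is an illustrative implementation choice, not the lecture's algorithm:

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, grid=200):
    """Covariance Intersection fusion of estimates (xa, Pa) and (xb, Pb):
    P^{-1} = w Pa^{-1} + (1-w) Pb^{-1}, with w chosen by a simple grid
    search to minimize trace(P). Consistent for any cross-correlation."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.01, 0.99, grid):
        P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

Unlike naive fusion, the fused covariance never claims more information than either input alone, which is what makes CI safe when cross-covariances are unknown.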
Advanced Techniques of Radar Detection in Non-Gaussian Background
For several decades, the Gaussian assumption for disturbance modeling in radar systems has been widely used to deal with detection problems. In modern high-resolution radar systems, however, the disturbance cannot be modelled as Gaussian distributed, and the classical detectors suffer high losses.
In this talk, after a brief description of modern statistical and spectral models for high-resolution clutter, coherent optimum and sub-optimum detectors, designed for such a background, will be presented and their performance analyzed against a non-Gaussian disturbance. Different interpretations of the various detectors are provided that highlight the relationships and the differences among them.
After this first part, some discussion will be dedicated to making the detectors adaptive by incorporating a proper estimate of the disturbance covariance matrix. Recent works on Maximum Likelihood and robust covariance matrix estimation have proposed different approaches, such as the Approximate ML (or Fixed-Point) Estimator and the M-estimators. These techniques improve detection performance in terms of false alarm regulation and detection gain in SNR.
Some results with simulated and real recorded data will be shown.
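The Fixed-Point (Approximate ML) covariance estimate mentioned above can be sketched as a simple iteration of the Tyler type; the iteration count and trace normalization here are illustrative choices, not the talk's exact formulation:

```python
import numpy as np

def fixed_point_covariance(X, n_iter=50):
    """Fixed-point (Tyler-type) scatter estimator for heavy-tailed data:
    iterate  Sigma <- (p/N) * sum_i x_i x_i^H / (x_i^H Sigma^{-1} x_i).
    X is (N, p): N snapshots of dimension p. Returns the shape matrix
    normalized to trace p (the estimator is scale-ambiguous)."""
    N, p = X.shape
    sigma = np.eye(p, dtype=X.dtype)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # quadratic forms q_i = x_i^H Sigma^{-1} x_i (real, positive)
        q = np.real(np.einsum("ij,jk,ik->i", X.conj(), inv, X))
        sigma = (p / N) * (X.T * (1.0 / q)) @ X.conj()
        sigma = sigma * p / np.real(np.trace(sigma))   # fix the scale
    return sigma
```

Because each snapshot is normalized by its own quadratic form, the estimate is insensitive to the per-sample power fluctuations (texture) typical of non-Gaussian clutter.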
Sea & Land Clutter Statistical Analysis & Modeling
The modeling of clutter echoes is a central issue in the design and performance evaluation of radar systems. The main goal of this lecture is to describe state-of-the-art approaches to the modeling and understanding of land and sea clutter echoes and their implications for performance prediction and signal processor design.
The lecture first introduces radar sea and ground clutter phenomena, measurements, and measurement limitations, at high and low resolution and at high and low grazing angles, with particular attention to classical models for RCS prediction. Most of the lecture will be dedicated to modern statistical and spectral models for high-resolution sea and ground clutter and to methods of experimental validation using recorded data sets. Some comparisons between monostatic and bistatic sea clutter data will be provided, together with results on the non-stationarity analysis of high-resolution sea clutter.
Sensor Selection for Multistatic Radar Networks
After an introduction to bistatic/multistatic radar systems, the talk will focus on multistatic passive radars. The characteristics of the systems with different sources of opportunity will be described.
The concept of the bistatic ambiguity function (BAF), often used to measure the possible global resolution and large-error properties of target parameter estimates, will be introduced and its relation with the Fisher Information Matrix (FIM) and Cramér-Rao Lower Bounds (CRLBs) highlighted. Some examples will be provided concerning active LFM radar and passive radar using a UMTS or FM signal as the source of opportunity.
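The FIM-to-CRLB relationship can be made concrete with a toy sketch: for independent Gaussian bistatic-range measurements, the FIM is built from the measurement Jacobian, and the trace of its inverse scores how well a set of Tx-Rx pairs localizes a target. The geometry and noise model below are illustrative assumptions, not the talk's actual setup:

```python
import numpy as np

def bistatic_range_jacobian(target, tx, rx):
    """Gradient of the bistatic range |t - tx| + |t - rx| w.r.t. target
    position: sum of the unit vectors from Tx and Rx to the target."""
    ut = (target - tx) / np.linalg.norm(target - tx)
    ur = (target - rx) / np.linalg.norm(target - rx)
    return ut + ur

def crlb_trace(target, pairs, sigma=1.0):
    """Trace of the position CRLB for a set of (tx, rx) bistatic pairs,
    assuming independent Gaussian bistatic-range errors of std sigma."""
    H = np.array([bistatic_range_jacobian(target, tx, rx) for tx, rx in pairs])
    fim = H.T @ H / sigma**2          # Fisher Information Matrix
    return np.trace(np.linalg.inv(fim))  # sum of position CRLBs
```

A sensor-selection scheme of the kind described can then simply pick the candidate pair set with the smallest CRLB trace at the predicted target position.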
The information gained through the calculation of the bistatic CRLBs can be used in a multistatic radar system for the dynamic choice of the optimum Tx-Rx pair, or set of bistatic channels, for radar target tracking in a multistatic scenario. Taking advantage of the knowledge of the CRLBs is a kind of “radar cognition” that, applied in realistic multistatic scenarios with both active and passive sensors, can improve the performance of the target tracker and reduce the computational load of surveillance operations. Some results will be shown for both certain and uncertain radar measurements.
Bistatic & Multistatic Radar
Bistatic and multistatic radar systems have been studied and built since the earliest days of radar. As an early example, the Germans used the British Chain Home radars as illuminators for their Klein Heidelberg bistatic system. Bistatic radars have some obvious advantages. The receiving systems are passive, and hence undetectable. The receiving systems are also potentially simple and cheap. Bistatic radar may also have a counter-stealth capability, since target shaping to reduce target monostatic RCS will in general not reduce the bistatic RCS.
Furthermore, bistatic radar systems can utilize VHF and UHF broadcast and communications signals as 'illuminators of opportunity', at which frequencies target stealth treatment is likely to be less effective.
Bistatic systems have some disadvantages. The geometry is more complicated than that of monostatic systems. It is necessary to provide some form of synchronization between transmitter and receiver, in respect of transmitter azimuth angle, instant of pulse transmission, and (for coherent processing) transmit signal phase. Receivers which use transmitters which scan in azimuth will probably have to utilize 'pulse chasing' processing.
Over the years a number of bistatic and multistatic radar systems have been built and evaluated. However, rather few have progressed beyond the 'technology demonstrator' phase. Willis, in his book Bistatic Radar, has remarked that interest in bistatic radar tends to vary with a period of approximately fifteen years, and that currently we are at a peak of that cycle. The purpose of this lecture is therefore to present a subjective review of the properties and current developments in the subject, with particular emphasis on 'passive coherent location', and to consider whether or not the present interest is just another peak in the cycle. It draws on material in the book Advances in Bistatic Radar, edited by Willis and Griffiths, and recently published by SciTech.
The Challenge of Waveform Diversity
Waveform Diversity is defined in IEEE Std 686-2008 as ‘Adaptivity of the radar waveform to dynamically optimize the radar performance for the particular scenario and tasks. May also exploit adaptivity in other domains, including the antenna radiation pattern (both on transmit and receive), time domain, frequency domain, coding domain and polarization domain’. In other words, modern digital technology now allows us to generate precise, wide-bandwidth radar waveforms, and to vary them adaptively – potentially even on a pulse-by-pulse basis.
This opens up many new possibilities, including ultra-low range sidelobe waveforms, orthogonally-coded waveforms for MIMO radar applications, waveforms with spectral nulls to allow co-existence with other transmissions without mutual interference, and so-called target-matched illumination, where a waveform may be matched to the impulse response of a specific target at a specific aspect angle. We may also learn from natural systems such as bats, whose acoustic signals are sophisticated and are used in an intelligent, cognitive manner.
The lecture will describe the design of these waveforms and their applications, and the prospects for the future.
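For illustration, the low range-sidelobe property of a classic phase-coded waveform can be checked in a few lines. The Barker-13 code below is a standard example from the waveform design literature (chosen here for brevity, not drawn from the lecture): its autocorrelation peak is 13 while every sidelobe has magnitude at most 1, a peak-sidelobe ratio of about 22.3 dB.

```python
import numpy as np

# Barker-13 biphase code: + + + + + - - + + - + - +
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched-filter (autocorrelation) output
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]                       # mainlobe at zero lag
psl = np.max(np.abs(np.delete(acf, len(barker13) - 1)))  # peak sidelobe
```

Adaptive waveform design generalizes this idea: instead of a fixed code, the transmitted sequence is optimized on the fly for the current scenario, spectrum constraints, or target response.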
Tracking and Sensor Data Fusion – Methodological Framework and Selected Applications.
The tutorial covers material from the presenter's recently published book of the same title (Springer 2014, Mathematical Engineering Series, ISBN 978-3-642-39270-2) and thus provides a guided introduction to deeper reading. The starting point is the well-known JDL model of sensor data and information fusion, which provides general orientation within the world of fusion methodologies and their various applications, a dynamically evolving field of ever-increasing relevance. Using the JDL model as a guiding principle, the tutorial introduces advanced fusion technologies through practical examples taken from real-world applications.
Multistatic Exploration – Introduction to Modern Passive Radar and Multistatic Tracking & Data Fusion
Advanced distributed signal and data fusion for passive radar systems, where DVB TV or GSM mobile phone base stations are used as sources for illuminating targets, for example, is a topic of increasing interest. Even in remote regions of the world, transmitters of electromagnetic radiation become potential radar transmitter stations, enabling covert surveillance of air, sea, and ground scenarios. Analogous considerations are valid for sub-sea surveillance. Illustrated by examples and experimental results, principles of passive radar as well as advanced multistatic tracking and de-ghosting techniques will be discussed.
Navigation: The Road to GPS and Getting Beyond It
Navigation can be viewed as merely determining position or direction, but more commonly it relies on knowledge of position or direction to control or monitor movement from one place to another. In this talk, the field of navigation is introduced, including the evolution of techniques up through modern electronic navigation by radio, radar, and satellite. The workings of GPS, a navigation system based on a constellation of satellites in medium Earth orbit that provides positioning information with global coverage, are explained. Since its launch in 1978, it has been in ever wider use for finding and keeping track of just about anything: people, animals, boats, trucks, planes, and more. Its initial military uses have expanded far into civilian applications, both for individuals and for large-scale commerce and transportation. The wide availability of, first, personal vehicle GPS navigation and, later, mobile phone-based navigation has changed how the world does business and how people and goods are moved around. As more and more vehicles and people rely upon it, any threat to GPS navigation becomes more dangerous, because many systems have become completely or primarily dependent on GPS for guidance and navigation. Simple jamming of GPS can render a system completely blind to its location, while more sophisticated attacks can spoof a GPS signal to take control of its navigation. Future trends and technologies to address the security issue and to move navigation forward are discussed.
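The core of GPS positioning can be sketched as a toy computation: from pseudoranges to several satellites, solve for the receiver's position and clock bias by iterative linearized least squares. The satellite positions and counts below are illustrative, and real receivers must additionally model orbit, clock, and atmospheric errors:

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, n_iter=20):
    """Solve for receiver position and clock bias (both in meters) from
    pseudoranges via Gauss-Newton iterations (toy: no error modeling).
    sat_pos is (N, 3); at least four satellites are needed."""
    x = np.zeros(4)  # [px, py, pz, clock_bias_m], start at Earth's center
    for _ in range(n_iter):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        pred = d + x[3]                               # predicted pseudoranges
        # Jacobian: unit vectors from satellites to receiver, plus bias column
        H = np.hstack([(x[:3] - sat_pos) / d[:, None], np.ones((len(d), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
        x += dx
    return x
```

Each pseudorange mixes geometry with the unknown receiver clock offset, which is why four satellites (not three) are the minimum for a fix.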
Feature Object Extraction: Evidence Accrual Applied to Information Assurance and Other Problems
Information assurance, also referred to as cyber security, is the process of protecting information from theft, destruction, or manipulation. Cyber threats can come from internal or external sources, and can be sudden or take time to develop, such as a slow denial of service (DOS) attack. Some techniques have been developed that behave as sensors, quickly assessing elements of attacks, and rely on a decision engine to fuse the information and estimate whether or not an attack is underway. Interpreting cybersecurity as a sensor fusion problem brings a number of additional alternative techniques into the solution space. The concept of evidence accrual is to gather measurements over time from different sensors to provide estimates of what event is occurring. A classification fusion technique using feature extraction and fuzzy logic, known as Feature Object Extraction, is developed and applied to problems such as cyber security and GPS attacks. The feature-aided object extraction technique was developed for the classification problem to fuse different features and generate both a classification and a measure of the quality of the classification estimate. A primary advantage of this approach is that evidence is built for each possibility without excluding classes. Thus, the evidence may point to multiple possibilities until evidence disproves a class. Most probabilistic techniques increase the probability of one class by lowering the probabilities of the other classes. Another difference is that evidence can be applied to individual classes rather than to all classes. Feature Object Extraction also allows a level of evidence to recover from erroneous negative information that might normally cause elimination of a possibility. These design features of Feature Object Extraction are applied to the cybersecurity problem, where multiple attacks might be underway simultaneously.
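The key behavioral difference from normalized probabilities can be sketched with a toy accumulator; this is purely illustrative (class names, decay factor, and score ranges are invented), not the actual Feature Object Extraction algorithm:

```python
class EvidenceAccrual:
    """Toy per-class evidence accumulator: each class accrues evidence
    independently, so several hypotheses can stay alive simultaneously,
    and raising one class never lowers the others -- unlike a normalized
    probability distribution."""
    def __init__(self, classes, decay=0.9):
        self.evidence = {c: 0.0 for c in classes}
        self.decay = decay  # lets evidence recover from erroneous inputs

    def update(self, scores):
        """scores: dict class -> signed evidence from a feature, in [-1, 1].
        Classes with no new evidence simply decay toward zero."""
        for c in self.evidence:
            self.evidence[c] = self.decay * self.evidence[c] + scores.get(c, 0.0)
        return self.evidence
```

Because negative evidence is applied only to the classes it concerns and is discounted over time, a single erroneous negative report does not permanently eliminate a hypothesis.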
Noise Radar Technologies
Non-scanning radar concepts, pulsed and continuous-wave radar, noise waveforms, MIMO radars with noise illumination, noise waveform shaping; spectrum shaping and spectrum sharing, multimode noise radar, noise waveforms for target imaging and recognition.
Radar Technology: New Trends & Frontiers
Emerging technologies in radar – staring radar, MIMO radars, waveform design, target matched waveforms, multistatic radars, noise radars, passive radars, non-cooperative target recognition based on deep learning and model matching, sparse processing for SAR and ISAR, CLEAN processing, resource cognitive management, cognitive radars and EW, frontiers in radar technology.
Passive & Active Radar: Detection, Tracking and Imaging
Radar basics, cooperation between active and passive radars, the gap-filling concept, multistatic operations, imaging technology: SAR and ISAR monostatic, bistatic and multistatic imaging, sparse sensing and processing, resolution limitations, NCTR.
Passive Radar Technology: Ground-based and Moving Platform Challenges
Passive radar technology, illuminators, processing, limitations, detection, localization and tracking, passive imaging in SAR and ISAR modes, multistatic and sparse imaging. Passive radars on moving platforms – challenges and benefits, Doppler-spread clutter cancellation using CLEAN, DPCA, STAP and sparse processing.
History and Future of Radar
This lecture is meant for an audience that wishes to be introduced to Radar, EW, their market and AESS in general. The lecture is a motivational speech for students, young professionals, or events that require a plenary speaker. The lecture begins with the history of radar, including trivia and fun facts. Then, the lecture continues by showing the utmost recent advances in Radar and electronic warfare, emphasizing both military and commercial applications. The lecture then discusses the recent radar market, and the reasons for engineers to embrace this field. The lecture ends with an introduction to AESS and all the benefits that it has to offer.
Radar Systems Prototyping
Whether you are a student seeking real data to prove your Ph.D. thesis, a researcher planning experimentation for your grant proposal, or a system engineer who needs a radar prototype to demonstrate an innovative idea to a customer, you will be faced with prototyping a radar system on limited time and budget. Many books and tutorials exist on radar signal processing, but little is found on how to build a radar prototype that can support and run these algorithms. This tutorial will provide you with the practical skills and techniques needed to build an advanced radar prototype. The focus is not on how devices and algorithms work, but on how to relate the choice of microwave devices and signal processing algorithms to the desired radar specifications. You will learn how to interpret datasheets, how components and algorithms affect each other, how signal processing dictates RF constraints, and how signal processing can fix your RF limitations. The course will end with a step-by-step MIMO radar design example, starting from the requirements and ending with a schematic and bill of materials. All participants will also receive free consultation on their current radar system design until their project is completed.
Deep Learning for Radio Frequency Target Classification
In this lecture, we present modern deep learning (DL) techniques for classifying radio frequency (RF) imagery and signals (i.e., Synthetic Aperture Radar / SAR data and communication signals). First, we will provide a short overview of machine learning (ML)/DL theory and an understanding of SAR imagery and RF signals. Then we will demonstrate detailed algorithmic implementations and the performance of DL algorithms in classifying SAR data and RF signals. We will present recent research results, technical challenges, and directions of DL-based object classification for RF sensing. Finally, we will discuss adversarial attacks and mitigation techniques for DL-based RF object recognition.
Satellite Navigation and Sensing
Satellite-based navigation has impacted nearly every aspect of our modern society. Yet this powerful technology relies on extremely low-power, vulnerable signals traversing a vast space to reach receivers on the Earth's surface or in near-Earth space environments. Many complex elements interfere with the signals along their propagation path, including plasma in the upper atmosphere, water vapor in the lower troposphere, and physical objects and electromagnetic sources in the user environment. These nuisance factors degrade and limit navigation system performance. Understanding their effects on navigation signals is a prerequisite for developing robust navigation technologies that can mitigate them. Moreover, these effects enable satellite navigation signals to function as signals-of-opportunity for low-cost, distributed, passive sensing of our space and local environments. This presentation will first discuss efforts to develop a worldwide network of software-defined sensors to capture and characterize the effects of the space and local environments on satellite navigation signals, followed by the latest technology developments to mitigate these effects, and finally case studies demonstrating potential powerful applications of the satellite navigation sensor network for environmental monitoring.
On Radar Privacy in Shared Spectrum Scenarios
To satisfy the increasing consumer demand for mobile data, regulatory bodies have moved to allow commercial wireless systems to operate on spectrum bands that until recently were reserved for military radar. Such co-existence requires mechanisms for controlling the interference. One such mechanism is to assign a precoder to the communication system, designed to meet certain interference-related objectives. This talk looks into whether the implicit radar information contained in such a precoder can be exploited by an adversary to infer the radar's location. For two specific precoder schemes, we simulate a machine-learning-based location inference attack. We show that the information leaked from the precoder can indeed pose varying degrees of risk to the radar's privacy, and further confirm this by computing the mutual information between the respective precoder and the radar location.
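The last step, quantifying leakage via mutual information, can be illustrated with a toy discrete model; the 8-cell location grid, the additive noise, and the scalar feature below are illustrative stand-ins for the actual precoder schemes in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
loc = rng.integers(0, 8, n)       # radar location on a toy 8-cell grid
noise = rng.integers(0, 2, n)     # discrete observation noise
feat = (loc + noise) % 8          # stand-in for a leaked precoder feature

# empirical mutual information I(loc; feat) from the joint histogram
joint = np.zeros((8, 8))
np.add.at(joint, (loc, feat), 1)
joint /= n
pl = joint.sum(axis=1, keepdims=True)   # marginal of the location
pf = joint.sum(axis=0, keepdims=True)   # marginal of the feature
mask = joint > 0
mi = np.sum(joint[mask] * np.log2(joint[mask] / (pl @ pf)[mask]))
print(round(mi, 2))  # ~2 of the 3 bits needed to pin down the location
```

In this toy model the true leakage is H(feat) - H(feat | loc) = 3 - 1 = 2 bits, which the histogram estimate recovers.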
Optimum Co-Design for Spectrum Sharing Between MIMO Radar and MIMO Communication Systems
Spectrum congestion in commercial wireless communications is a growing problem as high-data-rate applications become prevalent. In an effort to relieve the problem, US federal agencies intend to make available spectrum in the 3.5 GHz band, which was primarily used by federal radar systems for surveillance and air defense, to be shared by both radar and communication applications. Even before the new spectrum is released, high UHF radars overlap with GSM communication systems, and S-band radars partially overlap with Long Term Evolution (LTE) and WiMax systems. When communication and radar systems overlap in the frequency domain, they interfere with each other.
Spectrum sharing is a new line of work that aims to enable radar and communication systems to share the spectrum efficiently by minimizing interference effects. The current literature on spectrum sharing includes approaches which either use large physical separation between radar and communication systems, or optimally schedule dynamic access to the spectrum using OFDM signals, or allow radar and communication systems to co-exist in time and frequency via the use of multiple antennas at both the radar and communication systems. The latter approach greatly improves spectral efficiency as compared to the other approaches. This talk presents our recent work on the latter approach. In particular, we discuss optimal co-design of MIMO radar and MIMO communication system signaling schemes, so that the effective interference power at the radar receiver is minimized while a desirable level of communication rate and transmit power is maintained.
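As a point of reference for the co-design approach, the simplest multi-antenna scheme is null-space-projection precoding, which zero-forces the interference channel to the radar. A minimal sketch, assuming a known channel drawn at random and illustrative antenna counts (8 transmit, 4 radar receive):

```python
import numpy as np

rng = np.random.default_rng(2)
# H: interference channel from the 8-antenna comms transmitter
# to the 4-antenna radar receiver (assumed known, drawn at random here)
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))

# P projects transmit signals onto the null space of H, so H @ P = 0
P = np.eye(8) - H.conj().T @ np.linalg.pinv(H @ H.conj().T) @ H

x = rng.standard_normal((8, 1000)) + 1j * rng.standard_normal((8, 1000))
interference = H @ (P @ x)         # what the radar receiver sees
print(np.abs(interference).max())  # numerically ~0
```

This baseline gives up four of the eight spatial degrees of freedom outright; the optimal co-design discussed in the talk instead trades interference power against communication rate and transmit power.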
Multidimensional Sparse Fourier Transform and Application to Digital Beamforming Automotive Radar
With the rapid developments in advanced driver-assistance systems and self-driving vehicles, the automotive radar plays an increasingly important role in providing multidimensional information on the dynamic environment to the control unit of the vehicle. Traditional automotive radars use digital beamforming to identify range, velocity, and angular parameters of pedestrians, vehicles, and obstacles, referred to here as targets. In that context, in the return signal after demodulation, each target is represented as a D-dimensional complex sinusoid whose frequency in each dimension is related to the target parameters. When the number of targets is much smaller than the number of samples, the return is sparse in the D-dimensional frequency domain.
Sparsity can be employed to reduce the complexity and computation time of the process that estimates D-dimensional frequencies.
In this talk, we present MARS-SFT, a novel sparse Fourier transform for multidimensional, frequency-domain sparse signals, inspired by the Fourier projection-slice theorem. MARS-SFT identifies frequencies by operating on one-dimensional slices of the discrete-time domain data, taken along specially designed lines; those lines are parametrized by slopes that are randomly generated from a set at runtime. The Discrete Fourier Transforms (DFTs) of the data slices represent multidimensional DFT projections onto the lines along which the slices were taken. By designing the line lengths and slopes so that they allow for orthogonal and uniform projections of the sparse frequencies, frequency collisions are avoided with high probability, and the multidimensional frequencies can be recovered with low sample and computational complexity.
We show analytically that the large number of degrees of freedom of frequency projections allows for the recovery of less sparse signals. Although the theoretical results are obtained for uniformly distributed frequencies, empirical evidence suggests that MARS-SFT is also effective in recovering clustered frequencies. We also propose an extension of MARS-SFT to address noisy signals that contain off-grid frequencies, and demonstrate its performance in digital beamforming automotive radar, where MARS-SFT can be used to identify range, velocity and angular parameters of targets with low sample and computational complexity.
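The projection step at the heart of MARS-SFT can be sketched in a few lines: sampling a 2-D exponential along a line of slope alpha and taking a 1-D DFT collapses each 2-D frequency (f1, f2) onto the single bin (f1 + alpha*f2) mod N. The grid size, slope and frequencies below are illustrative, and the sketch omits the frequency-recovery and collision-handling steps:

```python
import numpy as np

N, alpha = 256, 3
freqs = [(10, 40), (100, 7), (200, 150)]   # sparse 2-D frequencies (f1, f2)

# values of x[n1, n2] = sum_k exp(2*pi*1j*(f1*n1 + f2*n2)/N) along the
# line (n, alpha*n mod N), without ever forming the full N x N data cube
n = np.arange(N)
slice_vals = sum(np.exp(2j * np.pi * ((f1 + alpha * f2) * n % N) / N)
                 for f1, f2 in freqs)

# the 1-D DFT of the slice peaks exactly at the projected frequencies
peaks = sorted(int(k) for k in
               np.flatnonzero(np.abs(np.fft.fft(slice_vals)) > N / 2))
print(peaks)  # [121, 130, 138] = sorted((f1 + alpha*f2) % N per target)
```

A second slice with a different random slope gives a second set of projections, from which the individual (f1, f2) pairs can be unfolded when no collisions occur.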
Business Case for Systems Engineering - Is Systems Engineering Effective?
One of the oft-discussed questions in the field of Systems Engineering is how one can justify the expenditure of program or project monies for systems engineering. In short, what is the payback, or business case, for doing systems engineering? Those who are somewhat knowledgeable in the field of systems engineering know what the value is, but what are the tangible results of doing SE on programs and projects? How do we convince our program and project managers that SE is needed, or even essential?
The Systems Engineering Division of the National Defense Industrial Association (NDIA), in conjunction with the Software Engineering Institute (SEI) of Carnegie Mellon University, initiated a comprehensive study in 2008 to try to determine the tangible benefits of performing SE in terms of program/project performance. The study consisted of a series of questions based on SE work products as defined in CMMI® (Capability Maturity Model Integration), the systems engineering process model currently in widespread adoption worldwide. The study concluded that there is indeed a positive correlation between the SE performed and program/project performance in terms of budget (cost), schedule and requirements.
The number of responses to this initial survey was small, on the order of 46 valid responses, from the US defense industry. In order to validate the results with a larger response base that includes commercial as well as non-US organizations, in 2011 the NDIA and SEI partnered with the IEEE Aerospace & Electronic Systems Society to reach a broader audience, and the results of this updated survey, with over 180 valid responses, were completed and released in late 2012.
This lecture will present the results of the updated study of SE performed on programs/projects and program performance in terms of cost, schedule and requirements. It will show that programs with the greater amount of SE performed demonstrate the best performance, while the programs with less SE had a lower rate of success. Since the study correlates program successes in terms of specific SE activities, these results can be used within organizations to assist in establishing systems engineering plans on programs and projects.
The Importance of Sea Clutter Modeling
There is a large body of literature on sea-clutter analysis and modelling. However, most of it is based on coarse-resolution radars with data collected at low grazing angles. Newer maritime airborne radars, which operate at higher resolutions and higher grazing angles, therefore require newer models to characterise the sea clutter. The DST Group Ingara medium-grazing-angle dataset was collected for this purpose and has resulted in a significant amount of work, both internally at the DST Group and through the NATO SET-185 group on high-grazing-angle sea clutter. This talk discusses the modelling of this dataset and its application to realistic sea-clutter simulation and performance prediction modelling.
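A standard starting point for such sea-clutter simulation is the compound-Gaussian (K-distributed) model, in which a slowly varying gamma texture modulates Gaussian speckle. A minimal sketch, with an illustrative shape parameter rather than a value fitted to the Ingara data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 100_000, 0.5   # nu: gamma shape parameter (small nu = spiky clutter)

# K-distributed amplitude: gamma texture modulating unit-power complex speckle
texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n)   # mean-1 texture
speckle = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
clutter = np.sqrt(texture) * speckle

print(np.mean(np.abs(clutter) ** 2))  # average power ~1 by construction
```

Performance prediction then amounts to setting detection thresholds against the heavy amplitude tail that a small shape parameter produces.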
New Concepts in Maritime Detection
Detection in the maritime domain requires the radar return from targets to be distinguishable from the background interference. These radars traditionally use non-coherent processing due to the time-varying and range-varying nature of the Doppler spectra. However, as radar platforms fly higher and look down at steeper angles, the sea clutter power will increase and traditional methods will not be as effective. This talk covers three new approaches for target detection in the maritime domain: the use of stationary wavelet transforms to isolate different spectral features, the use of sparse signal separation algorithms, and the application of the single-snapshot coherent detector. Each of these techniques is demonstrated using either real or realistic simulated sea clutter and shows good potential when compared to traditional processing methods.
Aerospace Cyber-Physical Systems: Avionics, Spaceflight Systems and Unified Traffic Management
Continuous rapid advances in airborne computing, sensors and communication technologies are stimulating the development of integrated multisensor avionics systems for an increasing number of aeronautical and space applications. In particular, intelligent automation and networking technologies are being extensively applied to Unmanned Aircraft Systems (UAS) and space platforms, allowing the development of high-performance multisensor Guidance, Navigation and Control (GNC) systems as well as advanced mission systems with reduced Size, Weight, Power and Cost (SWaP-C). The widespread introduction of Performance-Based Navigation (PBN) is the first step of an evolutionary process from equipment-based to Performance-Based Operations (PBO). PBN specifies that aircraft navigation system performance requirements shall be defined in terms of accuracy, integrity, availability and continuity for the proposed operations in the context of a particular airspace when supported by an appropriate Air Traffic Management (ATM) infrastructure. The full PBO paradigm shift requires the introduction of suitable metrics for Performance-Based Communication (PBC) and Performance-Based Surveillance (PBS). The proper development of such metrics and a detailed definition of PBN-PBC-PBS interrelationships for manned and unmanned aircraft operations represent one of the most exciting research challenges currently facing the avionics research community, with major impacts on air transport safety, airspace capacity and operational efficiency.
In parallel, the International Civil Aviation Organization (ICAO) Aviation System Block Upgrades (ASBUs) rely on a progressive introduction of advanced Communication, Navigation and Surveillance (CNS) technologies, including digital data links, satellite services and Automatic Dependent Surveillance–Broadcast (ADS-B), which will effectively enable the transition to network-centric aviation operations. However, the international aviation community (both civil and military) is now facing important technological and operational challenges to allow a proper development and deployment of the CNS/ATM and Avionics (CNS+A) innovations announced by the FAA Next Generation Air Transportation System (NextGen), the EU Single European Sky ATM Research (SESAR) and other programs such as CARATS (Collaborative Actions for Renovation of Air Traffic Systems) in Japan and OneSky in Australia. In particular, it is essential to address global harmonization issues and to develop a cohesive certification framework for future CNS+A systems simultaneously addressing safety, security and interoperability requirements as an integral part of the Research, Development, Test and Evaluation (RDT&E) process.
In response to these challenges, modern avionics and space systems are becoming more and more cyber-physical, with software and hardware components seamlessly integrated towards performing highly automated/autonomous tasks. These tasks are increasingly demanding and distributed amongst multiple platforms/sub-systems, while recent research trends elicit the introduction of Artificial Intelligence (AI), fault-tolerant architectures and adaptive Human-Machine Interfaces and Interactions (HMI2) to support the development of Trusted Autonomous Systems (TAS).
This lecture addresses key contemporary issues in air and space Cyber-Physical Systems (CPS) research, focussing on the key challenges and opportunities currently faced by the global aerospace industry with the pervasive adoption of automation and AI technologies. First of all, automation is becoming more and more complex, with the widespread adoption of heterogeneous sensor networks and the need for optimization algorithms that deal with an increasing amount of input data (including unstructured, semi-structured and asynchronous data), multiple objectives and constraints. A well-known side effect of this complexity is the reduction or loss of situational awareness of the human operator, who is no longer capable of evaluating the validity and quality of the solutions implemented. Secondly, most of the automation we are introducing is deterministic and not adaptive enough and, paradoxically, it may end up increasing the workload of human operators in certain scenarios instead of alleviating it. This is why instances of cognitive overload are not infrequent despite dealing with highly automated systems. Finally, the kind of automation that is currently being adopted in complex systems is not deeply trusted by humans because it lacks sufficient transparency and/or integrity.
It is therefore essential to develop innovative CPS that address these fundamental challenges by implementing innovative cognitive processing and machine learning techniques towards enhancing human-machine interactions and building trusted autonomy. CPS are at the core of the digital innovation that is transforming our world and redefining the way we interact with intelligent machines in a growing number of industrial sectors and social contexts. Present-day CPS integrate computation and physical processes to perform a variety of mission-essential or safety-critical tasks. From a historical perspective CPS combine elements of cybernetics, mechatronics, control theory, systems engineering, embedded systems, sensor networks, distributed control and communications.
Properly engineered CPS rely on the seamless integration of digital and physical components, with the possibility of including human interactions. This requires three fundamental functions to be present: control, computation and communication. Practical CPS typically combine sensor networks and embedded computing to monitor and control physical processes, with feedback loops that allow physical processes to affect computations and vice-versa. Despite the significant progress in CPS research, the full economic, social and environmental benefits associated with such systems are far from being fully realized. Major investments are being made worldwide to develop CPS for an increasing number of engineering applications, including aerospace, transport, defense, robotics, communications, security, energy, medical, smart agriculture and humanitarian applications.
Current avionics and space systems research is focusing on two main categories of CPS: Autonomous Cyber-Physical (ACP) systems and Cyber-Physical-Human (CPH) systems. ACP systems operate without the need for human intervention or control. For ACP systems to work, formal reasoning is required as these systems are normally used to accomplish mission/safety-critical tasks and any deviation from the intended behavior may have significant implications on human health, well-being, economy, etc. A sub-class is that of Semi-Autonomous Cyber-Physical (S-ACP) systems, which perform autonomous tasks in a specific set of pre-defined conditions but require a human operator otherwise.
A separate category is that of CPH systems. These are a particular class of CPS where the interaction between the dynamics of the system and the cyber elements of its operation can be influenced by the human operator and the interaction between these three elements is regulated to meet specific objectives. CPH systems consist of three main components: physical elements sensing and modeling the environment, the systems to be controlled and the human operators; cyber elements including the communication links and software; and human operators who partially monitor the operation of the system and can intervene if and when needed.
Today, several aerospace CPS implementations are S-ACP systems. This fact limits the achievable benefits and the range of possible applications due to the reduced fault-tolerance and the inability of S-ACP systems to dynamically adapt in response to external stimuli. Many S-ACP architectures are progressively evolving to become either ACP or CPH systems, depending on the specific application. Current research in the aerospace, defense and transport sectors aims at developing robust and fault-tolerant ACP and CPH system architectures that ensure trusted autonomous operations within the given hardware constraints, despite the uncertainties in physical processes, the limited predictability of environmental conditions, the variability of mission requirements (especially in congested or contested scenarios), and the possibility of both cyber and human errors. A key point in these advanced CPS is the control of physical processes from the monitoring of variables and the use of computational intelligence to obtain a deep knowledge of the monitored environment, thus providing timely and more accurate decisions and actions. The growing interconnection of physical and digital elements, and the introduction of highly sophisticated and efficient AI techniques, has led to a new generation of CPS, referred to as intelligent (or smart) CPS (iCPS).
By equipping physical objects with interfaces to the virtual world, and incorporating intelligent mechanisms to leverage collaboration between these objects, the boundaries between the physical and virtual worlds become blurred. Interactions occurring in the physical world are capable of changing the processing behavior in the virtual world, in a causal relationship that can be exploited for the constant improvement of processes. Exploiting iCPS, intelligent, self-aware, self-managing and self-configuring systems can be built to improve the quality of industrial processes across a variety of application domains.
Advances in aerospace CPS research are accelerating the introduction of intelligent automation (both on platforms and ground control systems) and a progressive transition to trusted autonomous operations. Major benefits of these capabilities include a progressive de-crewing of flight decks and ground control centers, as well as the safe and efficient operations of air and space platforms in a shared, unsegregated environment.
In the commercial aviation context, CPS are supporting the transition from two-pilot flight crews to single-pilot operations, with the co-pilot potentially replaced by a remote pilot on the ground. A single remote pilot on the ground, in turn, will no longer be restricted to controlling a single UAS and instead will be allowed to control multiple vehicles, in line with the so-called One-to-Many (OTM) concept.
Important efforts are being devoted to the integration of Unmanned Aircraft Systems (UAS) in all classes of airspace, eliciting the introduction of UAS Traffic Management (UTM) services seamlessly integrated with the existing (and evolving) ATM framework. In particular, UTM requires substantial advances in CNS+A technologies and associated regulatory frameworks, especially to enable low-altitude and Beyond-Line-of-Sight (BLoS) operations. Recent advances in communications, navigation and Sense-and-Avoid (SAA) technology are therefore progressively supporting UTM operations in medium-to-high density operational environments, including urban environments.
Important research efforts are also necessary to demonstrate the feasibility of avionics and CNS/ATM technologies capable of contributing to the emission reduction targets set by the International Civil Aviation Organization (ICAO), national governments and various large-scale international research initiatives. Therefore, growing emphasis is now being placed on environmental performance enhancements, focusing on Air Traffic Flow Management (ATFM), dynamic airspace management, 4-dimensional (4D) trajectory optimization, airport automation and, in the near future, urban flight operations.
In addition to CNS+A technologies for air operations, space CPS are also being researched for a wide range of practical applications including commercial satellites, space transport/tourism, and interplanetary scientific missions. In this context, it is anticipated that economically viable and reliable cyber-physical systems will play a fundamental role in the successful development of the space sector and significant research efforts are needed in the field of reusable space transportation systems, Space Traffic Management (STM), and Intelligent Satellite Systems (SmartSats).
In particular, the operation of space launch and re-entry platforms currently requires considerable airspace segregation provisions, which, if continued, will become increasingly disruptive to civil air traffic. Moreover, the currently limited space situational awareness poses significant challenges to the safety and sustainability of spaceflight due to the rapidly growing number of resident space objects, and particularly orbital debris. The deployment of network-centric CNS+A systems and their functional integration with ground-based ATM in a Space Traffic Management (STM) framework will support a much more flexible and efficient use of the airspace with higher levels of safety. These evolutions will support the transition to what the international aerospace and electronic systems research community has started calling “Unified Traffic Management.”
Navigation Sensors and Systems in GNSS Degraded and Denied Environments (Or How I Learned to Stop Worrying About GPS)
Position, velocity, and timing (PVT) signals from Global Navigation Satellite Systems (GNSS) are used throughout the world. However, the availability, reliability, and integrity of these signals in all environments have become a cause for concern for both civilian and military applications. International news reports about a successful GPS spoofing attack on ships navigating the Black Sea in June 2017 have caused concerns. Prior to that, reports about a successful GPS spoofing attack on a civilian UAV in the USA increased questions over the planned use of UAVs in the national airspace and the safety of flight in general. Jamming of GPS by the North Koreans has interfered with ship and aircraft navigation for several years. Recently, the Russians have apparently equipped cell towers with GPS jamming devices as a defense against attack. All of these incidents have led the navigation community to search for reliable solutions in the face of spoofing and jamming. Based on his own experiences with navigation systems since Sputnik and Apollo, the presenter will give an historical and personal perspective on what is required for civilian and military navigation applications now and in the future.
Inertial System and GPS Technology Trends
This presentation offers a roadmap for the development of inertial sensors, the Global Positioning System (GPS), and integrated inertial navigation system (INS)/GPS technology. This roadmap leads to better-than-1-m-accuracy, low-cost, moving-platform navigation in the near future. Such accuracy will enable military and civilian applications that were unimaginable only a few years ago. After a historical perspective, a vision of the inertial sensor instrument field and of inertial systems for the future is given. Accuracy and other planned improvements for GPS are explained. The trend from loosely-coupled to tightly-coupled INS/GPS systems to deeply-integrated INS/GPS is described, and the synergistic benefits are explored. Some examples of the effects of GPS interference and jamming are illustrated. Expected technology improvements to system robustness are also described. Applications that will be made possible by this new technology include personal navigation systems, robotic navigation, and autonomous systems with unprecedented low cost and accuracy.
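The loosely-coupled end of that integration spectrum can be illustrated with a toy one-dimensional Kalman filter in which the INS propagates position between GPS position fixes. All parameter values below are illustrative, and a real mechanization carries a full IMU error-state model:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, q, r = 0.1, 0.01, 25.0     # step [s], INS process noise, GPS variance [m^2]

x_true, v = 0.0, 0.0           # true position and velocity
x_est, p = 0.0, 1.0            # filter state and covariance
for _ in range(1000):
    v += rng.normal(0.0, 0.1)  # true velocity random walk
    x_true += v * dt
    x_est += v * dt            # INS propagation (IMU velocity assumed known)
    p += q
    z = x_true + rng.normal(0.0, np.sqrt(r))  # GPS position fix
    k = p / (p + r)            # Kalman gain
    x_est += k * (z - x_est)
    p *= 1.0 - k

print(abs(x_est - x_true))  # far below the 5 m raw GPS noise floor
```

Tightly-coupled and deeply-integrated systems push the same fusion down to the pseudorange and correlator level, which is where the robustness gains against interference and jamming come from.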
Inside Apollo: Heroes, Rules and Lessons Learned in the Guidance, Navigation, and Control (GNC) System Development
This abstract was written in March 2019, halfway between the 50th anniversaries of Apollo 8 (December 1968) and Apollo 11 (July 1969). Those two flights were among the greatest explorations of mankind. On Apollo 8, astronauts deliberately put themselves in orbit around the Moon, expecting the rocket engine to later fire and bring them home to Earth. Apollo 11 was mankind’s first visit to the Moon and Tranquility Base. Movies, books, articles, and documentaries have covered the space race. The author will give his thoughts based on 10 years inside the GNC program design, many hours in the Spacecraft Control room at Cape Kennedy monitoring GNC performance through liftoff, and then providing real-time mission support to NASA from MIT in Cambridge, MA.
How It Works – UAV Technology Overview
Commercial-level drones and UAVs are readily available to everyone today. New users can benefit from familiarization with drone technology. This lecture provides a technology overview of unmanned air vehicles and systems (UAV/UAS). A system overview with component descriptions is provided. UAV flight dynamics are discussed. Subsystems and component functions of UAS interfaces are outlined. Hardware and software are demonstrated. Participants will become familiar with the methods and practices of flight operations.
How It’s Managed – UAV Policies and Regulations
UAS technology and applications are advancing at such a rapid rate that operator regulations and public education are struggling to keep up. Before a drone hobbyist can become a commercial UAV operator, she must be knowledgeable of current UAV policies and regulations and pass a certification exam to become a licensed operator. This lecture provides an overview of the knowledge areas necessary to become an FAA-licensed UAV operator. An exploration of current FAA concerns and near-term considerations is also provided.
How It’s Used – UAV Applications and Business Opportunities
UAS technology, sensing, and software are advancing at such a rapid rate that operators are constantly finding new ways to use them in commercial applications. Whereas drones used to be used primarily for photography, by adding advanced sensing systems they can be used for surveillance, inspection, security, and even VLOS/NVLOS material delivery. This lecture provides an overview of the currently popular commercial applications for UAVs as well as an exploration of future possibilities. An overview of business operations will guide entrepreneurs to start their own UAV operator business.
A Course for New Drone Operators – UAV Technology, Regulations, and Applications
Drone operations are becoming more commonplace in society. Any adult may purchase a drone for hobby operation and may choose to pursue licensing for commercial operation. A new pilot may become more effective by understanding drone technology. A new business operator can be more profitable by understanding relevant drone applications. The FAA has recently issued updated rules for drone registration, pilot licensure, and operation. The purpose of this course is to prepare the public to be knowledgeable users of drone technology, effective strategists in drone business applications, and good citizens of drone operator regulations and policies.
Analytic Combinatorics for Multi-Object Tracking and Higher Level Fusion
Exact solutions of many problems in tracking have high computational complexity and are impractical for all but the smallest of problems. Practical implementations entail approximation. There is a bewildering variety of established trackers available, and practicing engineers and researchers often study them almost in isolation from each other without fully understanding what these trackers are about and how they are inter-related. One reason for this is that these filters pose different combinatorial problems, which are usually approached by explicitly enumerating the feasible solutions. The enumeration is usually a highly detailed, hard-to-understand accounting scheme specific to the filter, and the details cloud understanding of the filter and make it hard to compare different filters. The analytic combinatoric approach presented in this tutorial, on the other hand, avoids this heavy accounting burden and provides a solid tool to work with. This tool is the derivative from multivariate calculus, which all engineers readily understand.
This lecture is designed to facilitate understanding of the classical theory of Analytic Combinatorics (AC) and how to apply it to problems in multi-object tracking and higher level data fusion. AC is an economical technique for encoding combinatorial problems—without information loss—into the derivatives of a generating function (GF). Exact Bayesian filters derived from the GF avoid the heavy accounting burden required by traditional enumeration methods. Although AC is an established mathematical field, it is not widely known in either the academic engineering community or the practicing data fusion/tracking community. This tutorial lays the groundwork for understanding the methods of AC, starting with the GF for the classical Bayes-Markov filter. From this cornerstone, we derive many established filters (e.g., PDA, JPDA, JIPDA, PHD, CPHD, MultiBernoulli, MHT) with simplicity, economy, and insight. We also show how to use the saddle point method (method of stationary phase) to find low complexity approximations of probability distributions and summary statistics.
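The core mechanic, encoding a distribution in a GF and recovering probabilities by repeated differentiation, fits in a few lines. The three-sensor Bernoulli detection model below is an illustrative toy, not one of the filters listed above:

```python
from math import factorial

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def derivative(c):
    """d/dx of a polynomial coefficient list."""
    return [k * ck for k, ck in enumerate(c)][1:] or [0.0]

# GF of the number of detections from 3 independent sensors, each with
# detection probability p = 0.5:  G(x) = ((1 - p) + p*x)**3
g = [1.0]
for _ in range(3):
    g = poly_mul(g, [0.5, 0.5])

# AC coefficient-extraction rule: P(k detections) = G^(k)(0) / k!
probs, c = [], g
for k in range(4):
    probs.append(c[0] / factorial(k))
    c = derivative(c)

print(probs)  # [0.125, 0.375, 0.375, 0.125], the Binomial(3, 0.5) pmf
```

The same differentiate-and-evaluate rule, applied to the GF of a full measurement-to-track assignment problem, is what yields the exact Bayesian filters without explicit enumeration.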
Passive Through-Wall Human Sensing with WiFi
Through-wall human sensing is of high interest to both the Defense Department in counter-terrorism applications and the Home Affairs Department for civilian law enforcement purposes, particularly in the modern, highly urbanized environment. Conventional active radar sensors offer good performance, but at the expense of high cost and complexity in the transmitter/receiver design. This talk demonstrates the effectiveness of a passive through-wall human sensing technique using the opportunistic WiFi signal. With a simple software-defined radio (SDR) receiver, the position of an indoor WiFi access point is accurately localized and the Doppler/micro-Doppler signatures of human motions inside the room are clearly detected. It is demonstrated that the WiFi-based passive radar is capable of detecting not only major motions such as walking and hand waving, but also very small human motions such as finger typing and breathing.
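The underlying detection mechanism correlates the surveillance channel against Doppler-shifted copies of the reference signal, i.e. the zero-delay cut of the cross-ambiguity function. In the sketch below the sample rate, noise level and the 40 Hz target Doppler are illustrative, and a random sequence stands in for the actual WiFi waveform:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1000.0, 4096
t = np.arange(n) / fs
ref = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # WiFi stand-in

# surveillance channel: echo Doppler-shifted by 40 Hz (e.g. a walking person)
fd_true = 40.0
surv = ref * np.exp(2j * np.pi * fd_true * t)
surv += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# zero-delay cut of the cross-ambiguity function over a Doppler grid
dopplers = np.arange(-100.0, 101.0, 1.0)
caf = [abs(np.sum(surv * np.conj(ref) * np.exp(-2j * np.pi * fd * t)))
       for fd in dopplers]
print(dopplers[int(np.argmax(caf))])  # 40.0, the recovered Doppler
```

In practice the full delay-Doppler surface is computed, and the direct-path signal must be cancelled before the weak human echoes become visible.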
Countering the Drone’s Threat by Radar – Technical Challenges and Perspectives
Nowadays the increasing use of remote-controlled mini drones has become a real threat to air traffic management. They may be misused for criminal acts or even terrorist attacks, and pose serious threats to public security. Thus, accurate detection and classification of mini drones are essential. Compared with other Electro-Optical/Infrared (EO/IR) sensors, radars show superior performance thanks to their wide-area, long-range and rapid surveillance capabilities in all weather conditions, and are therefore widely adopted as an effective sensor for detecting drones. It is well known that the micro-Doppler generated by a drone’s rotating blades is its most prominent signature, distinguishing it from other slow-moving objects such as birds, persons, and ground vehicles. Thus, micro-Doppler is widely used for drone detection and classification in the literature. However, it should be noted that the micro-Doppler behavior of drones varies significantly across radar operating modes, and it may not be easy to capture the micro-Doppler signatures of a drone in a real scenario using a ground-based surveillance radar. In this talk, radar signal processing techniques for drone detection and classification are introduced, and the micro-Doppler behavior of drones under different radar operating modes is analyzed in detail. Suitable radar-based solutions are proposed for countering the drone threat in different application scenarios.
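The origin of the blade signature can be sketched with a single rotating scatterer: its sinusoidal radial motion phase-modulates the return, producing an instantaneous Doppler that sweeps between plus and minus 2*R*omega/lambda. The rotor speed, blade-tip radius and wavelength below are illustrative:

```python
import numpy as np

fs, lam = 20_000.0, 0.03        # sample rate [Hz], radar wavelength [m]
R, rpm = 0.12, 6_000            # blade-tip radius [m], rotor speed [rev/min]
omega = 2 * np.pi * rpm / 60.0  # rotation rate [rad/s]

t = np.arange(0.0, 0.1, 1 / fs)
# return phase = (4*pi/lambda) * radial displacement of the rotating tip
s = np.exp(1j * (4 * np.pi / lam) * R * np.sin(omega * t))

# instantaneous Doppler from the phase derivative
f_inst = np.diff(np.unwrap(np.angle(s))) * fs / (2 * np.pi)
print(f"peak {f_inst.max():.0f} Hz vs predicted {2 * R * omega / lam:.0f} Hz")
```

Whether this kilohertz-scale sweep is visible depends on the radar operating mode: a surveillance radar with a short dwell and a low pulse repetition frequency may alias or miss it entirely, which is exactly the difficulty the talk addresses.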
Advanced Sensor Concepts, Exploitation, Signal Processing and Systems Engineering
In this talk, a number of concepts and technologies forming the foundation for the exploitation of sensors from a Big Data perspective are presented. A signal processing and systems engineering approach is discussed, and heuristic techniques are presented as being critical to leap-ahead advances in sensor exploitation. While radar-centric in nature, the foundation for a more general sensors approach to Big Data exploitation is discussed. Archival data is considered essential to the optimal exploitation of sensor phenomena, as humans are unable to fully observe or even comprehend the volumes of rapidly changing data available today. Topics as diverse as radio frequency tomography for below-ground imaging, millimeter-wave sensing for exquisite feature extraction, target resonance and dynamic imaging of targets obscured by clutter and cover, as well as space-time adaptive processing are presented. The integrating theme of Big Data exploitation is discussed within the context of these enabling sensor technologies, as is the “Velocity of Sensor Data.”
Maximum-Likelihood Methods in Target Tracking and Fundamental Results on Trackability
If a GLR (generalized likelihood ratio) test cannot make a good decision, then there is no good decision to be made. If the test concerns whether or not a VLO target is present in heavy clutter, the GLR should be the maximum-likelihood probabilistic data association (MLPDA) tracker. The MLPDA is very effective, but it has several operational shortcomings that its close cousin, the maximum-likelihood probabilistic multi-hypothesis tracker (MLPMHT), avoids. We will discuss and compare both algorithms and show some promising new MLPMHT developments. Perhaps most interesting, we are now able to set the MLPMHT threshold accurately and confidently, as would be required for real-time operation. And since one cannot do better than ML, we can now make fundamental statements about which targets can be tracked and which cannot: these statements are essentially a bound, as opposed to algorithm-specific performance experience.
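To make the GLR idea above concrete, here is a minimal sketch of a GLR test for a signal of known shape but unknown amplitude in white Gaussian noise: the unknown amplitude is replaced by its maximum-likelihood estimate and the resulting statistic is thresholded. This is a toy illustration, not the MLPDA/MLPMHT trackers themselves; the signal shape and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
s = np.sin(2 * np.pi * 5 * np.arange(n) / n)   # known signal shape (assumed)
sigma = 1.0                                    # noise standard deviation (assumed)

def glr_stat(x):
    """GLR statistic for an unknown-amplitude signal in white Gaussian noise."""
    a_ml = (x @ s) / (s @ s)                   # ML estimate of the amplitude
    return (a_ml ** 2) * (s @ s) / sigma ** 2  # chi-squared(1) under H0

x_h1 = 2.0 * s + sigma * rng.standard_normal(n)  # target present
x_h0 = sigma * rng.standard_normal(n)            # noise only
```

Under H0 the statistic is chi-squared with one degree of freedom, so a fixed threshold yields a constant false-alarm rate; under H1 the statistic grows with the squared amplitude, which is the sense in which "if the GLR cannot decide, nothing can."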
A Primer on Various Approaches to Data Association
Threading the measurements (well, many call them “hits” or “plots”) of radar, sonar, or imaging observations into a credible, smooth, and reportable trajectory requires a filter. We'll discuss those – Kalman, unscented, particle, etc. – briefly. But the main topic here arises because one cannot even begin to filter without knowing which hits come from which targets, and which hits are complete nonsense (clutter). When wrapped inside some scheme for such data association, a filter becomes a tracker. This talk is intended to explain, at a fairly high level, the intuition behind some of the popular tracking algorithms.
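As a toy illustration of data association, the sketch below implements global nearest neighbor (GNN): measurements are gated around predicted track positions and then assigned to tracks by minimizing total distance with the Hungarian algorithm. The scenario and gate size are illustrative assumptions; GNN is one of the simpler schemes among the alternatives the talk surveys.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical scenario: two track predictions and three measurements,
# the last of which is clutter far from any track.
tracks = np.array([[0.0, 0.0], [10.0, 10.0]])              # predicted positions
meas = np.array([[0.4, -0.2], [9.7, 10.3], [50.0, 50.0]])  # hits (last: clutter)

gate = 5.0  # gating radius (assumed)
cost = np.linalg.norm(tracks[:, None, :] - meas[None, :, :], axis=2)
cost[cost > gate] = 1e6  # effectively forbid out-of-gate pairings

rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
# Keep only in-gate assignments; unassigned hits are treated as clutter.
assoc = {r: c for r, c in zip(rows, cols) if cost[r, c] <= gate}
```

Here the clutter hit is rejected by the gate and each track receives its nearby measurement; a full tracker would wrap a Kalman-style filter inside this association step.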
Distributed Detection and Data Fusion
The initial paper on the subject of distributed detection, by Tenney and Sandell, showed that under a fixed fusion rule, for two sensors with one-bit outputs, the optimal Bayes sensor decision rule is a likelihood ratio test. It has since been shown that the optimal fusion rule for N sensors is a likelihood ratio test on the data received from the sensors. Reibman and Nolte, and Hoballah and Varshney, generalized the results to N sensors with optimal fusion, again with the restriction of one-bit sensor outputs; this restriction was later relaxed to multi-bit quantizations. In this “primer” talk we explore a number of issues in distributed detection, including some pathologies, the benefits of fusion, optimal design, structures for decision flow, consensus, sensor biases, feedback, deliberate obfuscation (i.e., security), and censoring. We also devote some time to distributed estimation (i.e., fusion for tracking): why is it difficult, and what seems to work best?
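The optimal fusion rule described above, a likelihood ratio test on the sensor outputs (often called the Chair-Varshney rule for one-bit decisions), can be sketched in a few lines: each sensor's binary decision contributes a log-likelihood-ratio weight determined by that sensor's detection and false-alarm probabilities. The probabilities and prior below are illustrative assumptions.

```python
import math

def fuse(u, pd, pf, p1=0.5):
    """Declare H1 iff the fused log-likelihood ratio exceeds log(P0/P1).

    u  : list of one-bit sensor decisions (1 = "target", 0 = "no target")
    pd : per-sensor detection probabilities
    pf : per-sensor false-alarm probabilities
    p1 : prior probability of H1 (assumed)
    """
    llr = 0.0
    for ui, pdi, pfi in zip(u, pd, pf):
        if ui == 1:
            llr += math.log(pdi / pfi)              # sensor said "target"
        else:
            llr += math.log((1 - pdi) / (1 - pfi))  # sensor said "no target"
    return llr > math.log((1 - p1) / p1)

pd = [0.9, 0.8, 0.7]   # per-sensor detection probabilities (assumed)
pf = [0.1, 0.2, 0.3]   # per-sensor false-alarm probabilities (assumed)
```

Note that a reliable sensor's vote carries more weight than an unreliable one's, which is exactly why naive majority voting is generally suboptimal.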
Inertial Navigation: Sensing and Computation into the Future
Acquiring attitude, velocity, and position information is fundamental to manipulating any moving body. Inertial navigation is a self-contained method of achieving this goal by integrating inertial measurements from triads of gyroscopes and accelerometers. Over half a century has seen tremendous effort in fabricating inertial sensors with ever-improving performance, as well as in designing advanced inertial navigation algorithms for strapdown systems so as not to compromise the quality of the inertial sensors. This lecture will review the forefront of high-quality inertial sensors, including but not limited to optical, hemispherical resonator gyroscope (HRG), and atom types. It will also review the history of algorithm development and demonstrate, through analyses and examples, how present-day INS algorithms are not always able to deliver the target precision of the well-known DARPA PINS project, as a result of fundamental approximations in handling motion-induced noncommutativity errors. A new approach to INS computation based on functional iteration can bring the noncommutativity errors down to almost machine precision at affordable cost. This approach paves a solid algorithmic road for the forthcoming ultra-precision INS of meter-level accuracy, as well as for existing dynamic applications.
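The strapdown attitude integration at the heart of the above can be sketched minimally: body angular rate is integrated into a quaternion, one first-order step per gyro sample. Real INS algorithms use multi-sample coning corrections to control the noncommutativity errors the lecture discusses; this first-order scheme and its numbers are illustrative only.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def integrate_gyro(omega, dt):
    """First-order attitude update q <- q * dq for each body-rate sample."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for w in omega:
        angle = np.linalg.norm(w) * dt
        axis = w / np.linalg.norm(w) if angle > 0 else np.zeros(3)
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        q = quat_mul(q, dq)
    return q / np.linalg.norm(q)

# Constant 90 deg/s rotation about z for 1 s at 100 Hz -> 90 deg of yaw.
omega = np.tile([0.0, 0.0, np.pi / 2], (100, 1))
q = integrate_gyro(omega, dt=0.01)
```

With a fixed rotation axis the per-sample updates commute and the result is exact; when the axis itself rotates, the updates no longer commute, and that noncommutativity is precisely the error source the lecture's functional-iteration approach targets.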
Non-Convex Optimization for Active and Passive Radar
This talk will present recent advances in passive and active multi-static radar from an optimization viewpoint, specifically non-convex optimization, with the goal of designing provably exact and computationally efficient novel algorithms. A variety of challenging active and passive radar imaging problems encountered in real-world and novel radar applications will be introduced, including passive and active imaging with limited bandwidth, a limited number of measurements, and unknown transmitted waveforms; imaging in the presence of phase errors, additive noise, and clutter; imaging without phase information; and multi-static interferometric imaging. The talk will motivate, describe, and illustrate the application of convex and non-convex optimization principles to these problems. It will introduce novel methods of low-rank matrix recovery and Wirtinger Flow, along with recent exciting applications in phaseless synthetic aperture radar, autofocus, passive radar, and super-resolution imaging.
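As a toy illustration of the Wirtinger Flow idea behind phaseless imaging, the sketch below attempts to recover a complex vector from quadratic measurements y = |Ax|^2 using a spectral initialization followed by Wirtinger gradient steps. Problem sizes, step size, and iteration count are illustrative assumptions, not the talk's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 160   # signal dimension and number of measurements (assumed)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x_true) ** 2      # phaseless (intensity-only) measurements

def loss(z):
    """Least-squares misfit on the intensity measurements."""
    return np.mean((np.abs(A @ z) ** 2 - y) ** 2) / 2

# Spectral initialization: leading eigenvector of (1/m) A^H diag(y) A.
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())

loss0 = loss(z)
# Wirtinger gradient steps, step size scaled by the initial squared norm.
mu = 0.05
norm0 = np.linalg.norm(z) ** 2
for _ in range(500):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
    z = z - (mu / norm0) * grad
```

The recovered z matches x_true only up to a global phase, the inherent ambiguity of phaseless measurements; the talk's contribution is proving when such non-convex iterations are exact.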
Machine Learning for Radar Sensing and Imaging
Machine learning (ML) has dramatically advanced the state of the art for many problems in science and engineering. Inspired by these developments, ML has drawn increasing attention in radar signal processing. Applications of ML have predominantly focused on advancing automatic target recognition algorithms across a multitude of radar data types, including synthetic aperture radar, micro-Doppler signatures, and high-resolution range profiles. Performance gains achieved in classification have spurred research into potential applications of ML to other facets of radar design; these novel perspectives on the role of machine learning in the radar sensing process are the focus of this talk. I will present innovative techniques that leverage machine and deep learning for passive radar, image reconstruction, radar resource management, waveform estimation and design, and automatic target recognition. Together, these topics draw attention to new and innovative ideas for applying ML to radar sensing and imaging.