WInnComm 2018 Technical Session Presentation Abstracts


Wednesday, 14 November

10:30 - 12:00

TS1: Machine Learning and AI for Dynamic Spectrum Management I
TS2: Topics in Advanced Communications

14:00 - 15:30

TS3: Machine Learning and AI for Dynamic Spectrum Management II
TS4: Topics in Spectrum Sharing



10:30 - 12:00

TS1: Machine Learning and AI for Dynamic Spectrum Management I

Leveraging Link Adaptation for Fully Autonomous and Distributed Underlay Dynamic Spectrum Access Based on Neural Network Cognitive Engine
Andres Kwasinski and Fatemeh Shah-Mohammadi (Rochester Institute of Technology, USA)
Underlay Dynamic Spectrum Access (DSA) addresses the challenge of radio spectrum scarcity by allowing Cognitive Radios (CRs) in a secondary network (the "secondary users" - SUs) to share a spectrum band with an incumbent primary network (PN). In underlay DSA, CRs transmit over the same portion of the spectrum being used by the PN, at the same time, while limiting their transmit power so that the interference they create on the PN remains below a tolerable threshold. Despite having been extensively studied, the main challenges in underlay DSA remain how to establish the interference threshold for the PN links and how the SUs can autonomously become aware of the interference they create on the PN, especially under the ideal operating setup in which no information is exchanged between the primary and secondary networks. In contrast to previous works that assume the SUs can access the control feedback channel of the PN, we will present a technique that does not rely on any information exchange between the networks but instead takes advantage of the primary network's use of adaptive modulation and coding (AMC). More specifically, the proposed underlay DSA mechanism only requires estimating the adapted modulation scheme being used on the nearest PN link (this can be achieved by performing modulation classification on the radio waveform from the PN link that the CR passively listens to during spectrum sensing). Since the full AMC mode (modulation order and channel coding rate) chosen for transmission depends on the link signal-to-interference-plus-noise ratio (SINR), by estimating this mode a CR can learn the SINR experienced by a primary link.
However, in a general setting with no exchange of information between the primary and the secondary network, while a CR can apply signal processing techniques during spectrum sensing to infer the modulation order being used by a primary transmitter, there is no previously known practical way to infer the throughput experienced by the primary link. The technique to be presented applies a nonlinear autoregressive exogenous neural network (NARX-NN) cognitive engine in the SUs to infer, without tapping into feedback or control channels of the other network, the full AMC mode used in that network's transmissions, and leverages this inference in the realization of a fully autonomous and distributed underlay DSA scheme. For this, the NARX-NN cognitive engine outputs an estimate of the throughput at the nearest primary link, using as inputs the estimated modulation order on the same link and the transmit power for a sequence of probe messages. Moreover, leveraging the use of adaptive modulation in the PN also allows the CR to establish the PN interference threshold. This is because the use of adaptive modulation in a radio link allows the background noise to increase up to a certain level before the average throughput in the network starts to decrease. In the context of underlay DSA where the PN does not exchange any information with the SN, the interference imposed on the PN by an underlay-transmitting CR can be seen as background noise that can be increased up to the level at which it does not affect the average throughput in the PN. In the technique to be presented, a CR couples these observations with the cognitive engine's estimate of the full AMC mode on the nearest PN link to decide on its transmit power and other waveform settings.
Simulation results will show that the presented technique is able to limit the reduction in PN relative average throughput to within a prescribed target maximum value, while at the same time finding transmit settings for the SUs that yield as large a throughput in the SN as the PN interference limit allows. As such, while succeeding in its main goal of autonomously and distributedly determining the transmit power of the SUs such that the interference they create remains below the PN limit, the presented technique is also able to manage the tradeoff between the effect of the SN on the PN and the achievable throughput in the SN. Specifically, simulation results will show that for the proposed system with a target primary network maximum relative average throughput reduction of 2%, the achieved relative change is less than 2.5%, while at the same time achieving useful average throughput values in the secondary network between 50 kbps and 132 kbps. Moreover, it will be seen that the enabling factor in the operation of the proposed technique is the ability of the NARX neural network cognitive engine to accurately predict the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the primary and secondary networks (e.g., access to feedback channels). This ability also results in a significant increase in transmission opportunities in the SN compared to schemes with the same basic approach that can only use modulation classification information from the primary link, instead of the full AMC mode.
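The power-selection logic described above can be sketched in a few lines. This is a toy illustration, not the authors' NARX-NN system: the AMC table, the interference model, and the 10 mW search grid are all invented, and a simple table lookup stands in for the neural-network throughput estimate.

```python
import math

# Hypothetical AMC table: (minimum SINR in dB, rate in bps/Hz), highest first.
AMC_TABLE = [(22.0, 5.55), (16.0, 3.90), (10.0, 2.40), (4.0, 1.18), (-2.0, 0.38)]

def amc_rate(sinr_db):
    """Rate the PN link achieves at a given SINR (toy step function)."""
    for threshold, rate in AMC_TABLE:
        if sinr_db >= threshold:
            return rate
    return 0.0

def pn_sinr_db(cr_power_w, base_sinr_db=20.0, interference_gain=2.0):
    """Toy model: CR interference raises the PN noise floor, lowering SINR."""
    base_lin = 10 ** (base_sinr_db / 10)
    degraded = base_lin / (1.0 + interference_gain * cr_power_w)
    return 10 * math.log10(degraded)

def max_cr_power(target_reduction=0.02):
    """Largest CR power keeping the predicted PN rate loss within the target."""
    nominal = amc_rate(pn_sinr_db(0.0))
    best = 0.0
    for p in [i / 100 for i in range(101)]:   # 0 .. 1 W in 10 mW steps
        rate = amc_rate(pn_sinr_db(p))
        if (nominal - rate) / nominal <= target_reduction:
            best = p
    return best
```

Because adaptive modulation keeps the PN rate constant until the SINR crosses a mode boundary, the search finds the largest power that leaves the PN in its current AMC mode.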
 

Handheld Wireless Prognostic Steering of Asset Sustainment Using Artificial Intelligence - Withdrawn

Gregory Thompson (1350 Donax Avenue & Andromeda Systems Inc., USA)
Many assets have sustainment solutions in place that are proposed to the user upon identification of a trigger/problem/failure. Presently, solutions are prioritized by the number of successful fault resolution executions. A requirement exists to utilize intuitive optimization algorithms to prioritize successful fault resolution at the handheld level or point of performance. This new methodology considers cost and performance (down time) as well as success rate, improving availability and affordability. A successful intuitive algorithm solution will prioritize user actions to prevent, mitigate, or resolve challenges while minimizing the cost and downtime associated with each action. Today's high-technology systems employ sophisticated sustainment systems to leverage advanced technology, artificial intelligence, and system self-maintenance. One of the key Business Analytics (BA) functions in Prognostics and Health Management (PHM) is the ability to manage the health of a system or component and efficiently return it to an operational status. Increasingly, original equipment manufacturers (OEMs) are integrating a Diagnostic Steering Algorithm to Prioritize Pre-Packaged Maintenance Solutions into their Computerized Maintenance Management Systems (CMMS). This software tool is used to aid the maintainer in resolving faults and anomalies and is usually initiated from within a Work Order (WO) for every scheduled and unscheduled maintenance activity. This software module offers the maintainer a virtual equivalent of a legacy asset fault isolation manual. It provides the maintainer the ability to troubleshoot failure modes on components, subsystems, and systems more effectively, closer to the point of performance, while minimizing training resources. By using a smartphone application, the user avoids having to return the system to maintenance and instead returns the asset to service by selecting from a prioritized solution set at the point of performance.
This proposal offers a solution utilizing a learning algorithm that uses maintenance data collected at the point of performance to mature a Diagnostic Steering algorithm when Prioritizing Pre-Packaged Maintenance Solutions. Key challenges and major technical hurdles must be addressed. First, the accuracy, availability, and accessibility of the relevant data are essential to the success of this application. Second, measuring the performance metrics relative to cost, performance, and success probability is critical but difficult to achieve; every available means of wireless data collection must be employed. Third, the learning portion of the algorithm must teach itself after each iteration, find solutions faster, skip or eliminate risky paths, and record results correctly in order to compare across several systems, geographies, environments, and utilization profiles. The initial test trials for the proof of concept were conducted on a subsystem of a squadron of fighter jets; the data available for our initial assessment was sufficient but did not provide a high enough level of confidence. The second and third trials produced better results, encouraging us to seek additional funding and to test on other projects such as data centers and tugboats. One unexpected and significant result from the test trials was a reduction in false positives, because the learning algorithm notices the trend in items that are returned to service as okay (RTOK). This proposal provides a novel approach to secure, closed-loop systems for monitoring, control, diagnostics, and prognostics of complex systems, facilities, communications, and environmental equipment using handheld wireless devices. This will serve to provide monitoring and control of operational system health and integrity, along with associated data analytics giving the cost, performance, and probability of success for each recommended course of action.

 

Cognitive Anti-jamming Satellite-to-Ground Communications on NASA's SCaN Testbed

Dale Mortensen, P. E. (NASA Glenn Research Center, USA); Sudharman Jayaweera (Bluecom Systems and Consulting, LLC, USA)
Machine learning aided cognitive anti-jamming communications is designed, developed, and demonstrated on a live satellite-to-ground link. A wideband autonomous cognitive radio (WACR) is designed and implemented as a hardware-in-the-loop (HITL) prototype. The cognitive engine (CE) of the WACR is implemented on a PC, while the software-defined radio (SDR) platform utilizes two different radios for spectrum sensing and actual communications. The cognitive engine performs spectrum knowledge acquisition over the complete spectrum range available for the SATCOM system operation and learns an anti-jamming communications protocol to avoid both intentional jammers and inadvertent interferers using reinforcement learning. When the current satellite-to-ground link is jammed, the cognitive engine of the ground receiver directs the satellite transmitter to switch to a new channel that is predicted to be jammer-free for the longest possible duration. The end-to-end, closed-loop system was tested on NASA's Space Communications and Networking (SCaN) testbed on the International Space Station (ISS). The experimental results demonstrated the feasibility of satellite-to-ground cognitive anti-jamming communications along with excellent anti-jamming capability of machine-learning aided cognitive protocols against several different types of jammers. Index Terms—Cognitive anti-jamming communications, cognitive radios, machine learning, Q-learning, reinforcement learning, satellite communications, wideband autonomous cognitive radios.
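The channel-switching policy described above can be illustrated with tabular Q-learning. This is a toy sketch, not the WACR implementation: the five-channel sweeping jammer, the reward (slots until the chosen channel is jammed), and all learning parameters are assumptions.

```python
import random

random.seed(0)

N_CHANNELS = 5
ALPHA, EPSILON = 0.1, 0.2

# Q[s][a]: estimated jam-free duration when the jammer occupies channel s
# and the radio switches to channel a.
Q = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]

def choose(state):
    """Epsilon-greedy action selection over channels."""
    if random.random() < EPSILON:
        return random.randrange(N_CHANNELS)
    row = Q[state]
    return row.index(max(row))

for episode in range(20000):
    state = random.randrange(N_CHANNELS)       # jammer's current channel
    action = choose(state)
    # A sweeping jammer advances one channel per slot, so the reward is the
    # number of slots until it reaches the channel we switched to.
    reward = (action - state) % N_CHANNELS     # 0 = we picked the jammed channel
    Q[state][action] += ALPHA * (reward - Q[state][action])

def best_channel(jammer_channel):
    """Learned policy: the channel predicted to stay jammer-free the longest."""
    return Q[jammer_channel].index(max(Q[jammer_channel]))
```

With this sweep model the learned policy settles on the channel the jammer visited most recently, i.e. (s - 1) mod 5, which stays jammer-free for the longest time.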

 

Beyond Cognitive Radio: Embedding Artificial Intelligence and Machine Learning - Withdrawn
Vincent J Kovarik, Jr (VINMA Systems, LLC)
Cognitive Radio has been an evolving area within the communications community for more than a decade. This research initially focused on the dynamic use of spectrum as a means to maximize usage of the available spectrum. An evolutionary extension of this technology was the application of dynamic spectrum access to dynamically avoid interference. This paper takes a different, systemic perspective of the radio system, embedding artificial intelligence and machine learning as a logical extension of radio management. Capabilities such as detecting trends in operational state, predicting or anticipating behaviors, and identifying patterns of usage are discussed, and how these capabilities can be applied to improve the operational capabilities of the system is presented.
 
Spectrum Sensing and Cognitive Techniques in High Frequency (HF) Communication Networks - Withdrawn
Noel Teku and Tamal Bose (University of Arizona, USA)
The High Frequency (HF) band, ranging from 3-30 MHz, has been a popular medium for maintaining long-range communications at low cost by bouncing signals off the ionosphere. As a means of making communications in the HF band less reliant on manual operation, the Automatic Link Establishment (ALE) protocol was introduced. ALE has been adopted in multiple military and NATO standards, serving as a guideline for the design of second and third generation (2G/3G) HF radios. Under ALE, a station performs a link quality analysis (LQA) to evaluate the viability of available frequencies for transmitting to another station based on a specific ranking metric (e.g., BER, SNR). Once this test is completed, the station will utilize the channels with the highest LQA scores for initial linking attempts. However, due to frequent ionospheric variations and instability, the stored LQA scores may not accurately reflect the current channel conditions. In addition, if the available channels are unsuitable for use, a link may not be established. Future revisions of ALE will need to sense the channel and adapt the linking process accordingly in order to maintain reliability and more effectively prevent collisions between users. Thus, the objective of this paper is to present a survey of research efforts that investigate different cognitive/spectrum sensing approaches that can be used to improve the performance of ALE.
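A minimal sketch of the LQA-based ranking step described above. The composite score and its weights are invented for illustration and are not taken from any ALE standard.

```python
# Rank candidate HF channels by stored link quality analysis (LQA) results
# and return the order in which linking attempts should be made.

def lqa_score(snr_db, ber):
    """Toy composite score: prefer high SNR and low bit error rate.
    The 10.0 BER weight is an illustrative assumption."""
    return snr_db - 10.0 * ber

def rank_channels(lqa_table):
    """lqa_table: {channel_khz: (snr_db, ber)} -> channel list, best first."""
    return sorted(lqa_table, key=lambda ch: lqa_score(*lqa_table[ch]), reverse=True)

# Hypothetical stored LQA results per channel (kHz: (SNR in dB, measured BER)):
stored_lqa = {
    5320: (12.0, 0.08),
    7535: (18.0, 0.30),
    9880: (6.0, 0.01),
}
```

A cognitive revision of ALE would refresh `stored_lqa` from live sensing before ranking, rather than relying on stale scores.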


TS2: Topics in Advanced Communications

Heterogeneous System Architecture 1.2

John Glossner (Optimum Semiconductor Technologies, USA)

We describe the version 1.2 Heterogeneous System Architecture (HSA) specifications ratified in 2Q2018. HSA is a runtime specification, system architecture, and virtual instruction set architecture (HSAIL) that allows multiple heterogeneous processors to execute programs. Building upon cache-coherent shared virtual memory, the original specifications covered CPU+GPU cooperation. Version 1.2 greatly expands the processor support to include digital signal processors, fixed-function accelerators, and FPGAs. In addition, the v1.2 specification includes support for vector execution models through the addition of HSAIL instructions, including parallel for. This has also enabled support for direct C++17 parallelization using the PSTL. Other enhancements include unified profiling of heterogeneous processors and a common debug API. Open-source implementations of all tools, runtimes, device drivers, compilers, and finalizers are freely available. Production hardware is shipping from AMD, and their Ubuntu implementation of HSA (called ROCm) is available as open source.

 

Extending the USRP/RFNOC Framework for Implementing Latency-Sensitive Radio Systems

Joshua Monson (University of Southern California -- Information Sciences Institute, USA); Zhongren Cao (C-3 Comm Systems, LLC, USA); Pei Liu (New York University, USA); Matthew French and Travis Haroldsen (USC/ISI, USA)
In this paper, we discuss the experiences, lessons learned, and development effort to implement a broadband CSMA/CA-based OFDM transceiver with stringent timing requirements on a resource-limited Ettus E310 USRP Software Defined Radio (SDR) platform. To accomplish this objective, we altered and extended the default USRP/RFNOC framework. First, we removed over-provisioned FPGA resources; second, we altered the build flow to incorporate the Vivado IP integrator; third, we constructed a softcore processor for low-latency operations and added a secondary bus to enable low-latency control communications between the embedded ARM and the softcore processor; and last, but not least, we provided the TX module direct access to the RX packet buffer. With these extensions, we successfully prototyped and demonstrated a CSMA/CA-based OFDM transceiver network supporting link-layer cooperative transmissions using multiple USRP E310s.

 

Using Standardized Semantic Technologies for Discovery and Invocation of RF-based Microservices

Mieczyslaw Kokar (Northeastern University, USA); Jakub Moskal and Olivier Hurez-Martin (VIStology, Inc., USA)
In this paper, we describe a microservices-based approach to the implementation of distributed RF applications, in which devices can request and execute microservices on behalf of other nodes in the RF network. All device descriptions and requests for services are expressed in a formal semantic language.

 

Energy-Efficient Transmission in 5G Communications

Jun Chen (National Instruments, USA)
Energy-efficient transmission for battery-powered mobiles is becoming crucial in the era of 5th Generation (5G) networks to extend device communication time on a battery charge. In this paper, we present a method to maximize energy efficiency (EE) at a mobile station transmitter equipped with multiple antennas. The system leverages channel state information reference signals (CSI-RS) for energy-efficient transmission with adaptive modulation schemes and linear precoding. Methods and simulation results show that the solution offers more than a 10x EE performance improvement relative to traditional architectures in multipath fading environments.
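The energy-efficiency objective described above is commonly written as EE = rate / total consumed power, with a fixed circuit power in the denominator. The following sketch picks the best transmit power from a candidate set under an assumed single-link Shannon-rate model; all constants are invented, and the paper's CSI-RS-based precoding is not modeled.

```python
import math

BANDWIDTH_HZ = 20e6      # assumed channel bandwidth
NOISE_W = 1e-13          # assumed noise power at the receiver
CHANNEL_GAIN = 1e-10     # assumed average channel power gain
CIRCUIT_W = 0.1          # assumed fixed circuit power consumption

def rate_bps(p_tx_w):
    """Shannon rate for the assumed single-link model."""
    return BANDWIDTH_HZ * math.log2(1 + CHANNEL_GAIN * p_tx_w / NOISE_W)

def energy_efficiency(p_tx_w):
    """Bits delivered per joule consumed (transmit plus circuit power)."""
    return rate_bps(p_tx_w) / (p_tx_w + CIRCUIT_W)

def best_power(candidates):
    """Transmit power in the candidate set that maximizes EE."""
    return max(candidates, key=energy_efficiency)
```

Because the rate grows only logarithmically with power while consumption grows linearly, EE rises and then falls, so an interior power level wins rather than the maximum.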


14:00 - 15:30

TS3: Machine Learning and AI for Dynamic Spectrum Management II - Session Canceled

Cognitive Engine Testbed for Vehicular Communications Withdrawn

William R Anderson (Wofford College, USA); Ashley Aponik (Yale University, USA); Youssef Daoud (Tennessee Technological University, USA); Noel Teku, Garrett Vanhoy and Tamal Bose (University of Arizona, USA)
Ad-hoc networks have the potential to increase the safety and reliability of autonomous vehicles. The amount of radio spectrum available for such networks is limited, however. The use of cognitive radio, especially when integrated with reinforcement learning algorithms, may help to ease the issue of limited spectrum by finding optimal transmission policies and detecting the presence of other users, especially in a scenario where a primary user and secondary user are contesting for spectrum. This paper presents a testbed for simulating cognitive engines in these networks using a variety of reinforcement learning algorithms, including ε-greedy, Softmax Strategy, and Q-Learning. The goal of these cognitive engines is to learn to choose the best modulation and coding rates given various channel models. The user can choose the desired channel model and optimization goal (i.e. maximize throughput, minimize bit error rate). The cognitive engine then learns the optimal coding rates and modulation schemes for the given environment, and the testbed displays the performance of each learning algorithm.
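The ε-greedy strategy mentioned above can be sketched as a simple bandit over (modulation, code rate) pairs. The action set, success probabilities, and learning parameters below are invented; the reward is the throughput achieved in one slot.

```python
import random

random.seed(1)

# Hypothetical action set: (name, spectral efficiency, success probability)
# on a fixed toy channel where denser constellations fail more often.
ACTIONS = [
    ("BPSK 1/2",  0.5, 0.99),
    ("QPSK 1/2",  1.0, 0.95),
    ("QPSK 3/4",  1.5, 0.80),
    ("16QAM 1/2", 2.0, 0.40),
    ("16QAM 3/4", 3.0, 0.05),
]

def expected_throughput(a):
    _, eff, p_ok = ACTIONS[a]
    return eff * p_ok

estimates = [0.0] * len(ACTIONS)   # sample-average throughput per action
counts = [0] * len(ACTIONS)
EPSILON = 0.1

for step in range(5000):
    if random.random() < EPSILON:
        a = random.randrange(len(ACTIONS))      # explore a random action
    else:
        a = estimates.index(max(estimates))     # exploit the current best
    _, eff, p_ok = ACTIONS[a]
    reward = eff if random.random() < p_ok else 0.0   # throughput this slot
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]
```

Here "QPSK 3/4" has the highest expected throughput (1.5 x 0.80 = 1.2), and the bandit's estimates converge to prefer it.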

 

On the Effectiveness of Obfuscation Techniques Against Traffic Classification - Withdrawn

Garrett Vanhoy and Tamal Bose (University of Arizona, USA)
Several studies have revealed the alarming fact that the details of a user's online browsing activity can be discerned to various degrees by analyzing the flow of data between client and server, despite the employment of modern encryption techniques. Multiple solutions to this critical security issue have been proposed, but a thorough evaluation of how each of these solutions manages the design trade-off between its effectiveness and the resource overhead it introduces has yet to be performed. An ideal solution would completely prevent a malicious eavesdropper from discerning user activity to any degree while not introducing additional resource overhead, such as additional computational resources or additional effective bandwidth. In this work, the most recent solutions to this issue are applied to the same data streams, and their effectiveness is measured by the ability of a deep neural network to discern (classify) user activity based on the captured activity of an emulated Wi-Fi interface between client and server, as well as live packet captures.

 


TS4: Topics in Spectrum Sharing

The IEEE 1900.5.2 Standard for Modeling Spectrum Consumption

Carlos E. Caicedo Bastidas (Syracuse University, USA); John A. Stine (The MITRE Corporation, USA)
The recently completed IEEE 1900.5.2 Standard for Modeling Spectrum Consumption is poised to be a key component in the support and implementation of Dynamic Spectrum Management and Dynamic Spectrum Access operations in the near future. The standard defines a data model for spectrum consumption models (SCMs) and procedures to arbitrate compatibility among combinations of RF devices and/or systems that have expressed the boundaries of their spectrum use with SCMs. SCMs built in accordance with IEEE 1900.5.2 allow spectrum users to describe their anticipated use of spectrum so that various stakeholders can understand and resolve spectrum use conflicts even in cases where some system and deployment characteristics are not fully shared. SCMs are ideal for users to negotiate boundaries for spectrum sharing and spectrum trading. The modeling methods were designed to make compatibility computations tractable, and so SCMs enable the creation of algorithms to optimize the use of spectrum across multiple users. SCMs are machine readable and can be used to provide spectrum use policy to spectrum dependent systems (SDSs) for autonomous selection of RF channels. The SCMs are also a means for SDSs to autonomously collaborate in the use of spectrum.

 

Licensed Shared Access (LSA) Evolution Deploying US CBRS Protocols and Sensing the Secondary User

Heikki Kokkinen (Fairspectrum, Finland); Seppo Yrjölä (Nokia, Finland); Nuno Borges Carvalho (University of Aveiro/IT Aveiro, Portugal); José Pedro Borrego (ANACOM, Portugal)
This paper presents a spectrum sharing system concept in which the primary user senses the secondary system and reports the sensing information to the spectrum manager using the measurement parameters of the CBRS SAS-CBSD protocol. The spectrum manager changes the operating parameters of the secondary system as required. In sharing scenarios where the primary user accesses the spectrum locally and temporarily, and the secondary user is always active, the most cost-efficient sensing-based interference protection is to carry out sensing only when the primary user accesses the spectrum and at the primary user's location. Most spectrum sharing proposals so far either use administrative spectrum assignment information available from the national regulatory authority or carry out sensing over vast areas or at the location of the secondary user. The administrative information must be maintained, and it may not be well suited to fast-response spectrum use. Administrative-information-based interference protection relies on propagation modeling, and in current spectrum sharing systems the geographic information does not take into account individual buildings or other large man-made obstacles between the primary and secondary user. Sensing at a location other than the primary user's introduces error compared to a measurement at the primary user's receiver antenna. When the primary user's power levels are lower than or similar to the secondary user's power levels, the secondary use may significantly interfere with the sensing of the primary signal. Also, from a timing perspective, it is better to decrease the power levels of the secondary users to non-interfering levels before primary spectrum access begins. All of these issues are solved when the sensing takes place at the receiver antenna of the primary user just before the primary use begins.
In this study, sensing at the primary user's location is tested on the 2.3 GHz band, where a mobile operator is the secondary user and PMSE wireless camera communication is the primary user. Actual mobile operators, broadcasters, and the national regulatory authority participate in the pilot setup. The mobile network uses commercial base stations, a core network, and a network management system.

 

Demonstrating Changing Priorities in Dynamic Spectrum Access

Topi Tuukkanen (Finnish Defence Research Agency, Finland); Heikki Kokkinen (Fairspectrum, Finland); Seppo Yrjölä (Nokia, Finland); Jaakko Ojaniemi (Aalto University, Finland); Arto Kivinen (Fairspectrum Ltd., Finland); Jarkko Paavola (Turku University of Applied Sciences, Finland)
Increasing consumer consumption of mobile data challenges spectrum availability. The mandated tasks of military and public safety need secure spectrum access that varies in time and location even for normal/peacetime duties not excluding responses to large-scale disasters, catastrophes, hybrid warfare or homeland defense scenarios. Rapid short-term changes in time and space are not well supported by contemporary spectrum administration and management schemes. This study demonstrates that adjustments to the Licensed Shared Access scheme together with the introduction of changing priorities in a spectrum manager function would provide administrations the tools to rapidly adjust spectrum assignments in time and space. The results showed that when properly implemented, including administrative procedures as well as cyber security considerations, these adjustments provide mobile network operators sufficient security of spectrum access to continue investments and that authorities have access to spectrum when needed for their legally mandated tasks.
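The changing-priorities idea can be illustrated with a toy arbitration rule: the spectrum manager grants each channel to the highest-priority requester, evicting lower-priority holders. The priority values and user names below are invented.

```python
def assign(channels, requests):
    """Toy spectrum manager arbitration.

    channels: set of managed channel identifiers.
    requests: iterable of (user, priority, channel); higher priority wins.
    Returns {channel: user} after arbitration; lower-priority users lose access.
    """
    holder = {}                      # channel -> (priority, user)
    for user, priority, channel in requests:
        if channel not in channels:
            continue                 # ignore requests for unmanaged channels
        current = holder.get(channel)
        if current is None or priority > current[0]:
            holder[channel] = (priority, user)
    return {ch: user for ch, (prio, user) in holder.items()}

# A public-safety user (priority 9) preempts a mobile operator (priority 3)
# on channel 2, while the operator keeps channel 1:
result = assign({1, 2, 3}, [("MNO-A", 3, 2), ("PPDR", 9, 2), ("MNO-B", 3, 1)])
```

A real spectrum manager would additionally vary priorities over time and location, as the demonstration above describes, but the arbitration core is the same comparison.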

 

A Quantitative Cross-System Approach to Spectrum Efficiency

William Kozma, Michael Cotton and Giulia McHenry (National Telecommunications and Information Administration (NTIA), USA)
In 2018, NTIA began a multiyear effort to create a more holistic approach to the problem of quantifying spectrum usage through its spectrum efficiency (SE) effort. The goal of this effort is to establish a set of versatile metrics that are implementable, objective, and supported by regulators and Federal spectrum users, and that allow for a quantitative look at the SE of new and existing systems within spectrum bands. To support this, NTIA looked at the three orthogonal dimensions of space, time, and frequency. Multiple metrics were developed for each dimension in a system-agnostic manner to give regulators the flexibility to understand their spectrum needs. Additionally, although historical SE was measured in bps/Hz, this new approach favors unitless metrics that incorporate system utility, allowing a system's mission parameters to be captured in the analysis without attempting to convert everything into a corresponding data rate (i.e., bps). It is this focus on utility that supports the ability to compare disparate systems. In our work, we established metrics such as Assignment Efficiency, which is the ratio of the spectrum-producing utility to the spectrum assigned to the system. Likewise, Occupancy Efficiency is the ratio of spectrum-producing utility to the spectrum occupied by a system. The spectrum assigned and occupied are known or measurable values. The utility value captures the mission and system design, including technical parameters such as availability and signal-to-interference-plus-noise ratio (SINR). This new approach looks to augment decades of research in the area of spectrum efficiency to create a more complete picture of spectrum usage at the national level. By better characterizing the spectrum environment, including system mission and utility characteristics, NTIA looks to give spectrum regulators a comprehensive set of tools to increase spectrum usage and compatibility between new and existing systems.
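The two ratio metrics named above reduce to simple divisions once a utility score is in hand. The numbers below are invented; how utility is computed is the substantive part of NTIA's effort and is not modeled here.

```python
def assignment_efficiency(utility, assigned_hz):
    """Utility produced per unit of spectrum assigned to the system."""
    return utility / assigned_hz

def occupancy_efficiency(utility, occupied_hz):
    """Utility produced per unit of spectrum the system actually occupies."""
    return utility / occupied_hz

# A hypothetical system assigned 10 MHz but occupying only 4 MHz of it,
# with a unitless mission utility score of 8.0:
ae = assignment_efficiency(8.0, 10e6)
oe = occupancy_efficiency(8.0, 4e6)
```

Occupancy efficiency always equals or exceeds assignment efficiency for the same system, and the gap between the two exposes assigned-but-unoccupied spectrum.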



Thursday, 15 November

10:30 - 12:00

TS5: Machine Learning and Architectural SDR Concepts
TS6: CBRS Deployment Considerations

14:00 - 15:30

16:30

10:30 - 12:00

TS5: Machine Learning and Architectural SDR Concepts

Boosting Artificial Intelligence in Software Defined Systems with Open Infrastructure

Li Zhou (National University of Defense Technology, P.R. China); Qi Tang (NUDT, P.R. China); Jun Xiong, Haitao Zhao, Shengchun Huang and Jibo Wei (National University of Defense Technology, P.R. China)
Software infrastructure has played an increasing role in both terminals and clouds in recent years, for example the Software Communications Architecture (SCA) for terminals and OpenStack for clouds. The common benefit is that both enable a system to decouple applications from the platform with a standardized application programming interface (API). As a component or a higher-level application in SCA systems, artificial intelligence (AI) is significant for meeting various challenges, which potentially include wireless network planning, resource optimization, complicated decision making, and in-depth knowledge discovery. Meanwhile, open clouds are evolving into intelligent clouds, which allow users to use AI functions to perform cognitive tasks such as analysis, gaining insights from data, or transfer learning; this is also known as AI-as-a-Service (AIaaS). In this presentation, we discuss how to take advantage of AIaaS in the open infrastructure to boost AI and innovation in software defined systems.

 

Towards an Ontology for SCA APIs

Durga Suresh and Mieczyslaw Kokar (Northeastern University, USA)
The Software Communications Architecture (SCA) is an implementation-independent architectural framework that specifies a standardized infrastructure for a software defined radio. The intent behind the standardization of the SCA was to support waveform and component portability, interchangeability of implementations, interoperability of software and hardware components, software reuse, and architecture scalability for communications platforms. To the best of our knowledge, there are no open-source implementations of the SCA. The SCA has many APIs. All SCA APIs, and the SCA itself, are specified in UML. While UML tools provide some methods for syntactically constraining the development of a system specification, they do not support verifying or enforcing semantic constraints. In this paper, we present our work on creating an SCA Ontology (SCA4.1) that can be used as a base ontology in the radio domain to help with prototyping the SCA APIs. This ontology is based on the Nuvio (Northeastern and VIStology) ontology, a new foundational ontology developed by our team, inspired by the original Cognitive Radio Ontology (CRO), the Quantity, Units, Dimensions and Types Ontology (QUDT), the DOLCE UltraLite ontology (DUL), and the Situation Theory Ontology (STO-L). The top-level structure, properties, and relationships between the classes and objects will be presented to show the components and interfaces of the SCA. We will then discuss the evaluation of this ontology based on 1) coverage of knowledge, 2) inference capability, 3) precision in defining classes, and 4) extensibility.

 

Characterization of Direct Conversion Software Defined Radios for Use in Broadband Spectrum Measurements
Todd Schumann (National Telecommunications and Information Administration & University of Florida, USA); Jeffery Wepman and Michael Cotton (NTIA, USA)
As new technological trends push for larger bandwidths, many low-cost, widely tunable devices, such as software defined radios (SDRs), have migrated back to homodyne or direct-conversion architectures. Additionally, to keep costs low, these devices often employ no front-end filtering aside from the fixed, low-pass, anti-aliasing filter directly before digitization. This allows the devices to sweep across large bandwidths by tuning only the local oscillator (LO). To use such systems as easily reconfigurable, low-cost alternatives to laboratory-grade spectrum analyzers for spectrum monitoring, SDRs must be calibrated, and their drawbacks compared with spectrum analyzers must be identified. In this report, an SDR was calibrated by adding a scale factor: an offset to the measured power such that it matches the power measured by a laboratory-grade power meter. This scale factor was found to vary with frequency and to contain discontinuities as the SDR redistributed gain elements in different frequency regions. Additionally, the scale factor was found to have discontinuities with gain, as the gain distribution did not increase perfectly linearly across some steps. This ultimately suggests a matrix scale-factor correction, which can be interpolated to cover frequency/gain pairs that were not directly measured. Once the calibration was applied, the long-term accuracy of measurements was well within 1 dBm, and often much better in many frequency ranges. The compression point, spurious-free limit, and displayed average noise level (DANL) were also measured for many frequency/gain pairs. The mechanism for compression was identified for each pair, and the spurious-free dynamic range is often as much as 10 dB lower than the compression-based dynamic range, due to the harmonic mixing associated with the direct-conversion architecture.
Using an SDR in place of a laboratory-grade spectrum analyzer is a study in tradeoffs: monitoring the spectrum in many locations for long periods with laboratory-grade spectrum analyzers is prohibitively expensive, while using SDRs in their place adds uncertainty to the measurements. Many of the non-idealities can be removed with a proper calibration, as mentioned above, giving a long-term accuracy within 1 dB for the SDR used in this study. The architectural challenges associated with homodyne designs ultimately limit the dynamic range and bandwidth of many SDRs; nevertheless, by understanding these differences and their mechanisms, accurate measurements can be made, allowing broad and inexpensive spectrum monitoring.
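The matrix scale-factor correction and interpolation described above can be sketched in a few lines. The grid values, frequencies, and function names below are illustrative assumptions, not NTIA's calibration data or code; a real correction would also keep separate grids on either side of each gain-redistribution discontinuity rather than interpolating across it.

```python
from bisect import bisect_right

# Measured scale factors (dB) on a sparse (frequency, gain) grid.
# All numbers are invented for illustration, not real calibration data.
freqs_mhz = [100.0, 500.0, 1000.0, 2000.0]   # calibration frequencies
gains_db = [0.0, 10.0, 20.0, 30.0]           # SDR gain settings
scale_db = [
    [12.1, 11.8, 11.5, 11.9],
    [12.6, 12.2, 11.9, 12.3],
    [13.4, 13.0, 12.6, 13.1],
    [14.0, 13.7, 13.2, 13.8],
]  # scale_db[i][j] corresponds to (freqs_mhz[i], gains_db[j])

def _bracket(axis, x):
    """Return indices (i, i+1) bracketing x on a sorted axis (clamped)."""
    i = bisect_right(axis, x) - 1
    i = max(0, min(i, len(axis) - 2))
    return i, i + 1

def scale_factor(freq_mhz, gain_db):
    """Bilinearly interpolate the calibration scale factor (dB)."""
    i0, i1 = _bracket(freqs_mhz, freq_mhz)
    j0, j1 = _bracket(gains_db, gain_db)
    tf = (freq_mhz - freqs_mhz[i0]) / (freqs_mhz[i1] - freqs_mhz[i0])
    tg = (gain_db - gains_db[j0]) / (gains_db[j1] - gains_db[j0])
    top = scale_db[i0][j0] * (1 - tg) + scale_db[i0][j1] * tg
    bot = scale_db[i1][j0] * (1 - tg) + scale_db[i1][j1] * tg
    return top * (1 - tf) + bot * tf

def calibrated_power(measured_dbm, freq_mhz, gain_db):
    """Apply the interpolated scale factor to a raw SDR power reading."""
    return measured_dbm + scale_factor(freq_mhz, gain_db)
```

Bilinear interpolation is only valid within a region where the scale factor varies smoothly; the discontinuities noted in the abstract mark the boundaries between such regions.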

TS6: CBRS Deployment Considerations

3.5 GHz ESC Sensor Test Apparatus Using Field-Measured Waveforms

Raied Caromi (National Institute of Standards and Technology, USA); John Mink, Cosburn Wedderburn, Michael R. Souryal and Naceur El Ouni (National Institute of Standards and Technology, USA)
This paper presents a framework and apparatus for laboratory testing of environmental sensing capability (ESC) sensors for the 3.5 GHz band. These sensors are designed to detect federal incumbent signals in the band so that the federal incumbent can be dynamically protected from harmful interference. The proposed testing framework is unique in that it uses waveforms measured in the field to reproduce, in a controlled laboratory environment, what the sensor would experience in the field, with a repeatability unattainable in live field testing. Test signals comprise the incumbent signal to be detected, co-channel commercial signals, and the out-of-band emissions of adjacent-band incumbents, including channel propagation effects that can affect sensor performance. We describe the implementation of this framework in software-controlled instrumentation for automated testing of large numbers of test waveforms, capable of producing statistically significant performance metrics, such as rates of detection and false alarm, in a time-efficient manner.
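The detection and false-alarm metrics mentioned above reduce to binomial rate estimates over many replayed waveforms. A minimal sketch with invented decision data (not output of the NIST apparatus), using a normal-approximation confidence interval to indicate when enough trials have been run:

```python
from math import sqrt

def rate_with_ci(decisions, z=1.96):
    """Estimate a detection (or false-alarm) rate from binary sensor
    decisions, with a normal-approximation 95% confidence interval."""
    n = len(decisions)
    p = sum(decisions) / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical trial outcomes: 1 = sensor declared "incumbent present".
with_incumbent = [1] * 970 + [0] * 30     # 1000 waveforms containing the incumbent
without_incumbent = [1] * 12 + [0] * 988  # 1000 waveforms without it

pd, pd_lo, pd_hi = rate_with_ci(with_incumbent)
pfa, pfa_lo, pfa_hi = rate_with_ci(without_incumbent)
print(f"Pd  = {pd:.3f}  (95% CI {pd_lo:.3f}-{pd_hi:.3f})")
print(f"Pfa = {pfa:.3f} (95% CI {pfa_lo:.3f}-{pfa_hi:.3f})")
```

Widening the waveform set narrows the confidence interval, which is why automated replay of large numbers of field-measured waveforms matters for statistically significant results.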

 

Industrial Internet of Things Need for Low-Cost Spectrum

Al Jette (Nokia, USA); Seppo Yrjölä (Nokia, Finland)
This paper will examine the problems currently faced by industry in acquiring spectrum to support the Industrial Internet of Things (IIoT) and introduce a framework to support this critical global need. The paper will examine industrial use cases where such spectrum is needed, identify their requirements, and introduce a spectrum sharing model to support IIoT needs. It will evaluate how spectrum is currently allocated and how that allocation fails to address IIoT needs. We will then examine the US CBRS spectrum sharing model and compare it with other sharing models, such as LSA in Europe, in terms of IIoT needs and requirements.

 

Assessing the feasibility of US CBRS concept-based business models and implications for access to and value of spectrum

Seppo Yrjölä (Nokia, Finland)
This paper will introduce a framework for assessing the feasibility of US CBRS concept-based business models and the implications for access to and value of spectrum. The study reviews the terms for GAA and PAL use and the access mechanisms likely to arise in practice from the operation of the SAS, along with their potential impact on spectrum availability. We will develop a view of the options on the spectrum supply side, how this could interface with demand from IoT and private networks, and the business models that could potentially develop. Finally, we will extend the analysis to CBRS evolution and a comparison with sharing mechanisms elsewhere, in particular LSA in Europe.

 

Using Extreme-Sensitivity GPS for In-Building to Outdoor Propagation Modeling

Christopher Kurby (iPosi Inc., USA)
Spectrum sharing has emerged as a major initiative to increase public access to underused spectrum. Recently the CBRS (Citizens Broadband Radio Service) band of 3.55 to 3.7 GHz was approved for band sharing. The first release of requirements under the FCC rules, drafted by the WInnForum, has been generated for using the band across several services. The rules require that higher-priority and legacy services be protected from new entrants. The present rules use simple but popular propagation models such as Hata, extended to 3.7 GHz as eHata. Many of the new fixed terminals, called CBSDs, are planned for indoor use, but the eHata model and others do not model indoor losses. The WInnForum has therefore established an indoor-to-outdoor loss factor of 15 dB to be assigned to indoor CBSDs. This factor, when used with the outdoor Hata model, is generally very conservative in estimating the loss, but not always, and it leads to underutilization of the spectrum. This paper illustrates a method, accepted by the WInnForum for the second release, that allows building loss to be measured using GPS. Traditionally, GPS sensitivity is too limited to make in-building loss measurements tenable. We have built an assisted GPS receiver with an extreme sensitivity of -175 dBm; embedded in a CBSD, it enables direct loss measurements. This paper provides some GPS measurements taken indoors, illustrates how they can be used to manage interference, and shows, using interference to a Fixed Satellite Service (FSS) as an example, that 9 to 23 dB or more of additional capacity can be realized with this method.
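The measurement idea above amounts to differencing an outdoor reference GPS level against the level the embedded receiver reports indoors, then trading the measured loss against the fixed 15 dB assumption. A minimal sketch with invented numbers; the reference level, function names, and EIRP bookkeeping are illustrative assumptions, not iPosi's algorithm or the WInnForum rules:

```python
# Illustrative sketch of the in-building loss estimate described above.
GPS_OUTDOOR_REF_DBM = -128.5  # assumed unobstructed GPS level at this site

def building_entry_loss(indoor_gps_dbm, outdoor_ref_dbm=GPS_OUTDOOR_REF_DBM):
    """Estimate building entry loss (dB) as the drop in received GPS
    level between an outdoor reference and the indoor CBSD receiver."""
    return outdoor_ref_dbm - indoor_gps_dbm

def allowed_eirp(baseline_eirp_dbm, measured_loss_db, assumed_loss_db=15.0):
    """Extra transmit headroom when a measured loss replaces a fixed
    indoor attenuation assumption (hypothetical bookkeeping)."""
    return baseline_eirp_dbm + (measured_loss_db - assumed_loss_db)

loss = building_entry_loss(-155.0)  # receiver reports -155 dBm indoors
print(f"measured building loss: {loss:.1f} dB")                    # 26.5 dB
print(f"EIRP with measured loss: {allowed_eirp(30.0, loss):.1f} dBm")  # 41.5 dBm
```

When the measured loss exceeds the assumed 15 dB, the difference becomes usable capacity; when it is lower, the measurement protects the incumbent where the fixed factor would have been optimistic.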

 

14:00 - 15:30

TS7:  Aspects of SDR Performance

RPCExpress: an attempt to implement an efficient middleware from the ground up based on the requirements of embedded software defined systems
Qi Tang (NUDT, P.R. China); Li Zhou (National University of Defense Technology, P.R. China); Shan Wang (National University of Defense Technology & University of Montreal, P.R. China); Haitao Zhao, Jun Xiong, Shengchun Huang and Jibo Wei (National University of Defense Technology, P.R. China)
Middleware is one of the three categories of infrastructural system software, and embedded software defined systems (SDS), such as SDR systems, depend deeply on it. These systems are generally component-based, constructed using model-driven architecture and enabled by PIM, PSM, and the transformation techniques between them. Inter-component interaction and reconfiguration/reconstruction at either the component or the application level on heterogeneous platforms are basic operations in an SDS, enabled by current middleware technologies including CORBA, ICE, COM+, RMI, and RPC. However, most available middleware targets large-scale distributed network computing systems, and its size and weight are not well adapted to embedded systems. Notably, the SCA-based SDR system's shift from CORBA to a generic transfer mechanism demonstrates that these issues exist in off-the-shelf middleware. For this reason, we initiated a program to craft a function-complete but small and efficient middleware, named RPCExpress, for embedded SDS. Initial efforts have proven quite fruitful. The first edition of RPCExpress enables component-based software development, supports a set of IDL semantics, and provides an IDL-to-C++ transformation tool. Component-based evaluation has been carried out on a practical embedded platform, and integration with the SCA is under way. The experimental results demonstrate that RPCExpress outperforms off-the-shelf middleware in both efficiency and footprint.
  
Optimizing the Efficiency of the Transfer Mechanism in SCA-based Radio Systems
Shan Wang (National University of Defense Technology & University of Montreal, P.R. China); Qi Tang (NUDT, P.R. China); Jian Wang (University of Calgary, Canada)
As a classical transfer mechanism, CORBA has been widely used in SCA-based software radio systems. However, the communication efficiency of CORBA-based middleware, such as TAO and omniORB, is not very satisfactory, and this has become one of the challenges constraining wider application of SCA technology. In the SCA 4.1 specification, the transfer mechanism becomes more flexible. We try to improve the efficiency of the transfer mechanism in two aspects, inter-chip and intra-chip, which we call remote-call and local-call, respectively. Since the remote-call delay is inherently high, improving local-call efficiency becomes the primary task. This achievement also provides more possibilities and options for component granularity within GPPs. On this basis, we are working on optimizing the efficiency of inter-chip communications.
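The remote-call/local-call split described above can be sketched as a dispatcher that skips marshalling whenever caller and callee share an address space. This is an illustrative toy in Python, not code from the paper's middleware; all class and method names are invented:

```python
import pickle

class Registry:
    """Toy RPC dispatcher: in-process targets are invoked directly
    (local-call); others go through a marshalled path (remote-call)."""
    def __init__(self):
        self._local = {}

    def register(self, name, obj):
        """Record a component that lives in this address space."""
        self._local[name] = obj

    def call(self, name, method, *args):
        target = self._local.get(name)
        if target is not None:
            # local-call fast path: plain method dispatch, no serialization
            return getattr(target, method)(*args)
        # remote-call path: marshal the request for transport (stubbed here)
        request = pickle.dumps((name, method, args))
        return self._transport(request)

    def _transport(self, request):
        raise NotImplementedError("inter-chip transport not modelled")

class Modulator:
    def scale(self, x):
        return 2 * x

reg = Registry()
reg.register("modulator", Modulator())
print(reg.call("modulator", "scale", 21))  # prints 42 via the local-call path
```

The design point is that the caller's code is identical for both paths; only the dispatcher knows whether marshalling is needed, which is what makes component granularity within a GPP cheap.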
 
CERTIF: a testing methodology and a test bench to assess the conformance to software defined radio standards
Alain Ribault (KerEval, France)
In order to facilitate interoperability and portability of software defined radio components, conformance to SDR standards (including APIs and behavior specifications) is essential. Given the huge number of requirements, and to ensure reproducibility of the conformance assessment, a testing methodology is necessary to show SDR requirement coverage and test result status, along with an automated test bench to ease test execution. This presentation will show how to use a structured approach to develop and maintain a test bench for assessing conformance to SDR standards. The presentation will be given by Alain Ribault, CTO of KEREVAL, a French software testing laboratory. Draft plan: 1) The need for assessment of SDR conformance; 2) Testing methodology: from the SDR requirements to the tests, test design process, compliance checkpoint definition, modeling, and test generation; 3) The test bench; 4) Test of SDR components: an example; 5) Conclusion / Q&A.
 

TS8: Applications of the SCA

Using SCAv4.1 Tools to Develop Applications and Platforms

Juan Pablo Zamora Zapata and Steve Bernier (NordiaSoft, Canada)
Software Defined Systems are the industry's response to the ever-increasing complexity and flexibility requirements of today's electronic systems. The Software Communications Architecture (SCA) enables the fulfillment of such requirements for complex Heterogeneous Embedded Distributed Systems (HEDS). In this tutorial, NordiaSoft will demonstrate how SCA software can be developed using tools that combine model-driven design (MDD) and rapid application development (RAD). Using Zero Merge code generation technology, the business source code for an SCA component (modulator, encoder, etc.) is kept separate from the infrastructure code that deals with deployment, instantiation, configuration, connection, inter-process communication, etc. This tutorial will show how the proposed approach provides the greatest potential for reuse of intellectual property. This SCAv4.1 tutorial will also illustrate concepts such as application creation, packaging, installation, deployment, and execution using an SCA FM Waveform Application.

  

Mission Adaptable Ground Station for Small Satellites

Howen Fernando and Marcus Matsumura (SSC - Pacific, USA)
The technological growth of Small Satellites (e.g., CubeSats) and the increasing collaboration in this domain prompted the development of a Software Defined Radio ground station that addresses scalability, interoperability, and adaptability using the Software Communications Architecture 4.1 standard. The SCA enables the use of one ground station platform for many different small satellites, allowing the SCA-based ground station to switch from one satellite system to another as they become visible.

 

Porting an SCA 2.2.2 platform and applications to SCA 4.1 - ADLINK's experience Withdrawn

Paul Elder (ADLINK Technology, Canada)
ADLINK is preparing its DTP 4700 Development and Test Platform for SCA 4.1. The "DTP" is a Linux-based hardware platform that integrates many devices common to SDR development, including FPGA, DSP, GPS, and a full transceiver stack. The DTP SCA software, including applications, was originally developed for SCA 2.2.2 using Spectra CX, ADLINK's SCA development tool set. This talk will examine:
  • The approaches we evaluated (and sometimes rejected) for porting an entire platform from 2.2.2 to 4.1
  • The tools we built to reduce our migration pain
  • The architectural changes we elected to make along the way (moving from an ORB-everywhere approach on FPGA/DSP to MHAL)
During the talk, Spectra CX 4 (for SCA 4.1) will be used to demonstrate some of the migration techniques used.

Update on recent ESSOR activities

Topi Tuukkanen (Finland MoD)

Main messages:

  • ESSOR waveform is mature
  • Operational testing has continued - successfully
  • ESSOR program is planning for the future within evolving EU cooperation frameworks
16:30
 

Invited Presentations

Impacts of Propagation Models on CBRS GAA Coexistence and Deployment Density (presentation)
Yi Hsuan (Google, USA)
WInnF has been working on the CBRS GAA coexistence framework and has made significant progress on this important issue. Under the framework, SAS administrators use propagation models to estimate the interference experienced by GAA users and assign different GAA channels to CBSDs that cause strong interference to each other. In this paper, we evaluate how GAA coexistence is affected by the use of different propagation models to estimate interference. The propagation models considered include the standardized models used by the SAS for incumbent and PAL protection, as well as clutter-aware/ray-tracing models. The impact is evaluated in terms of GAA deployment density and the bandwidth available to CBSDs in different deployment scenarios.
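The sensitivity to the propagation model can be illustrated with a toy comparison: the same CBSD-to-CBSD link evaluated with free-space loss versus free-space loss plus a hypothetical clutter adjustment. This is a sketch only; the standardized and clutter-aware/ray-tracing models the paper evaluates are far more involved, and the 20 dB adjustment below is an invented stand-in:

```python
from math import log10

def fspl_db(d_km, f_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 32.45 + 20 * log10(d_km) + 20 * log10(f_mhz)

def rx_dbm(eirp_dbm, d_km, f_mhz, clutter_db=0.0):
    """Received interference level under a given model; clutter_db is a
    hypothetical stand-in for a clutter-aware model's extra attenuation."""
    return eirp_dbm - fspl_db(d_km, f_mhz) - clutter_db

# Same CBSD pair at 2 km and 3.6 GHz, evaluated under two models:
free_space = rx_dbm(30.0, 2.0, 3600.0)
clutter_aware = rx_dbm(30.0, 2.0, 3600.0, clutter_db=20.0)
print(f"free-space estimate:    {free_space:.1f} dBm")
print(f"clutter-aware estimate: {clutter_aware:.1f} dBm")
```

A 20 dB spread between models can flip whether two CBSDs are judged to interfere, and therefore whether they must be split across GAA channels, which is exactly the deployment-density effect the paper quantifies.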
 
Enabling PKI for WInnForum/CBRS device manufacturers
Geoffrey Noakes (DigiCert, Inc., USA)
All WInnForum/CBRS manufacturers should have PKI device certificates in their products. The WInnForum website states the requirement as follows: "Subscribers should install all WInnForum authorized CBRS Root CA certificates in their device trust anchor stores to validate received certificates." DigiCert is an "Approved Certification Authority" for the WInnForum/CBRS consortium. In this presentation, DigiCert will provide attendees with an update on the state of using PKI and device certificates in their WInnForum/CBRS products.
 