Integrated Media Systems Center

[an NSF Graduated Center] The Integrated Media Systems Center (IMSC) is an informatics research center that delivers data-driven solutions for real-world applications. We develop enhanced solutions to fundamental Data Science problems and apply these advances to achieve major societal impact. To this end, we target four application domains: Transportation, Health, Media and Entertainment, and Smart Cities. For each domain, we develop large-scale System Integration Prototypes, each designed to address real-world problems as well as to conduct fundamental and applied multidisciplinary research in data science. Our work is supported by competitive city, state, and federal government research grants and by the sponsorship of industry partners such as Google, Microsoft, Intel, Oracle, Chevron, and Northrop Grumman. Over the years, IMSC has run a vibrant technology transfer program, leading to more than ten successful startups and more than one hundred invention disclosures. These activities and our research focus make IMSC a recognized leader in its domain and one of the world’s leading authorities in the emerging field of Geo-Informatics. Founded in 1996 by C. L. Max Nikias – now USC’s President – IMSC is hosted by the USC Viterbi School of Engineering and benefits from the support of the School’s faculty and staff. IMSC has been an energetic force in the expansion of the Viterbi School, serving as a catalyst for new curricular programs and continuously expanding its education and research efforts through international programs and collaborations.

Research Areas

Data science isn’t limited to signal analysis and databases, nor is it singularly focused on any one domain. Rather, it is a living pipeline spanning the Acquisition of real data, the management of Access to this information, a myriad of Analytical techniques, and, of course, real applications of these Actionable insights. To that end, we have assembled a diverse faculty whose specialties cover each aspect of this A4 data pipeline.
1. DATA ACQUISITION
Acquisition is one of the most important steps, as it impacts every subsequent step of the A4 pipeline regardless of the application. For example, in the area of Smart Cities, where much of the nation’s data-acquisition infrastructure is underdeveloped, any bias or inconsistency in a dataset can have severe repercussions throughout the analysis phase. In this context, we engineer solutions using advanced sensing techniques built on the Internet of Things (IoT), where data collection can be integrated into consumers’ daily lives. Then, by developing sophisticated control techniques, we can eliminate inconsistency before it begins and ensure the most thorough Acquisition possible.
Sensors
Murali Annavaram has been a faculty member in the Ming Hsieh Department of Electrical Engineering at the University of Southern California since 2007. He currently holds the Robert G. and Mary G. Lane Early Career Chair. His research focuses on energy efficiency and reliability in computing platforms, with particular emphasis on energy-efficient sensor management for body-area sensor networks used in real-time, continuous health monitoring. He also leads an active research group in computer systems architecture exploring reliability challenges in future CMOS technologies.
Internet of Things (IoT)
Professor Krishnamachari’s research spans the design and evaluation of algorithms and protocols for next-generation IoT networks and connected vehicles. In collaboration with IMSC, his research group explores traffic estimation based on urban road sensor data, as well as green traffic control for smart cities, which investigates novel speed-limit mechanisms for road stretches that influence drivers and self-driving cars to move through cities in ways that minimize their negative impact on the urban environment. His group also investigates algorithms for dispersed computing that allow flexible and efficient distributed edge computation, which is likely to play an increasing role in smart communities for applications such as video edge analytics and distributed network monitoring. Lastly, in collaboration with both IMSC and the USC Marshall School of Business, Dr. Krishnamachari’s group is helping develop I3, a novel middleware platform for IoT in smart communities that allows real-time data providers and data consumers to exchange data in return for monetary incentives while ensuring that data owners control who can see their data and for what purposes it can be used.
Control
Professor Savla has a broad interest in performance analysis and optimization of a variety of transportation systems, from urban traffic to logistics. His group designs tools at the interface of control theory, queueing theory, network flow, and game theory to facilitate analysis and control. The emphasis is on integrating canonical features of transportation systems to extend the applicability of these tools beyond their traditional domains of communication and manufacturing systems. This work has also yielded an understanding of the spatio-temporal redundancy in the information required about system parameters (e.g., geometric properties, traveler behavior, demand) and real-time state (e.g., congestion, incidents) for various performance metrics; this understanding has direct implications for data acquisition and processing. He collaborates with IMSC to apply these techniques to arterial and freeway networks, including connected and autonomous vehicles, as well as to dynamic vehicle routing.
Wearables
In the Motor Behavior and Neurorehabilitation Laboratory, we aim to understand the neurobehavioral basis of motor learning. Specifically, we are interested in the brain-behavior relationships that are optimal for the preparation and execution of skilled movement in healthy aging and in those recovering from hemiparetic stroke. We have worked with IMSC on rehabilitation engineering research, funded through a CTSI grant (the POCM project), that leverages advances in smart technologies, including virtual reality applications (the Kinect™ camera) and body-worn sensors (APDM sensors), for rehabilitation purposes including diagnostics.
2. DATA STORAGE & ACCESS
Access to data is an incredibly complex problem, so much so that the National Academy of Engineering has named advancing health informatics one of its fourteen Grand Challenges. It brings together aspects of data science as well as software engineering, consumer policy, and ethics, making it a truly interdisciplinary effort. IMSC at USC has developed innovative techniques in spatial and media data management and pioneered original approaches to data security and privacy. Thus, by focusing on Access, we create an effective bridge between data Acquisition and Analysis.
Spatial Data Management
Professor Shahabi and the InfoLAB conduct pioneering research in areas related to data management (including query processing and analysis), data integration, data mining and machine learning, geospatial data management, and large-scale rendering and visualization.
--> Selected Research in Transportation
Traffic Forecasting
A spatiotemporal network is a spatial network (e.g., road network) along with the corresponding time-dependent weight (e.g., travel time) for each edge of the network. The design and analysis of policies and plans on spatiotemporal networks require realistic models that accurately represent the temporal behavior of such networks. We build a traffic modeling framework for road networks that enables:
- generating an accurate temporal model from archived temporal data collected from a spatiotemporal network (so as to be able to publish the temporal model of the spatiotemporal network without having to release the real data)
- augmenting any given spatial network model with a corresponding realistic temporal model custom-built for that specific spatial network (in order to be able to generate a spatiotemporal network model from a solely spatial network model).
We used the proposed framework to generate the temporal model of the Los Angeles County freeway network and published it for public use.
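For intuition, the sketch below shows one simple way such a time-dependent network can be represented and traversed (the edge profiles are toy numbers invented for illustration, not the published Los Angeles model):

```python
import bisect

# Toy spatiotemporal network: each edge stores piecewise-constant travel
# times keyed by departure time (minutes after midnight). A real model
# would fit these profiles from archived sensor data.
edges = {
    ("A", "B"): [(0, 5.0), (420, 12.0), (600, 6.0)],   # congested 7-10am
    ("B", "C"): [(0, 4.0), (960, 9.0), (1140, 4.5)],   # congested 4-7pm
}

def travel_time(u, v, depart_min):
    """Travel time on edge (u, v) for a given departure time."""
    profile = edges[(u, v)]
    times = [t for t, _ in profile]
    i = bisect.bisect_right(times, depart_min) - 1
    return profile[i][1]

# Traverse A -> B -> C departing at 8:00am (480 minutes after midnight):
t = 480.0
for u, v in [("A", "B"), ("B", "C")]:
    t += travel_time(u, v, t)
print(f"arrival: {t:.1f} minutes after midnight")
```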
We have also built technologies for solving traffic prediction problems using the traffic sensor datasets from the IMSC TransDec platform. Thanks to the thorough sensor instrumentation of the Los Angeles road network, as well as the wide availability of auxiliary commodity sensors from which traffic information can be derived (e.g., CCTV cameras and GPS devices), a large volume of real-time and historical traffic data at very high spatial and temporal resolution has become available; the challenge is how to mine valuable information from it. We have piloted studies of traffic prediction for individual road segments using such large datasets. We utilized the spatiotemporal behavior of rush hours and events to accurately predict both short-term and long-term average speed on road segments, even in the presence of infrequent events (e.g., accidents). By utilizing both the topology of the road network and the sensor dataset, we overcame the sparsity of our sensor data and extended the prediction task to the entire road network. We also addressed problems related to the impact of traffic incidents, developing a set of methods to predict the dynamic evolution of an incident’s impact.
We have also studied the online traffic prediction problem. One key challenge in traffic prediction is deciding how much to rely on prediction models constructed from historical data in real-time traffic situations, which may differ from the historical data and can change over time. To overcome this challenge, we propose a novel online framework that learns from the current traffic situation (context) in real time and predicts future traffic by matching the current situation to the most effective prediction model trained on historical data. As real-time traffic data arrive, the traffic context space is adaptively partitioned to efficiently estimate the effectiveness of each base predictor in each situation.
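A minimal sketch of this context-matching idea follows. The fixed grid stands in for the adaptive partitioning, and the predictor and context conventions are invented for the example:

```python
class ContextualSelector:
    """Minimal sketch of context-based model selection: partition the
    context space into a fixed grid (the real framework partitions it
    adaptively), track each base predictor's average error per cell,
    and predict with the cell's current best model."""

    def __init__(self, predictors, n_bins=4):
        self.predictors = predictors      # callables: features -> predicted speed
        self.n_bins = n_bins
        self.err = {}                     # (cell, i) -> (abs-error sum, count)

    def _cell(self, context):             # context values assumed in [0, 1)
        return tuple(min(int(c * self.n_bins), self.n_bins - 1) for c in context)

    def predict(self, context, features):
        cell = self._cell(context)
        def mean_err(i):
            s, n = self.err.get((cell, i), (0.0, 0))
            return s / n if n else 0.0
        best = min(range(len(self.predictors)), key=mean_err)
        return self.predictors[best](features)

    def update(self, context, features, actual):
        # full-information update: score every base predictor on the outcome
        cell = self._cell(context)
        for i, p in enumerate(self.predictors):
            s, n = self.err.get((cell, i), (0.0, 0))
            self.err[(cell, i)] = (s + abs(p(features) - actual), n + 1)
```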
Media Data Management
Recently, new forms of multimedia data (text, numbers, tags, signals, geo-tags, 3D/VR/AR, and sensor data) have emerged in many applications on top of conventional multimedia data (image, video, audio). Multimedia has become the “biggest of big data” and a foundation of today’s data-driven discoveries. Moreover, such new multimedia data is increasingly involved in a growing list of science and engineering domains, such as driverless cars, drones, smart cities, biomedical instruments, and security surveillance. Multimedia data also carries embedded information that can be mined for various purposes. Thus, storing, indexing, searching, integrating, and recognizing these vast amounts of data creates unprecedented challenges. IMSC is working on solutions for the acquisition, management, and analysis of large multimedia datasets.
--> Selected Research in Media
Multimedia Information Processing and Retrieval
Large-scale spatial-visual search faces two major challenges: search performance, due to the large volume of the dataset, and inaccuracy of search results, due to image-matching imprecision. First, the large scale of geo-tagged image datasets and the demand for real-time response make it critical to develop efficient spatial-visual query processing mechanisms. Toward this end, we focus on designing index structures that expedite the evaluation of spatial-visual queries. Second, retrieving relevant images is challenging due to two types of inaccuracy: spatial (due to camera position and scene location mismatch) and visual (due to dimensionality reduction). We propose a set of novel hybrid index structures based on the R*-tree and LSH, i.e., two-level index structures consisting of one primary index associated with a set of secondary structures. In particular, there are two variations: using the R*-tree as the primary structure (termed the Augmented Spatial First Index) or using LSH as the primary (termed the Augmented Visual First Index). We experimentally showed that all hybrid structures greatly outperform the baselines, with a maximum speed-up factor of 46.
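To make the two-level design concrete, the following heavily simplified spatial-first sketch uses a uniform grid in place of the R*-tree and one random-projection LSH table per cell. It is illustrative only, not the Augmented Spatial First Index implementation:

```python
import numpy as np

class SpatialFirstIndex:
    """Toy two-level spatial-visual index: a coarse spatial grid as the
    primary structure, with a random-projection LSH table over visual
    feature vectors inside each occupied cell."""

    def __init__(self, cell_size=0.01, n_bits=16, dim=128, seed=0):
        rng = np.random.default_rng(seed)
        self.cell_size = cell_size
        self.planes = rng.normal(size=(n_bits, dim))  # shared LSH hyperplanes
        self.cells = {}      # (ix, iy) -> {hash code: [record ids]}
        self.records = []    # (lat, lon, feature vector)

    def _cell(self, lat, lon):
        return (int(lat // self.cell_size), int(lon // self.cell_size))

    def _hash(self, feat):
        return ((self.planes @ feat) > 0).tobytes()

    def insert(self, lat, lon, feat):
        rid = len(self.records)
        self.records.append((lat, lon, feat))
        bucket = self.cells.setdefault(self._cell(lat, lon), {})
        bucket.setdefault(self._hash(feat), []).append(rid)

    def query(self, lat, lon, feat, radius_cells=1):
        """Spatial filter first, then visual (same LSH bucket) filter."""
        cx, cy = self._cell(lat, lon)
        code = self._hash(feat)
        hits = []
        for dx in range(-radius_cells, radius_cells + 1):
            for dy in range(-radius_cells, radius_cells + 1):
                hits += self.cells.get((cx + dx, cy + dy), {}).get(code, [])
        return hits
```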
Geospatial Multimedia Sentiment Analysis
Even though many sentiment analysis techniques have been developed and are available, reliably using sentiment analysis remains difficult because no single technique is dominantly accepted. Taking advantage of existing state-of-the-art sentiment classifiers, we propose a novel framework for geospatial sentiment analysis of disaster-related multimedia data objects. Our framework addresses three types of challenges: the inaccuracy of, and discrepancy among, various text and image sentiment classifiers; the geo-sentiment discrepancy among data objects in a local geographical area; and the diversity of sentiments observed across multimedia data objects (i.e., text and image). To overcome these challenges, the framework is composed of three phases: sentiment analysis, spatial-temporal partitioning, and geo-sentiment modeling. To estimate the aggregated sentiment score for a set of objects in a local region, our geo-sentiment model considers the sentiment labels generated by multiple classifiers in addition to those of geo-neighbors. To obtain sentiment with high certainty, the model measures the disagreement among correlated sentiment labels using either an entropy or a variance metric. We used our framework to analyze Hurricane Sandy and the Napa Earthquake based on datasets collected from Twitter and Flickr, and our analysis results were consistent with FEMA and USGS reports.
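For illustration, the disagreement measure can be as simple as the entropy of the label distribution pooled from multiple classifiers and geo-neighbors. This is a minimal sketch of the idea, not the full geo-sentiment model:

```python
import math
from collections import Counter

def geo_sentiment(labels):
    """Aggregate sentiment labels (e.g., -1/0/+1) pooled from multiple
    classifiers and geo-neighbors into a mean score plus an entropy-based
    disagreement measure: high entropy means low certainty."""
    counts = Counter(labels)
    n = len(labels)
    score = sum(labels) / n  # mean sentiment in [-1, 1]
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return score, entropy

score, disagreement = geo_sentiment([+1, +1, 0, -1, +1])
print(f"score={score:.2f}, disagreement={disagreement:.2f}")
```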
--> Selected Research in Smart Cities
Big Data in Disasters
In a disaster, fast initial data collection is critical for first response. With the wide availability of smart mobile devices such as smartphones, dynamic and adaptive crowdsourcing during and after a disaster has been gaining attention as a tool for disaster situation awareness. We have been working on maximizing visual awareness in a disaster using smartphones, especially under constrained bandwidth. Specifically, we are currently conducting a federally funded international joint project (a US NSF and Japan JST collaboration) on data collection in disasters using MediaQ.
--> Selected Research in Transportation
Realtime Traffic Flow Data Extraction
Vision-based traffic flow analysis is attracting attention due to its non-intrusive nature. However, real-time video processing techniques are CPU-intensive, so the accuracy of the traffic flow data they extract may be sacrificed in practice. Moreover, traffic measurements extracted from cameras have rarely been validated against real datasets, due to the limited availability of real-world traffic data. This case study demonstrates the performance enhancement of a vision-based traffic flow data extraction algorithm using a hardware accelerator (an Intel video analytics coprocessor), and evaluates the accuracy of the extracted data by comparing it to real data from traffic loop detector sensors in Los Angeles County (from our transportation project).
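As a simple illustration of the validation step, per-interval vehicle counts extracted from video can be scored against loop-detector ground truth. This is a generic error metric, not necessarily the study’s exact evaluation protocol:

```python
import numpy as np

def flow_mape(video_counts, loop_counts):
    """Mean absolute percentage error between per-interval vehicle counts
    extracted from video and loop-detector ground truth."""
    v = np.asarray(video_counts, dtype=float)
    g = np.asarray(loop_counts, dtype=float)
    return np.mean(np.abs(v - g) / np.maximum(g, 1)) * 100  # avoid div by 0

# e.g., five 5-minute intervals:
print(f"MAPE: {flow_mape([48, 52, 61, 40, 55], [50, 55, 60, 45, 52]):.1f}%")
```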
Security
Data breaches harm nations, businesses, and individuals. The Democratic National Committee breach affected the 2016 United States presidential election, the Anthem breach imperiled 80 million United States patients, the Equifax breach endangered 143 million United States citizens, the Ashley Madison breach led to suicides, and 60% of the small businesses that suffer a data breach close down within six months. An average of five million data records are breached daily; nine billion have been breached since 2013. According to the US Department of Health and Human Services, 174 million health records of United States residents have been breached. Naveed is developing systems to prevent such data breaches, with a focus on electronic health records.
Privacy
Data is at the core of many of today’s innovations, but the data used is often highly personal or sensitive. My research aims to develop and deploy algorithms and technologies that enable data-driven innovation while preserving privacy. To that end, I study how the data-mining techniques commonly used for web and social applications must be changed to preserve the rigorous guarantee of differential privacy while remaining useful. I also demonstrate how quantitative analyses and machine learning techniques can identify novel privacy risks, and I support the development of tools that empower companies, governments, and individuals to protect privacy.
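As a small, self-contained example of the kind of guarantee differential privacy provides, here is the textbook Laplace-mechanism construction for releasing a private mean (an illustration of the concept, not a specific IMSC system):

```python
import numpy as np

def private_mean(values, lo, hi, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism. Clipping each
    value to [lo, hi] bounds one record's influence on the mean by
    (hi - lo) / n, which is the sensitivity that must be noised to
    achieve epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)

# e.g., a privately released average commute time (minutes):
print(private_mean([22, 35, 41, 18, 55], lo=0, hi=120, epsilon=0.5))
```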
3. DATA ANALYSIS
Analysis is the heart of data science research, combining technical advances with creativity to glean useful insights from vast quantities of information. Machine learning, game theory, signal processing, and spatial computing are some of the areas in which USC faculty excel, contributing to our impact on local communities and the nation at large. With powerful Analysis, we can transition the world’s information from messy, disorganized data to concise, relevant patterns, which in turn lead to Actionable systems.
Machine Learning
The USC Melady Lab develops machine learning and data mining algorithms for solving problems involving data with special structure, including time series, spatiotemporal data, and relational data. We work closely with domain experts to solve challenging problems and make significant impacts in computational biology, social media analysis, climate modeling, health care, and business intelligence.
Game Theory
Classically in economics, the study of how information influences strategic interactions has been largely descriptive. Much of the recent work in my group examines the associated prescriptive question, which is becoming increasingly important in today’s information economy: what information should a system designer make available to agents so that they can collectively make good decisions? This task is referred to as persuasion or information structure design.
This type of research holds special promise in the field of transportation, as global optimizations, though not always locally advantageous, are necessary to improve overall traffic flow.
Traffic Management
Our research lays the groundwork for an algorithmic theory of persuasion, building on a recent flurry of work on persuasion in the economics community. In [1], I proved that optimal persuasion is computationally intractable in the generic case of two competing agents and a public communication channel, which explains the lack of an economic characterization of optimal policies. In [2], we showed how to circumvent this impossibility in multi-agent settings that are “smooth”. Somewhat surprisingly, we showed in [3] that optimal persuasion admits computationally efficient, instructive, and near-optimal policies when there is only one decision-making agent. In [4], we show a similar positive result when there are multiple agents with binary actions and no inter-agent externalities, and the system designer can communicate privately with each agent. I also recently wrote a survey [5] on the topic.
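For intuition, the classic single-agent, binary-state persuasion model can be solved in closed form. The sketch below computes the sender-optimal signaling scheme in that textbook setting; it illustrates the model, not the algorithms of [1]-[4]:

```python
def optimal_persuasion(prior, threshold=0.5):
    """Binary-state Bayesian persuasion: a sender who always wants action 1
    commits to a signaling scheme; the receiver takes action 1 iff the
    posterior P(state = 1) >= threshold. Returns the probability of
    recommending action 1 in state 0, and the overall probability the
    receiver acts."""
    if prior >= threshold:
        return 1.0, 1.0  # receiver already acts; no information needed
    t = threshold
    # Always recommend action 1 in state 1; in state 0, recommend with
    # probability q chosen so the posterior after a recommendation is
    # exactly t (the receiver is just barely willing to act).
    q = prior * (1 - t) / (t * (1 - prior))
    p_act = prior + (1 - prior) * q
    return q, p_act

# With prior 0.3 and threshold 0.5, the receiver acts with probability 0.6,
# double what full disclosure would achieve:
print(optimal_persuasion(0.3))
```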
As algorithmic persuasion comes into its own as a field, we believe the time is ripe for a killer application of our models and algorithms. One promising application is traffic management. In an ongoing collaboration with IMSC, and working with LA Metro, we are exploring algorithmic persuasion as a tool for traffic management in Los Angeles. Persuasion algorithms would be informed by real-time traffic sensors and historical traffic data, and would be integrated into a GPS navigation app.
Signal Processing
In recent years, Prof. Ortega and his team have focused their research on developing novel tools for Graph Signal Processing (GSP). GSP methods can be used to analyze sensor and communication networks, traffic networks and electrical grids, online social networks, and graphs associated with machine learning tasks. On the theoretical front, this work has focused on designing graph filters, anomaly detection, graph sampling, and learning graphs from data. These methods are being applied to the various applications currently studied at IMSC. For example, within the Health domain, his group is developing approaches for human activity analysis using GSP on the graph connecting the estimated positions of body joints (i.e., the skeleton graph); this work has been applied to motion analysis in Parkinson’s Disease patients and is also being explored for manufacturing applications. As another example, in the Transportation domain, GSP can be used to analyze traffic patterns and detect dependencies in traffic flows across cities. Sensor networks deployed in Smart Cities applications can benefit from sampling methods developed in GSP, which can also be used to detect anomalies indicative of sensor malfunction. Lastly, both sampling and filtering can be applied to analyze information collected from online social networks.
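As a minimal illustration of the GSP toolkit, a graph signal can be smoothed by zeroing its high-frequency components in the Laplacian eigenbasis. The graph and signal below are toy stand-ins, not the actual skeleton-graph pipeline:

```python
import numpy as np

# Low-pass filter a signal on a small graph via its Laplacian eigenbasis.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
evals, evecs = np.linalg.eigh(L)            # graph Fourier basis

x = np.array([1.0, 0.9, -0.8, 1.1])         # noisy graph signal
x_hat = evecs.T @ x                         # graph Fourier transform
x_hat[evals > 1.5] = 0                      # drop high-frequency modes
x_smooth = evecs @ x_hat                    # filtered (smoothed) signal
print(x_smooth)
```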
Spatial Computing
We develop computer algorithms and build intelligent applications that discover, collect, fuse, and analyze data from heterogeneous sources to solve real world problems.
Air Quality Modeling
Air quality models are important for studying the impact of air pollutants on health. Existing work typically relies on area-specific, expert-selected attributes of pollution emission (e.g., transportation) and dispersion (e.g., meteorology), building a separate model for each combination of study area, pollutant type, and spatiotemporal scale.
In this project, we are building a data mining approach, PRISMS-DA, that utilizes publicly available OpenStreetMap (OSM) data and IMSC transportation data to automatically generate air quality models for the concentration of any type of pollutant at various temporal scales. Currently, PRISMS-DA automatically generates a (domain-)expert-free model for accurate PM2.5 concentration predictions in the Los Angeles metropolitan area. The automatically generated model can be used to improve air quality models that traditionally rely on expert-selected input.
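The sketch below conveys the flavor of this approach with synthetic data: a regression model maps automatically extracted geographic features to pollutant concentrations. The features and data are invented for the example and are not the PRISMS-DA feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for OSM-derived features at 500 locations, e.g.,
# road density, traffic volume, and industrial land-use fraction.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
pm25 = 12 + 8 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 500)  # synthetic PM2.5

# Fit an expert-free model and predict concentrations at new locations.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, pm25)
print(model.predict(X[:3]))  # predicted PM2.5 for three locations
```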
Artificial Intelligence
As the volume of public and private video data expands at a rapid rate, with YouTube alone reporting more than 100 hours of video posted every minute, it becomes increasingly crucial to be able to quickly find specific moments within this growing aggregation of video data.
In collaboration with IMSC, Nevatia and his team address the problem of efficiently locating specific portions within long expanses of video through semantic content extraction and content indexing. To this end, we are developing a finer set of query terms based on new detection and tracking methodologies. The resulting representations can then be used for extracting contents that will allow understanding of the events occurring in the scene. Such automated tools will be essential for analysts in the intelligence community to cope with and make effective use of the vast quantities of video data that are becoming increasingly available.
Computer Vision
The Computer Vision Laboratory at the University of Southern California has been one of the major centers of computer vision research for thirty years. The lab teams with IMSC to apply research involving image and scene segmentation, video analysis, range data analysis, perceptual grouping, shape analysis, and object recognition. Our approach emphasizes the use of segmented symbolic descriptions. This research has obvious applications to robotics, manufacturing, mobile robots, surveillance, and aerial photo interpretation.
4. TAKING ACTION ON BIG DATA
The ability to take Action on conclusions gained from data science is what separates IMSC at USC from other organizations. Since its inception, IMSC has had 96 invention disclosures filed at the USC Technology Transfer Office, 51 patents filed, six patents issued, 88 commercial licenses and technology transfers, and nine company spin-offs established. Data science is one of the fastest-growing computer science disciplines, and the one poised to make the greatest impact; IMSC aims to remain a leader in applying such technology to real-life Action.
Transportation Policy
Professor Giuliano conducts research on the relationships between land use and transportation, transportation policy analysis, and information technology applications in transportation. Her current research includes examining relationships between land use and freight flows, developing applications for transportation system analysis using archived real-time data, and analyzing commercial and residential development around transit stations.
Precision Medicine
Founded in 2006 by internationally recognized physician visionary Dr. Leslie Saxon, the USC Center for Body Computing (CBC) is an innovation hub designed to bring together digital and life sciences executives, sensor and mobile app investors, strategists, designers, and thought leaders from healthcare, entertainment, and technology to collaborate on transformative care solutions. In collaboration with IMSC and Keck Medicine of USC, the CBC conducts clinical trials and human performance studies. These studies help define and guide members’ product development efforts, validate the efficacy of sensor and app design, and support patient-empowered health using digital tools. The future patient care models being developed at the CBC, including the Virtual Care Clinic, will leverage technology along with physician expertise to bring disease treatment and management options to more people, on demand, at an affordable cost.
Human Performance
The Kuhn laboratory integrates patient, model system, and high-content single-cell data to translate clinically observed correlations into a mechanistic understanding of the physical and biological underpinnings of cancer dynamics. The lab’s organizing framework, the physical dynamics of cancer, focuses on the spatial distributions and temporal evolution of the disease at the cellular, human, and population scales. As part of the Convergence Science Initiative on cancer, the Kuhn lab collaborates with IMSC to disseminate its capabilities and to develop evolving physical sciences concepts that could address specific challenges.
Data-Driven Journalism
Gabriel Kahn has worked as a newspaper correspondent and editor for two decades, including 10 years at The Wall Street Journal, where he served as Los Angeles bureau chief, deputy Hong Kong bureau chief and deputy Southern Europe bureau chief, based in Rome. He has reported from more than a dozen countries on three continents. At USC, his work centers around studying changing business models in media. He is the co-director of the Media, Economics and Entrepreneurship program at Annenberg.
He began collaborating with IMSC in 2016 on the Crosstown Traffic project, which seeks to visualize an array of data about mobility in Los Angeles. Since then, he has been working with IMSC on a series of other projects that seek to tell the story of Los Angeles through data.
IoT Marketplace
The continued evolution of technology puts a plethora of options on every organization’s doorstep. Companies know they need to evolve their products, services, processes, and even their core strategy in an effort to maintain and grow market share. But resources are limited, and the path forward is often uncertain. Many unknowns must be resolved as an organization evolves to take advantage of the opportunities available to it.
Jerry Power and the staff of the Institute for Communication Technology Management (CTM) within the Marshall School of Business work closely with IMSC to shed light on these issues. The CTM team regularly studies market trends on behalf of sponsor companies to understand the enablers and inhibitors of market evolution. These efforts emphasize the business strategies, business processes, and market behaviors that must be understood to translate technology achievements into business success. Working together with IMSC, CTM is able to align technology realization with business strategy.
Virtual Reality
The age of social media and immersive technologies has created a growing need for processing detailed visual representations of ourselves in virtual reality (VR). A realistic simulation of our presence in virtual worlds is unthinkable without a compelling and directable 3D digitization of ourselves. With the wide availability of mobile cameras and the emergence of 3D sensors, Professor Li and his team develop methods that allow computers to digitize, process, and understand dynamic objects from the physical world without professional equipment or user input. His current research focuses on unobtrusive 3D scanning and performance capture of humans in everyday settings, with applications in digital content creation and VR. The objective of his proposed research is to develop automated digitization frameworks that can create high-fidelity virtual avatars using consumer sensors, and a deployable performance-driven facial animation system to enable, quite literally, face-to-face communication in cyberspace.
The industry standard for creating life-like digital characters still relies on a combination of skilled artists and expensive 3D scanning hardware. While recent advances in geometry processing have significantly pushed the capabilities of modeling anatomical human parts, such as faces and bodies, in controlled studio environments, highly convoluted structures such as hairstyles and wearable accessories (glasses, hats, etc.) are still difficult to compute without manual intervention. The task of modeling human parts in the wild is further challenged by occlusions such as hair and clothing, partial visibility, and poor lighting conditions. We aim to develop methods that are operable by untrained users and can automatically generate photorealistic digital models of human faces and hair using accessible sensors. Nowadays, complex facial animations of virtual avatars can be driven directly by a person’s facial performance. The latest techniques are consumer-friendly, real-time, markerless, and calibration-free, and require only a single video camera as input. However, a truly immersive experience requires the user to wear a VR head-mounted display (HMD), which generally occludes a large part of the upper face. Our goal is to enable facial performance capture with VR headsets and transfer true-to-life facial expressions from users to their digital avatars. Inspired by recent progress in deep learning for 2D images, we believe an end-to-end approach to 3D modeling, animation, and rendering is possible using deep neural network-based synthesis and inference techniques.
Smart Buildings
Professor Choi develops integrated, human-centered frameworks for intelligent environmental control in buildings. The physiological signals of the occupants, their environmental satisfaction data, and ambient environmental data are integrated using sensing agents (wearable as well as remote sensors), survey tools, and environmental sensors embedded in the building. This integrative approach enables data-driven, multi-criteria decisions for determining building thermal, air, lighting, and acoustic system controls and designs that can lower energy usage while improving occupant comfort. The incredible volume of data needed to inform these decisions makes IMSC a natural partner in the development of Dr. Choi’s frameworks.
--> Selected Research in Smart Cities
Human-Building Integration
This project develops an integrated human-centered framework for intelligent environmental control in a building. The physiological signals of the occupants, as well as their ambient environmental data, are integrated by using sensing agents (such as wearable/remote sensors) and embedded environmental sensors in the building. This novel concept embodies an integrative approach that promotes viable bio-sensing-driven multi-criteria decisions for determining building thermal and lighting system controls.
This human-centered approach provides a framework that will 1) address sensor data processing and analysis challenges that are inherent in large and dynamic datasets generated from sensing agents; 2) develop methods for optimizing decisions and solutions to multiple-criteria problems pertaining to occupants’ preferences; and 3) establish a human-centered control approach that is integrated with a conventional control system for building retrofits to enable real-time decision making and system optimization that will enhance energy efficient operations and occupants’ comfort.
Human physiological signals, used proactively as an interactive variable in the control system, not only help confirm an occupant’s environmental sensations and comfort perceptions but also allow a building’s systems to be operated optimally without overshooting, which saves energy. Progress on human physiology-based environmental control in the building engineering discipline has so far been limited. Our interdisciplinary research team has identified the possibility of using human physiological signals directly to estimate a user’s thermal and visual sensations in real time with the help of advanced sensing technologies, such as wearable devices. These sensing-driven environmental comfort models were integrated with a building systems control and revealed a 20-35% potential savings in energy usage for cooling, heating, and lighting, compared to conventional rule-based control, by eliminating control overshoot beyond individuals’ comfort ranges.
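A deliberately tiny sketch of the idea follows; the comfort band and update rule are invented for illustration. A physiology-driven control loop nudges the thermal setpoint only when wearable-sensed signals leave the occupant’s comfort band, rather than driving toward a fixed rule-based target:

```python
def adjust_setpoint(setpoint_c, skin_temp_c, comfort_band=(32.0, 34.0), step=0.5):
    """Toy physiology-driven thermostat rule: nudge the thermal setpoint
    when wearable-sensed skin temperature leaves an assumed comfort band,
    instead of overshooting toward a fixed rule-based target."""
    lo, hi = comfort_band
    if skin_temp_c < lo:       # occupant likely cold -> warm the room slightly
        return setpoint_c + step
    if skin_temp_c > hi:       # occupant likely warm -> cool the room slightly
        return setpoint_c - step
    return setpoint_c          # within comfort band: hold, saving energy

print(adjust_setpoint(22.0, skin_temp_c=31.4))  # -> 22.5
```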

Facilities & Resources

Partner Organizations

Viterbi School of Engineering (USC)

Abbreviation

IMSC

Country

United States

Region

Americas

Primary Language

English

Contact Name

Cyrus Shahabi

Contact Title

Director

Contact E-Mail

shahabi@usc.edu

Phone

(213) 740-8945

Address

University of Southern California, Powell Hall of Engineeri…
3737 Watt Way, PHE 306
Los Angeles
CA
90089-0272

[an NSF Graduated Center] The Integrated Media Systems Center (IMSC) is an informatics research center that delivers data-driven solutions for real-world applications. We find enhanced solutions to fundamental Data Science problems and apply these advances to achieve major societal impact. To this end, we target four application domains: Transportation, Health, Media and Entertainment, and Smart Cities. For each domain, we develop large-scale System Integration Prototypes. Each prototype is designed to address real-world problems as well as to conduct fundamental and applied multidisciplinary research in data science. Our work is supported by city, state and federal government competitive research grants and by the support and sponsorship of our industry partners such as Google, Microsoft, Intel, Oracle, Chevron and Northrop Grumman. Over the years IMSC has had a vibrant technology transfer program leading to more than ten successful startups and filing more than one hundred invention disclosures. These activities and our research focus make IMSC a recognized figure in its domain and one of the world’s leading authorities in the emerging field of Geo-Informatics. Founded in 1996 by C. L. Max Nikias – now USC’s President – IMSC is hosted by the USC Viterbi School of Engineering and benefits from the support of the School’s Faculty and Staffs. IMSC has been an energetic force in the expansion of the USC Viterbi School of Engineering, serving as the catalyst for new curricular programs and continuously expands education and research efforts with international programs and collaborations.

Abbreviation

IMSC

Country

United States

Region

Americas

Primary Language

English

Evidence of Intl Collaboration?

Industry engagement required?

Associated Funding Agencies

Contact Name

Cyrus Shahabi

Contact Title

Director

Contact E-Mail

shahabi@usc.edu

Website

General E-mail

Phone

(213) 740-8945

Address

University of Southern California, Powell Hall of Engineeri…
3737 Watt Way, PHE 306
Los Angeles
CA
90089-0272

Research Areas

Data science isn’t limited to signal analysis and databases, nor is it singularly focused on one particular domain. Rather, it is a living pipeline spanning the Acquisition of real data, the management of Access to this information, myriads of Analytical techniques, and, of course, real applications of these Actionable insights. To that end, we have assembled a diverse faculty with specialties that cover each unique aspect of this A4 data pipeline.
1. DATA ACQUISITION
Acquisition is one of the most important steps as it impacts all the other steps of the A4 pipeline regardless of the application.For example, in the area of Smart Cities as much of the nation’s Acquisition infrastructure is underdeveloped, any bias or inconsistency in a dataset can have severe repercussions throughout the analysis phase, leading to significant challenges. In this context, to engineer solutions, we look to advanced sensing techniques involving the Internet of Things (IoT), where data collection can be integrated with consumers’ daily lives. Then, by developing sophisticated control techniques, we can eliminate inconsistency before it begins, and ensure the most thorough Acquisition possible.
Sensors
Murali Annavaram has been a faculty member in the Ming-Hsieh Department of Electrical Engineering at the University of Southern California since 2007. He currently holds the Robert G. and Mary G. Lane Early Career Chair, and his research focuses on energy efficiency and reliability in computing platforms with a focus on energy efficient sensor management for body area sensor networks for real-time, continuous health monitoring. He also has an active research group focused on computer systems architecture exploring reliability challenges in the future CMOS technologies.
Internet of Things (IoT)
Professor Krishnamachari’s research spans the design and evaluation of algorithms and protocols for next-generation IoT networks and connected vehicles. In collaboration with IMSC, his research group also explores traffic estimation, based on urban road sensor data, and green traffic control for smart cities, which investigates novel mechanisms based on speed limits for road stretches to influence drivers and self-driving cars to move through cities in ways that minimize their negative impact on the urban environment. His group also investigates algorithms for dispersed computing that allow for flexible and efficient distributed edge computation, which are likely to play an increasing role in smart communities in the context of applications involving video edge analytics, and distributed network monitoring applications. Lastly, in collaboration with both IMSC and the USC Marshall school of business, Dr. Krishnamachari’s group is helping develop a novel middleware platform for IoT in smart communities called I3, that allows real-time data providers and data consumers to exchange data in return for monetary incentives while ensuring that data owners can control who can see what data and what purpose they can use it for.
Control
Professor Savla has a broad interest in performance analysis and optimization of a variety of transportation systems, from urban traffic to logistics. His group designs tools at the interface of control theory, queuing theory, network flow, and game theory to facilitate analysis and control. The emphasis is on integrating canonical features from transportation systems to extend the applicability of these tools beyond their traditional domains of communication and manufacturing systems. This has also given us an understanding of the spatio-temporal redundancy in the information required about system parameters (e.g., geometric properties, traveler behavior, demand) and real-time state (e.g., congestion, incidents) for various performance metrics. This information has direct implications for data acquisition and processing. He collaborates with IMSC to apply these techniques to arterial and freeway networks, including connected and autonomous vehicles, as well as to dynamic vehicle routing.
Wearables
In the Motor Behavior and Neurorehabilitation Laboratory, we aim to understand the neurobehavioral basis of motor learning. Specifically, we are interested in the brain-behavior relationships that are optimal for the preparation, and execution of skilled movement behaviors in healthy aging and in those recovering from hemiparetic stroke. We have worked with IMSC on Rehabilitation Engineering Research, funded through one CTSI grant (POCM project), leveraging advances in smart technologies including virtual reality applications (Kinect™ camera), and body worn sensors (APDM sensors), for rehabilitation purposes including diagnostics.
2. DATA STORAGE & ACCESS
Access to data is an incredibly complex problem—so much so that the National Academy of Engineering has named access to health informatics one of its top twelve Grand Challenges. It brings together aspects of data science as well as software engineering, consumer policy, and ethics, leading to a truly disciplinary effort. IMSC at USC has developed innovative new techniques in spatial and media data management, as well as pioneered original approaches to data security and privacy. Thus, by focusing on Access, we create an effective bridge between data Acquisition and Analysis.
Spatial Data Management
Professor Shahabi and the InfoLAB conduct pioneering research in areas related to data management (including query processing and analysis), data integration, data mining and machine learning, geospatial data management, and large-scale rendering and visualization.
--> Selected Research in Transportation
Traffic Forecasting
A spatiotemporal network is a spatial network (e.g., road network) along with the corresponding time-dependent weight (e.g., travel time) for each edge of the network. The design and analysis of policies and plans on spatiotemporal networks require realistic models that accurately represent the temporal behavior of such networks. We build a traffic modeling framework for road networks that enables:
- generating an accurate temporal model from archived temporal data collected from a spatiotemporal network (so as to be able to publish the temporal model of the spatiotemporal network without having to release the real data)
- augmenting any given spatial network model with a corresponding realistic temporal model custom-built for that specific spatial network (in order to be able to generate a spatiotemporal network model from a solely spatial network model).
We used the proposed framework to generate the temporal model of the Los Angeles County freeway network and publish it for public use.
We have built the technologies for solving traffic prediction problems using the traffic sensor datasets from the IMSC TransDec platform. Due to thorough sensor instrumentations of the road network in Los Angeles as well as the vast availability of auxiliary commodity sensors from which traffic information can be derived (e.g., CCTV cameras, and GPS devices), a large volume of real-time and historical traffic data at very high spatial and temporal resolutions have become available. Therefore, how to mine valuable information from these data is important. We have piloted the studies of traffic prediction for individual road segments using such large datasets. We utilized the spatiotemporal behaviors of rush hours and events to perform an accurate prediction of both short-term and long-term average speed on road-segments, even in the presence of infrequent events (e.g., accidents). By utilizing both the topology of the road network and sensor dataset, we overcame the sparsity of our sensor dataset and extend the prediction task to the entire road network. We also addressed the problems related to the impact of traffic incidents. We developed a set of methods to predict the dynamic evolution of the impact of incidents.
We then study the online traffic prediction problem. One key challenge in traffic prediction is how much to rely on prediction models that are constructed using historical data in real-time traffic situations, which may differ from that of the historical data and can change over time. To overcome this challenge, we propose a novel online framework that learns from the current traffic situation (context) in real-time and predicts the future traffic by matching the current situation to the most effective prediction model trained using historical data. As real-time traffic data arrive, the traffic context space is adaptively partitioned to efficiently estimate the effectiveness of each base predictor in a different situation.
Media Data Management
Recently, new forms of multimedia data (such as text, numbers, tags, signals, geo-tag, 3D/VR/AR and sensor data, etc.) has emerged in many applications on top of conventional multimedia data (image, video, audio). Multimedia has become the “biggest of big data” as the foundation of today’s data-driven discoveries. Moreover, such new multimedia data is increasingly involved in a growing list of science and engineering domains, such as driverless cars, drones, smart cities, biomedical instruments, and security surveillance. Multimedia data has also has embedded information that can be mined for various purposes. Thus, storing, indexing, searching, integrating, and recognizing the vast amounts of data create unprecedented challenges. IMSC is working on solutions for acquisition, management, and analysis of a large multimedia data.
--> Selected Research in Media
Multimedia Information Processing and Retrieval
Large-scale spatial-visual search faces two major challenges: search performance due to the large volume of the dataset and inaccuracy of search results due to the image matching imprecisions. First, the large scale of geo-tagged image datasets and the demand for real-time response make it critical to develop efficient spatial-visual query processing mechanisms. Towards this end, we focus on designing index structures that expedite the evaluation of spatial-visual queries. Second, retrieving relevant images is challenging due to two types of inaccuracies: spatial (due to camera position and scene location mismatch) and visual (due to dimensionality reduction). We propose a set of novel hybrid index structures based on R*-tree and LSH, i.e., a two-level index structure consisting of one primary index associated with a set of secondary structures. In particular, there are two variations to this class: using R*-tree as a primary structure (termed Augmented Spatial First Index) or using LSH as primary (termed Augmented Visual First Index). We experimentally showed that all hybrid structures greatly outperform the baselines with the maximum speed-up factor of 46.
Geospatial Multimedia Sentiment Analysis
Even though many sentiment analysis techniques have been developed and available, there are still limitations in reliably using sentiment analysis since there is no dominantly accepted technique. Taking advantage of existing state-of-the-art sentiment classifiers, we propose a novel framework for geo-spatial sentiment analysis of disaster-related multimedia data objects. Our framework addresses three types of challenges: the inaccuracy and discrepancy associated with various text and image sentiment classifiers, the geo-sentiment discrepancy among data objects in a local geographical area, and observing diverse sentiments from multimedia data objects (i.e., text and image). To overcome these challenges, we proposed a novel framework composed of three phases: sentiment analysis, spatial-temporal partitioning, and geo-sentiment modeling. To estimate the aggregated sentiment score for a set of objects in a local region, our geo-sentiment model considers the sentiment labels generated by multiple classifiers in addition to those of geo-neighbors. To obtain sentiment with high certainty, the model measures the disagreement among correlated sentiment labels either by entropy or variance metric. We used our framework to analyze the disasters of Hurricane Sandy and Napa Earthquake based on datasets collected from Twitter and Flickr. Our analysis results were analogous to FEMA, and USGS reports.
--> Selected Research in Smart Cities
Big Data in Disasters
In a disaster, fast initial data collection is critical for first response. With the wide availability of smart mobile devices such as smartphones, a dynamic and adaptive crowdsourcing on disaster and after disaster has been getting attention in disaster situation awareness. We have been working on maximizing visual awareness in a disaster using smartphones, especially with constrained bandwidth resources. Specifically, We are currently performing a federally funded international joint project (US NSF and Japan JST joint) about data collection in disasters using MediaQ.
--> Selected Research in Transportation
Realtime Traffic Flow Data Extraction
Vision-based traffic flow analysis is getting more attention due to its non-intrusive nature. However, real-time video processing techniques are CPU-intensive so accuracy of extracted traffic flow data from such techniques may be sacrificed in practice. Moreover, the traffic measurements extracted from cameras have hardly been validated with real dataset due to the limited availability of real world traffic data. This study provides a case study to demonstrate the performance enhancement of vision-based traffic flow data extraction algorithm using a hardware device, Intel video analytics coprocessor, and also to evaluate the accuracy of the extracted data by comparing them to real data from traffic loop detector sensors in Los Angeles County (from our transportation project).
Security
Data breaches harm nations, businesses, and individuals. The Democratic National Committee breach affected the United States presidential election of 2016, the Anthem breach imperiled 80 million United States patients, the Equifax breach endangered 143 million United States citizens, the Ashley Madison breach led to suicides, and 60% of the small businesses that suffer a data breach close down within six months. An average of five million data records are breached daily; nine billion have been breached since 2013. According to the US Department of Health and Human Services, 174 million health records of the United States residents have been breached. Naveed is developing systems to prevent such data breaches with a focus on electronic health records.
Privacy
Data is at the core of many of today’s innovations, but often the data used is highly personal or sensitive. My research aims to develop and deploy algorithms and technologies that enable data-driven innovations while preserving privacy. To that end, I study how the data-mining techniques commonly used in the context of the web and social applications have to be changed so that they preserve the rigorous privacy guarantee of differential privacy while remaining useful, and demonstrate how quantitative analyses and machine learning techniques can identify novel privacy risks and support development of tools to empower companies, governments, and individuals to protect privacy.
3. DATA ANALYSIS
Analysis is the heart of data science research, combining technical advances with creativity to glean useful insights from vast quantities of information. Machine learning, game theory, signal processing, and spatial computing are some of the areas in which USC faculty excel, contributing to our impact on local communities and the nation at large. With powerful Analysis, we can transition the world’s information from messy, disorganized data to concise, relevant patterns, which in turn lead to Actionable systems.
Machine Learning
The USC Melady Lab develops machine learning and data mining algorithms for solving problems involving data with special structure, including time series, spatiotemporal data, and relational data. We work closely with domain experts to solve challenging problems and make significant impacts in computational biology, social media analysis, climate modeling, health care, and business intelligence.
Game Theory
Classically in economics, the study of how information influences strategic interactions has been largely descriptive. Much of the recent work in my group examines the associated prescriptive question, which is becoming increasingly important in today’s information economy: what information should a system designer make available to agents so that they can collectively make good decisions? This task is referred to as persuasion or information structure design.
This type of research holds special promise in the field of transportation as global optimizations, though not always locally advantageous, are necessary to improve overall traffic flow.
Traffic Management
Our research lays the groundwork for an algorithmic theory of persuasion, building on top of a recent flurry of work on persuasion in the economics community. In [1], I proved that optimal persuasion is computationally intractable in the generic case of two competing agents and a public communication channel. This explains the lack of an economic characterization of optimal policies. In [2], we showed how to circumvent this impossibility in multi-agent settings which are “smooth”. Somewhat surprisingly, we showed in [3] that optimal persuasion admits computationally efficient, instructive, and near-optimal policies when there is only one decision-making agent. In [4], we show a similar positive result when there are multiple agents with binary actions, no inter-agent externalities, and the system designer can communicate privately with each of the agents. I also recently wrote a survey [5] on the topic.
Being that algorithmic persuasion is coming into its own as a field, we believe the time is ripe for a killer application of our models and algorithms. One promising application is traffic management. In an ongoing collaboration with IMSC, and working with the LA Metro, we are exploring the use of algorithmic persuasion as a tool for traffic management in Los Angeles. Persuasion algorithms would be informed by real-time traffic sensors and historical traffic data, and would be integrated into a GPS navigation app.
Signal Processing
In recent years, Prof Ortega and his team have focused their research on the development of novel tools for Graph Signal Processing (GSP). GSP methods can be used to analyze sensor and communication networks, traffic networks and electrical grids, online social networks, as well as graphs associated to machine learning tasks. On the theoretical front, this work has focused on designing graph filters, anomaly detection, graph sampling and learning graphs from data. These methods are being applied to the various applications currently studied in IMSC. As an example, within the Health domain, his group is developing approaches for human activity analysis using GSP on the graph connecting the estimated positions of body joints (i.e., the skeleton graph). This work has been applied for motion analysis in Parkinson’s Disease patients and also being explored for manufacturing applications. As another example, in the Transportation domain, GSP can be used to analyze traffic patterns and detect dependencies in traffic flows across cities. Sensor networks deployed in Smart Cities applications can benefit from sampling methods developed in GSP, which can also be used to detect anomalies, indicative of sensor malfunction. Lastly, both sampling and filtering can be applied to analyze information collected from online social networks.
Spatial Computing
We develop computer algorithms and build intelligent applications that discover, collect, fuse, and analyze data from heterogeneous sources to solve real world problems.
Air Quality Modeling
Air quality models are important for studying the impact of air pollutant on health conditions. Existing work typically relies on area-specific, expert-selected attributes of pollution emissions (e,g., transportation) and dispersion (e.g., meteorology) for building the model for each combination of study areas, pollutant types, and spatiotemporal scales.
In this project, we are building a data mining approach, PRISMS-DA, that utilizes publicly available OpenStreetMap (OSM) data and the IMSC transportation data to automatically generate air quality model for the concentrations for any type of pollutants at various temporal scales. Currently, PRISMS-DA automatically generates (domain-) expert-free model for accurate PM2.5 concentration predictions in the Los Angeles Metropolitan Area. The automatically generated model can be used to improve air quality models that traditionally rely on expert-selected input.
Artificial Intelligence
As the volume of public and private video data expands at a rapid rate, with YouTube alone reporting more than 100 hours of video uploaded every minute, it becomes increasingly crucial to be able to quickly find specific moments within this growing aggregation of video data.
In collaboration with IMSC, Nevatia and his team address the problem of efficiently locating specific portions within long expanses of video through semantic content extraction and content indexing. To this end, the team is developing a finer set of query terms based on new detection and tracking methodologies. The resulting representations can then be used to extract content that supports understanding of the events occurring in a scene. Such automated tools will be essential for analysts in the intelligence community to cope with, and make effective use of, the vast quantities of video data that are becoming increasingly available.
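One simple way to make extracted semantic content searchable is an inverted index from detected labels to time intervals. The sketch below is a hypothetical minimal version of such an index, not the team's actual system; the video IDs and detections are invented.

from collections import defaultdict

# Minimal inverted index: semantic label -> list of (video_id, start_s, end_s)
index = defaultdict(list)

def ingest(video_id, detections):
    """detections: iterable of (label, start_s, end_s) from a detector/tracker."""
    for label, start, end in detections:
        index[label].append((video_id, start, end))

def query(label, t_min=0.0, t_max=float("inf")):
    """Return all indexed moments for a label overlapping [t_min, t_max]."""
    return [(v, s, e) for v, s, e in index[label]
            if s < t_max and e > t_min]

# Hypothetical detector output for one video
ingest("cam01_2018-05-01", [("person", 12.0, 45.5),
                            ("car", 30.2, 60.0),
                            ("person", 300.0, 320.0)])
print(query("person", t_min=0, t_max=100))   # -> the first person track only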
Computer Vision
The Computer Vision Laboratory at the University of Southern California has been one of the major centers of computer vision research for thirty years. The lab teams with IMSC to apply research involving image and scene segmentation, video analysis, range data analysis, perceptual grouping, shape analysis, and object recognition. Our approach emphasizes the use of segmented symbolic descriptions. This research has direct applications in robotics, manufacturing, mobile robots, surveillance, and aerial photo interpretation.
4. TAKING ACTION ON BIG DATA
The ability to take Action on conclusions gained from the application of data science is what separates IMSC at USC from other organizations. Since its inception, IMSC has had 96 invention disclosures filed at the USC Technology Transfer Office, 51 patents filed, six patents issued, 88 commercial licenses and technology transfers, and nine company spin-offs established. Data science is one of the fastest-growing computer science disciplines, and the one poised to make the greatest impact; IMSC aims to continue to be a leader in applying such technology to real-life Action.
Transportation Policy
Professor Giuliano conducts research on relationships between land use and transportation, transportation policy analysis, and information technology applications in transportation. Her current research includes examination of relationships between land use and freight flows, development of applications for transportation system analysis using archived real-time data, and analysis of commercial and residential development around transit stations.
Precision Medicine
Founded in 2006 by internationally recognized physician visionary Dr. Leslie Saxon, the USC Center for Body Computing (CBC) is a thought-leader innovation hub designed to bring together digital and life sciences executives, sensor and mobile app investors, strategists, designers, and thought leaders from healthcare, entertainment, and technology to collaborate on transformative care solutions. In collaboration with IMSC and Keck Medicine of USC, the CBC conducts clinical trials and human performance studies. These studies help define and guide members’ product development efforts, validate the efficacy of sensor and app design, and support patient-empowered health using digital tools. Future patient care models being developed at the CBC, including the Virtual Care Clinic, will leverage technology along with physician expertise to bring disease treatment and management options to more people, on demand, at an affordable cost.
Human Performance
The Kuhn laboratory aims to integrate patient, model system, and high-content single-cell data to translate clinically observed correlations into a mechanistic understanding of the physical and biological underpinnings of cancer dynamics. The lab’s organizing framework, the physical dynamics of cancer, focuses on the spatial distributions and temporal evolution of the disease at the cellular, human, and population scales. As part of the Convergence Science Initiative on cancer, the Kuhn lab works with IMSC to disseminate its capabilities and to collaborate on evolving physical sciences concepts that could address specific challenges.
Data-Driven Journalism
Gabriel Kahn has worked as a newspaper correspondent and editor for two decades, including 10 years at The Wall Street Journal, where he served as Los Angeles bureau chief, deputy Hong Kong bureau chief and deputy Southern Europe bureau chief, based in Rome. He has reported from more than a dozen countries on three continents. At USC, his work centers on studying changing business models in media. He is the co-director of the Media, Economics and Entrepreneurship program at Annenberg.
He began collaborating with IMSC in 2016 on the Crosstown Traffic project, which seeks to visualize an array of data about mobility in Los Angeles. Since then, he has been working with IMSC on a series of other projects that seek to tell the story of Los Angeles through data.
IoT Marketplace
The continued evolution of technology puts a plethora of options on every organization’s doorstep. Companies know they need to evolve products, services, processes, and even their core strategy in an effort to maintain and grow market share. But resources are limited and the path forward is often uncertain. Many unknowns will need to be resolved as an organization evolves to take advantage of the opportunities available to it.
Jerry Power and the staff at the Institute for Communication Technology Management (CTM) within the Marshall School of Business work closely with IMSC to shed light on these issues. The CTM team regularly studies market trends on behalf of sponsor companies to understand the enablers and inhibitors of market evolution. These efforts include an emphasis on business strategy, business processes, and market behaviors that must be understood to translate technology achievements into business success. Working together with IMSC, CTM is able to align technology realization with business strategy.
Virtual Reality
The age of social media and immersive technologies has created a growing need for processing detailed visual representations of ourselves in virtual reality (VR). A realistic simulation of our presence in virtual worlds is unthinkable without a compelling and directable 3D digitization of ourselves. With the wide availability of mobile cameras and the emergence of 3D sensors, Professor Li and his team develop methods that allow computers to digitize, process, and understand dynamic objects from the physical world without the use of professional equipment or user input. His current research focuses on unobtrusive 3D scanning and performance capture of humans in everyday settings, with applications in digital content creation and VR. The objective of this research is to develop automated digitization frameworks that can create high-fidelity virtual avatars using consumer sensors, and a deployable performance-driven facial animation system to enable, quite literally, face-to-face communication in cyberspace.
The industry standard for creating life-like digital characters still relies on a combination of skilled artists and expensive 3D scanning hardware. While recent advances in geometry processing have significantly pushed the capabilities of modeling anatomical human parts such as faces and bodies in controlled studio environments, highly convoluted structures such as hairstyles and wearable accessories (glasses, hats, etc.) are still difficult to compute without manual intervention. The task of modeling human parts in the wild is further challenged by occlusions such as hair and clothing, partial visibility, and poor lighting conditions. We aim to develop methods that are operable by untrained users and can automatically generate photorealistic digital models of human faces and hair using accessible sensors.

Nowadays, complex facial animations of virtual avatars can be directly driven by a person’s facial performance. The latest techniques are consumer-friendly, real-time, markerless, calibration-free, and require only a single video camera as input. However, a true immersive experience requires a user to wear a VR head-mounted display (HMD), which generally occludes a large part of the upper face region. Our goal is to enable facial performance capture with VR headsets and transfer true-to-life facial expressions from users to their digital avatars. Inspired by the recent progress in deep learning techniques for 2D images, we believe that an end-to-end approach for 3D modeling, animation, and rendering is possible using deep neural network-based synthesis and inference techniques.
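As a small illustration of the performance-driven animation idea, one common representation is a linear blendshape rig: a neutral mesh plus additive expression offsets, driven by per-frame weights estimated from a tracked face. The toy geometry, expression names, and weights below are hypothetical, not Professor Li's actual pipeline.

import numpy as np

# Linear blendshape model: a neutral face mesh plus additive expression deltas.
n_vertices = 4          # tiny toy mesh; real rigs have tens of thousands
neutral = np.zeros((n_vertices, 3))
blendshapes = {         # per-expression vertex offsets (hypothetical values)
    "smile":    np.array([[0, .1, 0], [0, .1, 0], [0, 0, 0], [0, 0, 0]], float),
    "jaw_open": np.array([[0, 0, 0], [0, 0, 0], [0, -.3, 0], [0, -.3, 0]], float),
}

def animate(weights):
    """Blend the rig with per-frame expression weights."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * blendshapes[name]
    return mesh

# Per-frame weights would come from a face tracker or HMD-mounted sensors
frame_weights = {"smile": 0.8, "jaw_open": 0.2}
print(animate(frame_weights))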
Smart Buildings
Professor Choi develops integrated, human-centered frameworks for intelligent environmental control in a building. The physiological signals of the occupants, their environmental satisfaction data, and their ambient environmental data are integrated by using sensing agents (such as wearable as well as remote sensors), survey tools, and embedded environmental sensors in the building. This integrative approach enables users’ data-driven, multi-criteria decisions for determining building thermal, air, lighting, and acoustic environment system controls and designs that will potentially lower energy usage while improving occupant comfort. The incredible volume of data needed to inform these decisions makes IMSC a natural partner for the development of Dr. Choi’s frameworks.
Selected Research in Smart Cities
Human-Building Integration
This project develops an integrated human-centered framework for intelligent environmental control in a building. The physiological signals of the occupants, as well as their ambient environmental data, are integrated by using sensing agents (such as wearable/remote sensors) and embedded environmental sensors in the building. This novel concept embodies an integrative approach that promotes viable bio-sensing-driven multi-criteria decisions for determining building thermal and lighting system controls.
This human-centered approach provides a framework that will 1) address sensor data processing and analysis challenges that are inherent in large and dynamic datasets generated from sensing agents; 2) develop methods for optimizing decisions and solutions to multiple-criteria problems pertaining to occupants’ preferences; and 3) establish a human-centered control approach that is integrated with a conventional control system for building retrofits to enable real-time decision making and system optimization that will enhance energy efficient operations and occupants’ comfort.
Human physiological signals, used proactively as an interactive variable in the control system, not only help to confirm an occupant’s environmental sensations and comfort perceptions, but also allow a building’s systems to be operated optimally without overshooting, which results in energy savings. There has still been only limited progress on human physiology-based environmental control approaches in the building engineering discipline. Our interdisciplinary research team has identified the possibility of using human physiological signals directly to estimate a user’s thermal and visual sensations in real time with the help of advanced sensing technologies, such as wearable devices. These sensing-driven environmental comfort models were integrated with a building systems control and revealed a 20-35% potential savings in energy usage for cooling, heating, and lighting, compared to conventional rule-based control, by eliminating overshooting of systems control within individuals’ comfort ranges.
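A minimal sketch of this control idea follows: a (hypothetical) comfort model maps wearable signals to a thermal sensation score, and the setpoint is adjusted only while the occupant is outside their comfort band, which avoids the overshooting described above. The model coefficients, thresholds, and step size are illustrative assumptions, not the project's calibrated models.

def estimate_thermal_sensation(skin_temp_c, heart_rate_bpm):
    """Hypothetical comfort model: maps physiological signals to a
    sensation score on an ASHRAE-style scale (-3 cold .. +3 hot)."""
    return 0.6 * (skin_temp_c - 33.0) + 0.02 * (heart_rate_bpm - 70.0)

def control_step(setpoint_c, skin_temp_c, heart_rate_bpm,
                 comfort_band=(-0.5, 0.5), step_c=0.5):
    """Adjust the HVAC setpoint only when the occupant is outside their
    comfort band; otherwise hold, avoiding overshoot and saving energy."""
    sensation = estimate_thermal_sensation(skin_temp_c, heart_rate_bpm)
    if sensation > comfort_band[1]:      # occupant feels warm -> cool down
        return setpoint_c - step_c
    if sensation < comfort_band[0]:      # occupant feels cool -> warm up
        return setpoint_c + step_c
    return setpoint_c                    # comfortable: hold the setpoint

print(control_step(23.0, skin_temp_c=34.5, heart_rate_bpm=85))  # -> 22.5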

Facilities & Resources

Partner Organizations

Viterbi School of Engineering (USC)