The objective of the Theme-based Research Scheme (TRS) is to focus academic research efforts of the institutions on themes of strategic importance to the long-term development of Hong Kong. The funded projects led by the Engineering Faculty Members are shown below:
Institute of Medical Intelligence and XR
2022/23 (Twelfth Round)
Funding amount: HK$50.607M
Participating Institutions: CUHK, HKU, HKUST, PolyU
Project Coordinator: Prof. Pheng Ann Heng
Technological innovation presents new and promising ways to improve medical diagnosis, treatment, education, and healthcare services with ever-increasing rigor, subtlety, insight, and precision. Among these advanced technologies, artificial intelligence (AI) and extended reality (XR) are growing fast and driving major transformations in medicine and healthcare. XR refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. Integrating AI and XR can open many possibilities for delivering precision medicine for next-generation healthcare. Yet challenges remain in applying AI/XR to medical image computing and computer-assisted intervention in real-world clinical applications. Our project's objective is to build a world-class institute for medical intelligence and XR in Hong Kong by developing cutting-edge techniques aimed at overcoming these challenges and facilitating “one-stop” medicine and healthcare services covering screening, diagnosis, treatment, management, and prognosis. Specifically, we will address the following major challenges and questions: (1) How does medical intelligence help precision medicine? (2) How can intelligent data analytics be delivered to clinicians/patients in a human-centered way with AI and XR? (3) How can intuitive AI-enabled interaction be facilitated for clinicians with future intelligent XR systems? We will innovate solutions to address these challenges: (1) Intelligent Personalized Diagnosis (IPD) for diagnosis via advanced medical image analysis; (2) AI-XR Interaction & Virtual Surgery (IVS) for next-generation visualization, assessment, treatment coordination, precise planning, and surgical training and education; (3) Intraoperative AI-AR Assisted Surgery (IAS) for dynamic medical interpretation, efficient execution of a surgical plan, and real-time fused intraoperative image guidance; and (4) an integrated multi-facet pipeline platform for applications to liver cancer (hepatocellular carcinoma) and kidney cancer diagnosis, treatment, prognosis, and medical training. Our research team has collaborated fruitfully over a long period, and our complementary backgrounds and strengths enable strong synergies. Ultimately, our collective efforts will advance the frontier of AI and XR in medical and healthcare applications.
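As a rough illustration of the kind of voxel-wise analysis that underlies the IPD pillar, the sketch below runs a toy 3D convolutional network over a synthetic scan volume to produce a per-voxel lesion probability map. The tiny network, input shape, and decision threshold are hypothetical placeholders for illustration only and do not represent the project's actual models or clinical data.

```python
# Hypothetical sketch of AI-assisted lesion screening on a 3D scan.
# The toy network, input shape, and threshold are illustrative only;
# they do not reflect the project's actual models or data.
import torch
import torch.nn as nn

class TinyLesionNet(nn.Module):
    """Toy 3D CNN mapping a CT/MR volume to a per-voxel lesion probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(8, 1, kernel_size=1)  # one logit per voxel

    def forward(self, x):
        return self.head(self.features(x))

model = TinyLesionNet().eval()

# Stand-in for a preprocessed, intensity-normalised volume (batch, channel, D, H, W).
volume = torch.randn(1, 1, 32, 64, 64)

with torch.no_grad():
    prob = torch.sigmoid(model(volume))           # per-voxel lesion probability
    mask = prob > 0.5                             # illustrative decision threshold
    lesion_fraction = mask.float().mean().item()  # fraction of voxels flagged

print(f"Flagged {lesion_fraction:.1%} of voxels as suspicious")
```

In practice a clinically useful model would be trained on annotated scans and evaluated against radiologist ground truth; the untrained toy network here only shows the shape of the computation.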
Research and Development of Artificial Intelligence in Extraction and Identification of Spoken Language Biomarkers for Screening and Monitoring of Neurocognitive Disorders
2019/20 (Ninth Round)
Funding amount: HK$45M
Participating Institutions: CUHK, HKUST, PolyU
Project Coordinator: Prof. Helen Meng
Population ageing is a global concern. According to WHO, the share of the world's population aged 60+ will nearly double to 22% by 2050, while the share of Hong Kong's population aged 65+ will rise to 35%. Ageing is accompanied by various high-burden geriatric syndromes, which escalate public healthcare expenditures. This situation, coupled with a shrinking workforce and narrowing tax base, jeopardizes our society's sustainability. Neurocognitive disorders (NCD) – including age-related cognitive decline, mild cognitive impairment, and various types of dementia – are particularly prominent in older adults. Dementia has an insidious onset followed by gradual, irreversible deterioration in memory, communication, judgment, and other domains; care costs are estimated at USD 1 trillion today and are expected to double by 2030. This presents a dire need for better disease screening and management. NCD diagnoses and monitoring are largely conducted by clinical professionals face-to-face using neuropsychological tests. Such testing is limited by clinician shortages, by snapshots of cognition that ignore intra-individual variability, by reliance on subjective recall of cognitive functioning, by inter-rater variability in assessment, and by language/cultural biases. To address these issues, we will develop an automated, objective, highly accessible evaluation platform based on inexpensively acquirable biomarkers for NCD screening and monitoring. Platform accessibility enables active, remote monitoring and the generation of patient alerts for prompt treatment between clinical visits. Collecting individualized "big data" over time enables flagging of subtle changes in cognition for early detection of cognitive decline. These actions will prevent under-diagnosis, enhance disease management, delay institutionalization, and lower care costs. NCD often manifests in communicative impairments. Hence, we target spoken language biomarkers – non-intrusive alternatives to blood tests and brain scans for NCD screening and monitoring. Spoken language can be easily captured remotely, and speech event records (e.g. latencies, dysfluencies) at millisecond resolution enable sensitive cognitive assessments. We will develop Artificial Intelligence (AI)-driven technologies to automatically extract spoken language biomarkers. Our work is novel in its comprehensive dimensional coverage of conversational spoken language dialogs (from hesitations to dialog coherence), using fit-for-purpose deep learning techniques for feature extraction and selection. Our systems will be highly adaptable across environments to ensure consistent, objective NCD assessments. Our research will offer unprecedented data and technological support for early NCD diagnoses and timely clinical care. This aligns with WHO's plan to make dementia a public health and social care priority at national and international levels. We aim to control the overwhelming burden of NCD through AI-enabled healthcare that better supports patients and caregivers in Hong Kong.
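To make the notion of millisecond-resolution spoken language biomarkers concrete, the sketch below computes simple pause statistics from a waveform using energy-based voice activity detection. The synthetic signal, frame length, and energy threshold are illustrative assumptions; the project's actual feature extraction relies on much richer, deep-learning-based processing of real conversational speech.

```python
# Illustrative sketch: extracting simple timing features (pause counts and
# durations) from a speech waveform via energy-based voice activity detection.
# The thresholds and the synthetic signal are placeholders, not the project's
# actual biomarker pipeline.
import numpy as np

SR = 16_000                    # sample rate (Hz)
FRAME = int(0.02 * SR)         # 20 ms analysis frames

# Synthetic "recording": two bursts of speech-like noise separated by silence.
rng = np.random.default_rng(0)
speech = rng.normal(0, 0.3, SR)            # 1 s of "speech"
silence = np.zeros(int(0.8 * SR))          # 0.8 s pause
audio = np.concatenate([speech, silence, speech])

# Frame-level RMS energy and a simple silence threshold.
n_frames = len(audio) // FRAME
frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)
rms = np.sqrt((frames ** 2).mean(axis=1))
voiced = rms > 0.05                        # illustrative threshold

# Pause statistics: runs of unvoiced frames that end when speech resumes.
pause_lengths = []
run = 0
for v in voiced:
    if not v:
        run += 1
    elif run:
        pause_lengths.append(run * FRAME / SR)
        run = 0

print(f"pauses: {len(pause_lengths)}, mean pause: {np.mean(pause_lengths):.2f} s")
```

Features of this kind (pause frequency, latency before responses, dysfluency rates) are the low-level building blocks on which higher-level measures such as dialog coherence can be layered.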
Advancing Emerging Research and Innovations Important to Hong Kong
Project Title: Image-guided Automatic Robotic Surgery
2018/19 (Eighth Round)
Funding amount: HK$45M
Participating Institutions: CUHK
Project Coordinator: Prof. Yunhui Liu
Robotics is widely considered to be one of the key technological drivers of economic growth and competitiveness. It is therefore important for Hong Kong to develop its own robotics technology. The challenge lies in where to position Hong Kong in this extremely broad interdisciplinary area. Since Hong Kong has the best healthcare system in the region, developing robotics technology for healthcare is an obvious strategy that builds upon our existing strengths.
The objective of this project is to establish a world-class research center in surgical robotics in Hong Kong by forming an interdisciplinary team with the necessary expertise in engineering and medicine from local universities, and by collaborating with internationally respected institutions such as Intuitive Surgical Inc., Imperial College London, and Johns Hopkins University. Existing surgical robots operate in a remote-control mode in which a surgeon tele-operates the robot with full attention. It is widely expected that such remote-controlled robots will be replaced by next-generation systems that assist surgeons with high-level intelligence and automatically perform particular steps of surgical procedures. The development of such intelligent surgical robots presents several major scientific challenges, including (1) how to efficiently and reliably sense surgical objects/fields; (2) how to automatically carry out pre-operative (pre-op) surgical planning and navigate the robots in a highly dynamic and patient-specific environment; (3) how to control the actions of surgical robots safely and accurately; and (4) how to equip surgical robots with high-level intelligence, e.g., situation awareness and reasoning capabilities. By combining expertise and experience across these areas, we aim to develop innovative solutions to these challenges, including (1) novel systems and algorithms for real-time sensing of the 3D geometry, force, and biomechanical properties of surgical objects; (2) data-driven surgical planning and navigation using the large volume of robotic surgery image data from the Prince of Wales Hospital (PWH) and Intuitive Surgical Inc.; (3) visually servoed controllers for robots interacting with soft tissues; and (4) robotic surgery intelligence based on deep learning. These technologies will be integrated into a prototype image-guided surgical robot that can automatically assist surgeons and perform a single surgical step or several connected steps of a procedure. We will use total laparoscopic hysterectomy (TLH) as the example procedure to validate the system through ex-vivo experiments and, if approved, through clinical trials and pilot applications.
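As an illustration of the visually servoed control mentioned in (3), the sketch below implements the standard textbook image-based visual servoing law, in which a proportional controller drives the error between current and desired image features to zero via the pseudo-inverse of the stacked interaction matrix. The feature coordinates, depth estimates, and gain are hypothetical values, and the sketch deliberately ignores the soft-tissue interaction that the project itself targets.

```python
# Textbook image-based visual servoing (IBVS) sketch: a proportional law
# drives the error between current and desired image features to zero.
# Feature coordinates, depths, and the gain are illustrative values only.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix for one point feature at normalized image
    coordinates (x, y) with estimated depth Z (Chaumette & Hutchinson)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

# Current and desired positions of three tracked features (normalized coords).
s      = np.array([0.10, 0.05, -0.12, 0.08, 0.02, -0.15])
s_star = np.array([0.00, 0.00, -0.10, 0.10, 0.00, -0.10])
depths = [0.5, 0.6, 0.55]           # rough depth estimates (m)

# Stack per-feature interaction matrices into a 6x6 matrix.
L = np.vstack([interaction_matrix(s[2*i], s[2*i+1], depths[i]) for i in range(3)])

# Proportional control: commanded camera twist (vx, vy, vz, wx, wy, wz).
lam = 0.5                            # illustrative gain
v_cam = -lam * np.linalg.pinv(L) @ (s - s_star)
print("commanded camera twist:", np.round(v_cam, 4))
```

A controller for soft-tissue interaction would additionally need deformation and force feedback in the loop; this rigid-scene version only shows the basic servoing principle.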
Smart Solar Energy Harvesting, Storage, and Utilization
2013/14 Exercise (Third Round)
Funding amount: HK$60.33M
Participating Institutions: CUHK, PolyU, HKUST, HKU
Project Coordinator: Prof. C.P. Wong
In this project, Prof. Wong Ching-ping, Dean of Engineering, and his team study new technologies for solar energy harvesting, storage, and utilization. To utilize solar energy sustainably, intelligent power distribution grids need to be developed locally for solar energy generation, storage, and utilization at affordable cost and with enhanced security of supply, achieved through flexible transition between grid-interconnected and islanded operating modes. These issues are in line with the strategic objectives on sustainable development outlined by the Hong Kong Government in 2005.
The team will develop high-performance vacuum-deposited thin-film photovoltaic (PV) devices and modules that will make low-cost, high-throughput, large-area PV production possible. They will explore new materials and processing approaches for high-energy-density batteries and supercapacitors in order to realize a hybrid storage system. They will also formulate strategies to integrate, manage, and control the various subsystems to enhance the efficiency and security of energy utilization.
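A toy illustration of one possible integration strategy for such a hybrid storage system is sketched below: slow variations in the solar surplus are assigned to the battery and fast fluctuations to the supercapacitor via a simple low-pass split. The solar profile, load, and filter constant are hypothetical and are not drawn from the project.

```python
# Toy sketch of hybrid storage dispatch: slow variations in the solar
# surplus go to a battery, fast fluctuations to a supercapacitor.
# The solar profile, load, and filter constant are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)

# Hypothetical PV output (kW): a daytime bell curve plus cloud-induced noise.
pv = np.clip(5 * np.exp(-((hours - 12) ** 2) / 8) + rng.normal(0, 0.5, 24), 0, None)
load = np.full(24, 2.0)            # flat 2 kW demand for illustration
surplus = pv - load                # positive: charge storage, negative: discharge

# Low-pass split: exponential moving average -> battery, residual -> supercapacitor.
alpha = 0.3
battery_ref = np.zeros(24)
for t in range(1, 24):
    battery_ref[t] = (1 - alpha) * battery_ref[t - 1] + alpha * surplus[t]
supercap_ref = surplus - battery_ref

print("battery power refs (kW):  ", np.round(battery_ref, 2))
print("supercap power refs (kW): ", np.round(supercap_ref, 2))
```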