Seminar Series 2020/2021
July 2021
23 July
3:00 pm - 4:00 pm
20 July
11:00 am - 12:00 pm
Structurally Stable Assemblies: Theory, Algorithms, and Applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. SONG Peng
Assistant Professor
Pillar of Information Systems Technology and Design
Singapore University of Technology and Design
Abstract:
An assembly of rigid parts is structurally stable if it preserves its form under external forces without collapsing. Structural stability is a necessary condition for using assemblies in practice, such as in furniture and architecture. However, designing structurally stable assemblies remains challenging for both novice and expert users, since a slight variation in the geometry of an individual part may affect the whole assembly’s structural stability. In this talk, I will introduce our attempts over the past years to advance the theory and algorithms for the computational design and fabrication of structurally stable assemblies. The key technique is to analyze structural stability in the kinematic space by exploiting static-kinematic duality, and to ensure structural stability via geometry optimization using a two-stage approach (i.e., kinematic design followed by geometry realization). Our technique can handle assemblies that are structurally stable to different degrees, namely stable under a single external force, a set of external forces, or arbitrary external forces. The usefulness of these structurally stable assemblies has been demonstrated in applications such as personalized puzzles, interlocking furniture, and free-form discrete architecture.
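The abstract’s notion of preserving form under external forces reduces, for a single rigid part, to force and torque balance. As a minimal, hypothetical illustration of that stability condition (not the speaker’s kinematic-space method), the following sketch checks static equilibrium of one 2D part under point forces:

```python
# Static equilibrium check for one 2D rigid part: the part holds its form
# only if the applied point forces have zero net force and zero net torque.
# A toy illustration of the stability condition; all numbers are made up.

def in_equilibrium(forces, tol=1e-9):
    """forces: list of (fx, fy, px, py): force components and point of application."""
    net_fx = sum(f[0] for f in forces)
    net_fy = sum(f[1] for f in forces)
    # torque about the origin: tau = px * fy - py * fx
    net_tau = sum(px * fy - py * fx for fx, fy, px, py in forces)
    return abs(net_fx) < tol and abs(net_fy) < tol and abs(net_tau) < tol

# Gravity at the centroid balanced by a support reaction at the same point:
balanced = [(0.0, -10.0, 1.0, 0.5), (0.0, 10.0, 1.0, 0.5)]
# The same support moved sideways leaves a net torque, so the part would rotate:
tipping = [(0.0, -10.0, 1.0, 0.5), (0.0, 10.0, 2.0, 0.5)]
print(in_equilibrium(balanced), in_equilibrium(tipping))  # True False
```

A full assembly requires this balance to hold for every part simultaneously under unknown contact forces, which is what makes the design problem hard.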
Biography:
Peng Song is an Assistant Professor at the Pillar of Information Systems Technology and Design, Singapore University of Technology and Design (SUTD), where he directs the Computer Graphics Laboratory (CGL). Prior to joining SUTD in 2019, he was a research scientist at EPFL, Switzerland. He received his PhD from Nanyang Technological University, Singapore, in 2013, and his master’s and bachelor’s degrees from Harbin Institute of Technology, China, in 2010 and 2007, respectively. His research is in the area of computer graphics, with a focus on computational fabrication and geometry processing. He serves as a co-organizer of a weekly web series on Computational Fabrication, and as a program committee member of several leading conferences in computer graphics, including SIGGRAPH Asia and Pacific Graphics.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98242753532
Enquiries: Miss Karen Chan at Tel. 3943 8439
May 2021
12 May
2:00 pm - 3:00 pm
Towards Trustworthy Full-Stack AI
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Fang Chengfang
Abstract:
Because security was not a consideration in the early development of AI algorithms, most AI systems are not robust against adversarial manipulation.
In critical applications such as healthcare, autonomous driving, and malware detection, such security risks can be devastating, and they have therefore attracted numerous research efforts.
In this seminar, I will introduce some AI security and privacy research topics from an industry point of view, including risk analysis throughout the AI lifecycle and the defense pipeline, in the hope of giving the audience a more complete picture on top of academic research.
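The starting observation above, that small adversarial perturbations can flip a model’s output, can be seen even on a toy linear classifier. A minimal fast-gradient-sign-style sketch with made-up weights (a generic illustration, not an example from the talk):

```python
# Fast-gradient-sign style attack on a toy linear classifier sign(w.x + b).
# Stepping each input coordinate slightly against the sign of the score
# gradient is enough to flip the prediction; all numbers are made up.

def sgn(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def score(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, eps):
    # for a positively-classified x, decrease the score by stepping against sign(w)
    return [xi - eps * sgn(wi) for wi, xi in zip(w, x)]

w, b = [1.0, -2.0], 0.0
x = [0.5, 0.1]                  # classified positive (score 0.3)
x_adv = fgsm(w, x, eps=0.3)     # perturb each coordinate by at most 0.3
# positive before the attack, negative after: the prediction flips
print(score(w, x, b), score(w, x_adv, b))
```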
Biography:
Chengfang Fang obtained his Ph.D. degree from the National University of Singapore before joining Huawei in 2013. He has worked on security and privacy protection for more than 10 years in several areas, including machine learning, the Internet of Things, mobile devices, and biometrics. He has published over 20 research papers and obtained 15 patents in this domain. He is currently a principal researcher at the Trustworthiness Technology Lab of the Huawei Singapore Research Center.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92800336791
Enquiries: Miss Karen Chan at Tel. 3943 8439
10 May
2:00 pm - 3:00 pm
High Performance Fluid Simulation and its Applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Xiaopei Liu
Assistant Professor
School of Information Science and Technology
ShanghaiTech University
Abstract:
Efficient and accurate high-resolution fluid simulation in complex environments is desirable in many practical applications, e.g., the aerodynamic shape design of airplanes and cars, as well as the production of special effects in movies and games. However, it has long been a challenging problem and is still not well solved. In this talk, I will introduce our attempts over the past years to advance computational techniques for high-performance fluid simulation by developing statistical kinetic models with variational principles, in a single-phase flow scenario with strong turbulence and complex geometric objects. I will also introduce how the general idea can be extended to multiphase flow simulations to allow both large density ratios and high Reynolds numbers. To improve computational efficiency, I will further introduce our GPU optimization and machine learning techniques, designed as both low-level and high-level accelerations. Rendering and visualization of fluid flow data will also be briefly covered. Finally, I will show validations in real scenarios and demonstrations in different applications, such as aerodynamic simulations over aircraft, cars, and architectural structures for shape design, blood flow simulations inside coronary arteries for clinical diagnosis, and simulations of visual flow phenomena for movies and games, together with a new application: learning the control policy of a fish-like underwater robot with our fast simulator.
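The “statistical kinetic models” mentioned above are, in this line of work, typically lattice Boltzmann-type solvers that evolve particle distribution functions rather than discretizing the Navier-Stokes equations directly. A deliberately minimal sketch, a 1D two-velocity (D1Q2) lattice Boltzmann relaxation for pure diffusion, shows only the stream-and-collide structure; it is not the speaker’s solver:

```python
# Minimal D1Q2 lattice Boltzmann step for diffusion: two particle
# populations stream left/right, then relax toward local equilibrium.
# With relaxation time tau = 1 this reduces to an explicit diffusion stencil.

def lbm_diffusion(rho, steps, tau=1.0):
    f_r = [r / 2 for r in rho]  # right-moving population
    f_l = [r / 2 for r in rho]  # left-moving population
    for _ in range(steps):
        # collide: relax toward equilibrium feq = rho / 2 (mass-conserving)
        dens = [a + b for a, b in zip(f_r, f_l)]
        f_r = [f + (d / 2 - f) / tau for f, d in zip(f_r, dens)]
        f_l = [f + (d / 2 - f) / tau for f, d in zip(f_l, dens)]
        # stream: shift populations one cell (periodic boundary)
        f_r = [f_r[-1]] + f_r[:-1]
        f_l = f_l[1:] + [f_l[0]]
    return [a + b for a, b in zip(f_r, f_l)]

rho0 = [0.0] * 16
rho0[8] = 1.0                         # initial density spike
rho = lbm_diffusion(rho0, steps=10)
print(round(sum(rho), 9), max(rho))   # mass is conserved while the spike spreads
```

Real solvers use more velocities (e.g., D3Q19), collision models tuned for turbulence, and GPU-friendly memory layouts, but the per-cell stream/collide loop is the same pattern.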
Biography:
Dr. Xiaopei Liu is an assistant professor at the School of Information Science and Technology, ShanghaiTech University, affiliated with the Visual and Data Intelligence (VDI) Center. He obtained his PhD degree in computer science and engineering from The Chinese University of Hong Kong (CUHK), and worked as a postdoctoral research fellow at Nanyang Technological University (NTU) in Singapore, where he started multi-disciplinary research on fluid simulation and visualization, covering both classical and quantum fluids. Most of his publications appear in top journals and conferences spanning multiple disciplines, such as ACM TOG, ACM SIGGRAPH/SIGGRAPH Asia, IEEE TVCG, APS PRD, and AIP POF. Dr. Liu is now working on high-performance fluid simulation in complex environments, with applications to visual effects, computational design and fabrication, medical diagnosis, robot learning, as well as fundamental science. He also conducts research on simulation-based UAV design optimization and autonomous navigation.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93649176456
Enquiries: Miss Karen Chan at Tel. 3943 8439
06 May
10:30 am - 11:30 am
Dynamic Voltage Scaling: from Low Power to Security
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Qu Gang
Abstract:
Dynamic voltage scaling (DVS) is one of the most effective and widely used techniques for low-power design. It adjusts the system’s operating voltage and clock frequency based on the real-time application’s computation and deadline information in order to reduce power and energy consumption. In this talk, I will share our research results on DVS and the lessons I have learned in three different periods of my research career. First, in the late 1990s, as a graduate student, I formulated the problem of DVS for energy minimization and derived a series of optimal solutions under different system settings to guide the practice of DVS-enabled system design. Then, after becoming an assistant professor in 2000, we studied how to apply DVS to scenarios where the traditional execution-time-for-energy tradeoff does not exist. Finally, in the past five years, we developed DVS-based attacks to break the trusted execution environment in modern computing platforms. I will also show our work on enhancing system security with DVS through examples of device authentication and countermeasures to machine learning model inversion attacks. It is my hope that this talk can shed light on how to find a research topic and make your own contributions.
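The energy argument behind DVS is the CMOS dynamic-power relation P ≈ C·V²·f: a task’s energy is roughly C·V² per cycle regardless of speed, so finishing “just in time” at a lower voltage/frequency pair beats racing at full speed. A back-of-the-envelope sketch with made-up constants (not from the talk):

```python
# Dynamic voltage scaling arithmetic: dynamic power P = C * V^2 * f, so the
# energy of a W-cycle task is E = C * V^2 * W, independent of how fast it
# runs; lowering V (which also caps f) is where the savings come from.
# C and the voltage/frequency pairs below are illustrative made-up values.

C = 1e-9  # effective switched capacitance (farads), illustrative

def task_energy(cycles, volt):
    return C * volt ** 2 * cycles

def exec_time(cycles, freq_hz):
    return cycles / freq_hz

W = 2_000_000       # cycles of work
full = (1.0, 1e9)   # (V, f): full speed
half = (0.5, 5e8)   # halved voltage and frequency, still meets a 4 ms deadline

e_full, e_half = task_energy(W, full[0]), task_energy(W, half[0])
print(exec_time(W, half[1]) <= 4e-3)  # deadline still met at half speed
print(e_half / e_full)                # 0.25: a quarter of the energy
```

The scheduling problem the talk formulates is exactly this trade: pick the lowest (V, f) pair per task that still meets every deadline.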
Biography:
Gang Qu received his B.S. in mathematics from the University of Science and Technology of China (USTC) and Ph.D. in computer science from the University of California, Los Angeles (UCLA). He is currently a professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park, where he leads the Maryland Embedded Systems and Hardware Security Lab (MeshSec) and the Wireless Sensor Laboratory. His research activities are on trusted integrated circuit design, hardware security, energy efficient system design and wireless sensor networks. He has focused recently on applications in the Internet of Things, cyber-physical systems, and machine learning. He has published more than 250 conference papers and journal articles on these topics with several best paper awards. Dr. Qu is an enthusiastic teacher. He has taught and co-taught various security courses, including a popular MOOC on Hardware Security through Coursera. Dr. Qu has served 17 times as the general or program chair/co-chair for international conferences and workshops. He is currently on the editorial board of IEEE TCAD, TETC, ACM TODAES, JCST, Integration, and HSS. Dr. Qu is a fellow of IEEE.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96878058667
Enquiries: Miss Karen Chan at Tel. 3943 8439
March 2021
15 March
9:45 am - 10:45 am
Prioritizing Computation and Analyst Resources in Large-scale Data Analytics
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Ms. Kexin RONG
PhD student, Department of Computer Science
Stanford University
Abstract:
Data volumes are growing exponentially, fueled by an increased number of automated processes such as sensors and devices. Meanwhile, the computational power available for processing this data – as well as analysts’ ability to interpret it – remain limited. As a result, database systems must evolve to address these new bottlenecks in analytics. In my work, I ask: how can we adapt classic ideas from database query processing to modern compute- and analyst-limited data analytics?
In this talk, I will discuss the potential for this kind of systems development through the lens of several practical systems I have developed. By drawing insights from database query optimization, such as pushing workload- and domain-specific filtering, aggregation, and sampling into core analytics workflows, we can dramatically improve the efficiency of analytics at scale. I will illustrate these ideas by focusing on two systems — one designed to optimize visualizations for streaming infrastructure and application telemetry and one designed for high-volume seismic waveform analysis — both of which have been field-tested at scale. I will also discuss lessons from production deployments at companies including Datadog, Microsoft, Google and Facebook.
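One concrete instance of “pushing sampling into core analytics workflows” is to keep only a fixed-size uniform sample as records stream by, instead of buffering everything for later analysis. A standard reservoir-sampling sketch (a generic illustration, not code from the speaker’s systems):

```python
# Reservoir sampling: maintain a uniform random sample of k items from a
# stream of unknown length in O(k) memory -- the kind of operator one can
# push below expensive analytics instead of materializing the full stream.
import random

def reservoir_sample(stream, k, rng):
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)   # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

rng = random.Random(0)
events = range(1_000_000)           # stand-in for a telemetry stream
sample = reservoir_sample(events, 100, rng)
print(len(sample))                  # 100, regardless of stream length
```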
Biography:
Kexin Rong is a Ph.D. student in Computer Science at Stanford University, co-advised by Professor Peter Bailis and Professor Philip Levis. She designs and builds systems to enable data analytics at scale, supporting applications including scientific analysis, infrastructure monitoring, and analytical queries on big data clusters. Prior to Stanford, she received her bachelor’s degree in Computer Science from California Institute of Technology.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97794511231?pwd=Qjg2RlArcUNrbHBwUmxNSW4yTVIxZz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
12 March
9:45 am - 10:45 am
Toward a Deeper Understanding of Generative Adversarial Networks
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Farzan FARNIA
Postdoctoral Research Associate
Laboratory for Information and Decision Systems, MIT
Abstract:
While modern adversarial learning frameworks achieve state-of-the-art performance on benchmark image, sound, and text datasets, we still lack a solid understanding of their robustness, generalization, and convergence behavior. In this talk, we aim to bridge this gap between theory and practice using a principled analysis of these frameworks through the lens of optimal transport and information theory. We specifically focus on the Generative Adversarial Network (GAN) framework which represents a game between two machine players for learning the distribution of data. In the first half of the talk, we study equilibrium in GAN games for which we show the classical Nash equilibrium may not exist. We then introduce a new equilibrium notion for GAN problems, called proximal equilibrium, through which we develop a GAN training algorithm with improved stability. We provide several numerical results on large-scale datasets supporting our proposed training method for GANs. In the second half of the talk, we attempt to understand why GANs often fail in learning multi-modal distributions. We focus our study on the benchmark Gaussian mixture models and demonstrate the failures of standard GAN architectures under this simple class of multi-modal distributions. Leveraging optimal transport theory, we design a novel architecture for the GAN players which is tailored to mixtures of Gaussians. We theoretically and numerically show the significant gain achieved by our designed GAN architecture in learning multi-modal distributions. We conclude the talk by discussing some open research challenges in adversarial learning.
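For reference, the two-player game the talk analyzes is the standard GAN minimax objective between a generator G and a discriminator D:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The proximal equilibrium introduced in the talk is a relaxed solution concept for this game, used where a classical Nash equilibrium may not exist.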
Biography:
Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Professor Asu Ozdaglar and Professor Ali Jadbabaie. Prior to joining MIT, Farzan received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory advised by Professor David Tse. Farzan’s research interests include statistical learning theory, optimal transport theory, information theory, and convex optimization. He has been the recipient of the Stanford Graduate Fellowship (Sequoia Capital fellowship) from 2013-2016 and the Numerical Technology Founders Prize as the second top performer of Stanford’s electrical engineering PhD qualifying exams in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99476583146?pwd=QVdsaTJLYU1ab2c0ODV0WmN6SzN2Zz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
11 March
9:00 am - 10:00 am
Sensitive Data Analytics with Local Differential Privacy
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Mr. Tianhao WANG
PhD student, Department of Computer Science
Purdue University
Abstract:
When collecting sensitive information, local differential privacy (LDP) can relieve users’ privacy concerns, as it allows users to add noise to their private information before sending data to the server. LDP has been adopted by big companies such as Google and Apple for data collection and analytics. My research focuses on improving the ecosystem of LDP. In this talk, I will first share my research on the fundamental tools in LDP, namely the frequency oracles (FOs), which estimate the frequency of each private value held by users. We proposed a framework that unifies different FOs and optimizes them. Our optimized FOs improve the estimation accuracy of Google’s and Apple’s implementations by 50% and 90%, respectively, and serve as the state-of-the-art tools for handling more advanced tasks. In the second part of my talk, I will present our work on extending the functionality of LDP, namely, how to make a database system that satisfies LDP while still supporting a variety of analytical queries.
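A frequency oracle of the kind described above can be illustrated with generalized randomized response: each user reports their true value with probability p = e^ε/(e^ε + k − 1) and a uniformly random other value otherwise, and the server debiases the noisy counts. A toy sketch of this textbook mechanism (illustrative parameters; not the talk’s optimized FOs):

```python
# Generalized randomized response (GRR), a basic LDP frequency oracle:
# perturb each user's value locally, then debias the aggregated counts.
import math, random

def grr_report(value, k, eps, rng):
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.randrange(k - 1)          # uniform over the k-1 other values
    return other if other < value else other + 1

def grr_estimate(reports, k, eps, n):
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    # unbiased frequency estimate: (c/n - q) / (p - q)
    return [(c / n - q) / (p - q) for c in counts]

rng = random.Random(0)
n, k, eps = 100_000, 4, 2.0
true = [0] * (n // 2) + [1] * (n // 4) + [2] * (n // 4)  # freqs 0.5/0.25/0.25/0
reports = [grr_report(v, k, eps, rng) for v in true]
est = grr_estimate(reports, k, eps, n)
print([round(f, 2) for f in est])   # close to [0.5, 0.25, 0.25, 0.0]
```

The server never sees raw values, only the perturbed reports, yet recovers the population frequencies to within sampling noise.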
Biography:
Tianhao Wang is a Ph.D. candidate in the Department of Computer Science at Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from the Software School of Fudan University in 2015. His research area is security and privacy, with a focus on differential privacy and applied cryptography. He is a member of DPSyn, which has won several international differential privacy competitions. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94878534262?pwd=Z2pjcDUvQVlETzNoVWpQZHBQQktWUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
11 March
3:15 pm - 4:15 pm
Toward Reliable NLP Systems via Software Testing
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Pinjia HE
Postdoctoral researcher, Computer Science Department
ETH Zurich
Abstract:
NLP systems such as machine translation are increasingly used in our daily lives, so their reliability becomes critical; mistranslations by Google Translate, for example, can lead to misunderstanding, financial loss, and threats to personal safety and health. On the other hand, because of their complexity, such systems are difficult to get right, and because of their nature (large, complex neural networks), traditional reliability techniques are difficult to apply. In this talk, I will present my recent work that has spearheaded the testing of machine translation systems, toward building reliable NLP systems. In particular, I will describe three complementary approaches that collectively found 1,000+ diverse translation errors in the widely used Google Translate and Bing Microsoft Translator. I will also describe my work on LogPAI, an end-to-end log management framework powered by AI algorithms for traditional software reliability, and conclude the talk with my vision for making both traditional and intelligent software, such as NLP systems, more reliable.
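The abstract does not spell out the three approaches, but translators are commonly tested without reference translations via metamorphic relations: apply a small, meaning-preserving change to the input and flag translations that change in unexpected ways. A toy sketch with a hypothetical `translate` stub (a real test would call an actual MT API):

```python
# Metamorphic testing sketch for machine translation: if two inputs differ
# by one substituted word, their translations should differ only locally.
# `translate` is a hypothetical stub standing in for a real MT system.

def translate(sentence):
    # stand-in word-for-word dictionary; a real test would query an MT service
    lexicon = {"i": "ich", "like": "mag", "love": "liebe",
               "apples": "Äpfel", "pears": "Birnen"}
    return " ".join(lexicon.get(w, w) for w in sentence.lower().split())

def word_diff(a, b):
    """Number of differing word positions between two sentences."""
    wa, wb = a.split(), b.split()
    if len(wa) != len(wb):
        return max(len(wa), len(wb))
    return sum(x != y for x, y in zip(wa, wb))

def metamorphic_check(src, variant, max_changes=2):
    """A one-word source edit should not ripple through many target words;
    a large diff flags the pair as a suspicious translation."""
    return word_diff(translate(src), translate(variant)) <= max_changes

print(metamorphic_check("I like apples", "I like pears"))  # True: change is local
```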
Biography:
Pinjia HE has been a postdoctoral researcher in the Computer Science Department at ETH Zurich since receiving his PhD degree from The Chinese University of Hong Kong (CUHK) in 2018. He has research expertise in software engineering and artificial intelligence, and is particularly passionate about making both traditional and intelligent software reliable. His research on automated log analysis and machine translation testing has appeared in top computer science venues, such as ICSE, ESEC/FSE, ASE, and TDSC. The LogPAI project he leads has been starred 2,000+ times on GitHub, downloaded 30,000+ times by 380+ organizations, and won a Most Influential Paper (MIP) award at ISSRE. He also won a 2016 Excellent Teaching Assistantship at CUHK. He has served on the program committees of MET’21, DSML’21, ECOOP’20 Artifact, and ASE’19 Demo, and has reviewed for top journals and conferences (e.g., TSE, TOSEM, ICSE, KDD, and IJCAI). According to Google Scholar, he has an h-index of 14 and 1,200+ citations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98498351623?pwd=UHFFUU1QbExYTXAxTWxCMk9BbW9mUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
03 March
2:00 pm - 3:00 pm
Edge AI – A New Battlefield for Hardware Security Research
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. CHANG Chip Hong
Associate Professor
Nanyang Technological University (NTU), Singapore
Abstract:
The flourishing of the Internet of Things (IoT) has rekindled on-premise computing, allowing data to be analyzed closer to its source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers, and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at their core. This “model once, run optimized anywhere” paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attack and defense mechanisms. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, such input-based perturbations are neither robust nor stealthy on multi-view targets. To generate a good adversarial example that misclassifies a real-world target under varying viewing angles, lighting, and distances, a fair number of pristine samples of the target object are required, and the feasible perturbations are substantial and visually perceptible. Edge AI also makes it hard for existing adversarial example detectors to keep up, as they are designed around sophisticated offline analyses that assume the deep learning model runs on a general-purpose 32-bit floating-point CPU or GPU cluster. This talk will first present a new glitch injection attack on edge DNN accelerators capable of misclassifying a target under varying viewpoints. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. The second part of this talk will present a new hardware-oriented approach for in-situ detection of adversarial inputs feeding through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) core implemented on the edge.
With negligibly small hardware overhead, the glitch injection circuit and the trained shallow binary tree detector can be easily implemented alongside a deep learning model on edge AI accelerator hardware.
Biography:
Prof. Chip Hong Chang is an Associate Professor at Nanyang Technological University (NTU), Singapore. He held concurrent appointments at NTU as Assistant Chair of Alumni of the School of EEE from 2008 to 2014, Deputy Director of the Center for High Performance Embedded Systems from 2000 to 2011, and Program Director of the Center for Integrated Circuits and Systems from 2003 to 2009. He has co-edited five books, published 13 book chapters, more than 100 international journal papers (over 70 in IEEE journals), and more than 180 refereed international conference papers (mostly with IEEE), and has delivered over 40 colloquia and invited seminars. His current research interests include hardware security and trustable computing, low-power and fault-tolerant computing, residue number systems, and application-specific digital signal processing algorithms and architectures. Dr. Chang currently serves as Senior Area Editor of the IEEE Transactions on Information Forensics and Security (TIFS), and Associate Editor of the IEEE Transactions on Circuits and Systems-I (TCAS-I) and the IEEE Transactions on Very Large Scale Integration (TVLSI) Systems. He was an Associate Editor of IEEE TIFS and the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) from 2016 to 2019, IEEE Access from 2013 to 2019, IEEE TCAS-I from 2010 to 2013, Integration, the VLSI Journal from 2013 to 2015, the Springer Journal of Hardware and Systems Security from 2016 to 2020, and the Microelectronics Journal from 2014 to 2020. He has also guest-edited eight journal special issues, including for IEEE TCAS-I, the IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE TCAD, and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS).
He has held key appointments on the organizing and technical program committees of more than 60 international conferences (mostly IEEE), including as General Co-Chair of the 2018 IEEE Asia-Pacific Conference on Circuits and Systems and as inaugural Workshop Chair and Steering Committee member of the ACM CCS satellite workshop on Attacks and Solutions in Hardware Security. He is a 2018-2019 IEEE CASS Distinguished Lecturer and a Fellow of the IEEE and the IET.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93797957554?pwd=N2J0VjBmUFh6N0ZENVY0U1RvS0Zhdz09
Meeting ID: 937 9795 7554
Password: 607354
Enquiries: Miss Caroline TAI at Tel. 3943 8440
February 2021
02 February
2:00 pm - 3:00 pm
Design Exploration of DNN Accelerators using FPGA and Emerging Memory
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Guangyu SUN
Associate Professor
Center for Energy-efficient Computing and Applications (CECA)
Peking University
Abstract:
Deep neural networks (DNNs) have been successfully used in fields such as computer vision and natural language processing. To improve processing efficiency, various hardware accelerators have been proposed for DNN applications. In this talk, I will first review our work on design space exploration and design automation for DNN accelerators on FPGA platforms. Then, I will briefly introduce the potential and challenges of using emerging memory for energy-efficient DNN inference. After that, I will offer some advice for graduate study.
Biography:
Dr. Guangyu Sun is an associate professor at the Center for Energy-efficient Computing and Applications (CECA) at Peking University. He received his B.S. and M.S. degrees from Tsinghua University, Beijing, in 2003 and 2006, respectively, and his Ph.D. degree in Computer Science from the Pennsylvania State University in 2011. His research interests include computer architecture, acceleration systems, and design automation for modern applications. He has published 100+ journal and refereed conference papers in these areas. He is an associate editor of ACM TECS and ACM JETC.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95836460304?pwd=UkRwSldjNWdUWlNvNnN2TTlRZ1ZUdz09
Meeting ID: 958 3646 0304
Password: 964279
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
January 2021
29 January
2:00 pm - 3:00 pm
In-Memory Computing – An Algorithm–Architecture Co-design Approach towards the POS/w Era
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LI Jiang
Associate Professor
Department of computer science and engineering
Shanghai Jiao Tong University
Abstract:
The rapidly rising computing power of the past decade has supported the advance of Artificial Intelligence. Still, in the post-Moore era, AI chips built on traditional CMOS processes and von Neumann architectures face huge bottlenecks: the memory wall and the energy-efficiency wall. In-memory computing architectures based on emerging memristor technology have become a very competitive computing paradigm, delivering two orders of magnitude higher energy efficiency. The memristor process has apparent advantages in power consumption, multi-bit storage, and cost. However, it faces the challenges of low manufacturing scalability and process variation, which lead to instability of computation and a limited capability to accommodate large and complex neural networks. This talk will introduce an algorithm-architecture co-optimization approach to solving these challenges.
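The core trick in memristor-based in-memory computing is that a crossbar performs a matrix-vector multiply in the analog domain: programming conductances G and applying row voltages V yields column currents I = Gᵀ·V by Ohm’s and Kirchhoff’s laws. A small numeric sketch of the ideal, variation-free behavior, with made-up values:

```python
# Ideal memristor crossbar: cell conductance G[i][j] links input row i to
# output column j; applying row voltages V gives column currents
# I[j] = sum_i V[i] * G[i][j], i.e., a matrix-vector product "in memory".
# Values are illustrative; real devices add noise and process variation,
# which is what the talk's co-optimization has to compensate for.

def crossbar_mvm(G, V):
    rows, cols = len(G), len(G[0])
    return [sum(V[i] * G[i][j] for i in range(rows)) for j in range(cols)]

G = [[1.0, 0.5],     # conductances (e.g., in millisiemens), made-up values
     [2.0, 1.0],
     [0.0, 3.0]]
V = [0.1, 0.2, 0.3]  # input voltages encoding an activation vector

I_out = crossbar_mvm(G, V)
print(I_out)         # approximately [0.5, 1.15]
```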
Biography:
Li Jiang is an associate professor in the Department of Computer Science and Engineering, Shanghai Jiao Tong University. He received his B.S. degree from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, in 2007, and his MPhil and Ph.D. degrees from the Department of Computer Science and Engineering, The Chinese University of Hong Kong, in 2010 and 2013, respectively. He has published more than 50 peer-reviewed papers in top-tier computer architecture and EDA conferences and journals, including a best paper nomination at ICCAD. According to the IEEE Digital Library, five of his papers ranked in the top 5% of citations among all papers at their respective conferences. His achievements have been highly recognized and cited by academic and industry experts, including Academician Zheng Nanning, Academician William Dally, Prof. Chenming Hu, and many ACM/IEEE Fellows. Some of these achievements have been introduced into the IEEE P1838 standard, and a number of the technologies have been put into commercial use in cooperation with TSMC, Huawei, and Alibaba. He received the best Ph.D. dissertation award at ATS 2014 and was a finalist for the TTTC E. J. McCluskey Doctoral Thesis Award. He has received the ACM Shanghai Rising Star award and the CCF VLSI early career award, and was named a 2020 CCF Distinguished Speaker. He serves as a co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC, and CTC. He is an associate editor of IET Computers & Digital Techniques and Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95897084094?pwd=blZlanFOczF4aWFvM2RuTDVKWFlZZz09
Meeting ID: 958 9708 4094
Password: 081783
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
December 2020
14 December
2:00 pm - 3:00 pm
Speed up DNN Model Training: An Industrial Perspective
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Mr. Mike Hong
CTO of BirenTech
Abstract:
Training large DNN models is compute-intensive, often taking days, weeks, or even months to complete. Therefore, how to speed it up has attracted much attention from both academia and industry. In this talk, we will cover a number of accelerated DNN training techniques from an industrial perspective, including various optimizers, large-batch training, distributed computation, and all-reduce network topologies.
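The all-reduce topology mentioned above can be made concrete with the classic ring algorithm: each of N workers sends and receives vector chunks around a ring, so gradients are summed in 2(N−1) steps with per-worker bandwidth independent of N. A self-contained simulation (a generic sketch of the algorithm, not BirenTech code):

```python
# Ring all-reduce simulation: N workers, each holding a gradient vector,
# end up with the elementwise sum via a reduce-scatter phase followed by
# an all-gather phase: 2*(N-1) chunk transfers per worker in total.

def ring_allreduce(vectors):
    n = len(vectors)
    length = len(vectors[0])
    assert length % n == 0, "vector length must split into n chunks"
    size = length // n
    bufs = [list(v) for v in vectors]

    def chunk(w, c):
        return bufs[w][c * size:(c + 1) * size]

    # reduce-scatter: after n-1 steps worker w owns the summed chunk (w+1) % n
    for step in range(n - 1):
        sends = [(w, (w - step) % n, list(chunk(w, (w - step) % n)))
                 for w in range(n)]
        for w, c, data in sends:       # all workers exchange "simultaneously"
            nxt = (w + 1) % n
            for i, val in enumerate(data):
                bufs[nxt][c * size + i] += val

    # all-gather: circulate the completed chunks so every worker has them all
    for step in range(n - 1):
        sends = [(w, (w + 1 - step) % n, list(chunk(w, (w + 1 - step) % n)))
                 for w in range(n)]
        for w, c, data in sends:
            nxt = (w + 1) % n
            bufs[nxt][c * size:(c + 1) * size] = data
    return bufs

grads = [[1, 2, 3, 4, 5, 6],
         [10, 20, 30, 40, 50, 60],
         [100, 200, 300, 400, 500, 600]]
out = ring_allreduce(grads)
print(out[0] == out[1] == out[2] == [111, 222, 333, 444, 555, 666])  # True
```

This is the same communication pattern libraries such as NCCL use to synchronize gradients in data-parallel training.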
Biography:
Mike Hong has been working on GPU architecture design for 26 years and is currently serving as the CTO of BirenTech, an intelligent chip design company that has raised more than US$200 million in Series A financing since its founding in 2019. Before joining Biren, Mike was Chief Architect at S3, Principal Architect for the Tesla architecture at NVIDIA, and GPU team leader and Chief Architect at HiSilicon. Mike holds more than 50 US patents, including the texture compression patent that is the industry standard for all PCs, Macs, and game consoles.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92074008389?pwd=OE1EbjBzWk9oejh5eUlZQ1FEc0lOUT09
Meeting ID: 920 7400 8389
Password: 782536
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
03 December
11:00 am - 12:00 pm
Artificial Intelligence for Radiotherapy in the Era of Precision Medicine
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. CAI Jing
Professor of Department of Health Technology and Informatics
The Hong Kong Polytechnic University (PolyU)
Abstract:
Artificial Intelligence (AI) is evolving rapidly and promises to transform the world in an unprecedented way. The tremendous possibilities that AI can bring to radiation oncology have triggered a flood of activity in the field. In particular, with the support of big data and accelerated computation, deep learning is taking off with tremendous algorithmic innovations and powerful neural network models. AI technology holds great promise for improving radiation therapy from treatment planning to treatment assessment. It can aid radiation oncologists in reaching unbiased consensus treatment plans, help train junior radiation oncologists, keep practitioners up to date, reduce professional costs, and improve quality assurance in clinical trials and patient care. It can significantly reduce the time and effort physicians need to contour, plan, and review. Given the promising learning tools and massive computational resources that are becoming readily available, AI will soon dramatically change the landscape of radiation oncology research and practice. This presentation will give an overview of the recent advances in AI for radiation oncology, followed by a set of examples of AI applications in various aspects of radiation therapy, including, but not limited to, organ segmentation, target volume delineation, treatment planning, quality assurance, response assessment, and outcome prediction. For example, I will present a new approach to deriving lung functional images for function-guided radiation therapy, which uses a deep convolutional neural network to learn and exploit the underlying functional information in the CT image and generate a functional perfusion image. I will demonstrate a novel method for pseudo-CT generation from multi-parametric MR images using a multi-channel multi-path generative adversarial network (MCMP-GAN) for MRI-based radiotherapy.
I will also show the promising capability of MRI-based radiomics features for pre-treatment identification of adaptive radiation therapy eligibility in nasopharyngeal carcinoma (NPC) patients.
Biography:
Prof. CAI Jing earned his PhD in Engineering Physics in 2006 and completed his clinical residency in Medical Physics in 2009, both at the University of Virginia, USA. He entered academia as an Assistant Professor at Duke University in 2009 and was promoted to Associate Professor in 2014. He joined The Hong Kong Polytechnic University in 2017, where he is currently a full Professor and the founding Programme Leader of the Medical Physics MSc Programme in the Department of Health Technology and Informatics. He has been board certified in Therapeutic Radiological Physics by the American Board of Radiology (ABR) since 2010. He is the PI/Co-PI of more than 20 external research grants, including 5 NIH, 3 GRF, 3 HMRF, and 1 ITSP grants, with total funding of more than HK$40 million. He has published over 100 journal papers and 200 conference papers/abstracts, and has supervised over 60 trainees. He serves on the editorial boards of several prestigious journals in the fields of medical physics and radiation oncology. He was elected a Fellow of the American Association of Physicists in Medicine (AAPM) in 2018.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92068646609?pwd=R0ZRR1VXSmVQOUkyQnZrd0t4dW0wUT09
Meeting ID: 920-6864-6609
Password: 076760
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
October 2020
30 October
2:00 pm - 3:00 pm
Closing the Loop of Human and Robot
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LU Cewu
Research Professor at Shanghai Jiao Tong University (SJTU)
Abstract:
This talk is about closing the loop between human and robot. We present our recent research on human activity understanding and robot learning. On the human side, we present our recent work on the Human Activity Knowledge Engine (HAKE), which largely improves human activity understanding; improvements to AlphaPose, a well-known pose estimator, are also introduced. On the robot side, we discuss our understanding of robot tasks and a new insight, the "primitive model". Building on this, we propose GraspNet, the first dynamic grasping benchmark dataset, together with a novel end-to-end deep learning approach to grasping. A 3D point-level semantic embedding method for object manipulation will also be discussed. Finally, we will discuss how to further close the loop between human and robot.
Biography:
Cewu Lu is a Research Professor at Shanghai Jiao Tong University (SJTU). Before joining SJTU, he was a research fellow at Stanford University working with Prof. Fei-Fei Li and Prof. Leonidas J. Guibas. He received his PhD degree from The Chinese University of Hong Kong, supervised by Prof. Jiaya Jia. He was selected for the Young 1000 Talents Plan, named to the MIT TR35 ("MIT Technology Review, 35 Innovators Under 35", China), and received the Qiushi Outstanding Young Scholar award (求是杰出青年学者), being the only AI awardee in the past three years. Prof. Lu serves as an Area Chair for CVPR 2020 and as a reviewer for Nature. He has published about 100 papers in top AI journals and conferences, including 9 ESI highly cited papers. His research interests fall mainly in computer vision and robot learning.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96062514495?pwd=aEp4aEl5UVhjOW1XemdpWVNZTVZOZz09
Meeting ID: 960-6251-4495
Password: 797809
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
22 October
3:00 pm - 4:00 pm
Detecting Vulnerabilities using Patch-Enhanced Vulnerability Signatures
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. HUO Wei
Professor, Institute of Information Engineering (IIE)
Chinese Academy of Sciences (CAS)
Abstract:
Recurring vulnerabilities widely exist and remain undetected in real-world systems; they often result from a reused code base or shared code logic. However, the potentially small differences between vulnerable functions and their patched versions, as well as the possibly large differences between vulnerable functions and the target functions to be checked, pose challenges for current solutions. I shall introduce a novel approach to detecting recurring vulnerabilities with low false positives and low false negatives. The evaluation on ten open-source systems has shown that the proposed approach significantly outperforms state-of-the-art clone-based and function-matching-based recurring vulnerability detection approaches, with 23 CVE identifiers assigned.
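As a toy illustration of the patch-enhanced idea (this is a hedged sketch, not the speaker's actual system; `read_len`, `memcpy`, and `MAX` below are hypothetical code fragments): one can derive a signature from the diff between a vulnerable function and its patched version, then flag targets that contain the vulnerable-specific lines but lack the fix-specific ones.

```python
import difflib

def signature(vuln_src, patched_src):
    # Toy patch-enhanced signature: lines the patch deletes are
    # vulnerable-specific; lines it adds are fix-specific.
    diff = list(difflib.ndiff(vuln_src.splitlines(), patched_src.splitlines()))
    deleted = {ln[2:].strip() for ln in diff if ln.startswith("- ")}
    added = {ln[2:].strip() for ln in diff if ln.startswith("+ ")}
    return deleted, added

def looks_vulnerable(target_src, sig):
    # Match: contains every vulnerable-specific line and none of the fixes.
    deleted, added = sig
    lines = {ln.strip() for ln in target_src.splitlines()}
    return deleted <= lines and not (added & lines)

vuln    = "n = read_len()\nmemcpy(dst, src, n)\n"
patched = "n = read_len()\nmemcpy(dst, src, min(n, MAX))\n"
sig = signature(vuln, patched)

target = "m = frob()\nn = read_len()\nmemcpy(dst, src, n)\n"  # reused logic
print(looks_vulnerable(target, sig), looks_vulnerable(patched, sig))  # True False
```

Real systems must of course normalize identifiers and tolerate larger syntactic drift; this line-set matching only conveys the two-sided (vulnerable vs. patched) nature of the signature.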
Biography:
Wei HUO is a full professor at the Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS). He focuses on software security, vulnerability detection, and program analysis, and leads the VARAS (Vulnerability Analysis and Risk Assessment System) group. He has published multiple papers at top venues in computer security and software engineering, including ASE, ICSE, and USENIX Security. Besides, his group has uncovered hundreds of 0-day vulnerabilities in popular software and firmware, with 100+ CVEs assigned.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97738806643?pwd=dTIzcWhUR2pRWjBWaG9tZkdkRS9vUT09
Meeting ID: 977-3880-6643
Password: 131738
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
15 October
9:30 am - 10:30 am
Computational Fabrication and Assembly: from Optimization and Search to Learning
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. FU Chi Wing Philip
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Computational fabrication is an emerging research topic in computer graphics, beginning roughly a decade ago with the need to develop computational solutions for efficient 3D printing and later for 3D fabrication and object assembly at large. In this talk, I will introduce a series of research works in this area, with a particular focus on the following two recent ones:
(i) Computational LEGO Technic assembly, in which we model the component bricks, their connection mechanisms, and the input user sketch for computation, and then further develop an optimization model with necessary constraints and our layout modification operator to efficiently search for an optimum LEGO Technic assembly. Our results not only match the input sketch with coherently-connected LEGO Technic bricks but also respect the intended symmetry and structural integrity of the designs.
(ii) TilinGNN, the first neural optimization approach to solve a classical instance of the tiling problem, in which we formulate and train a neural network model to maximize the tiling coverage on target shapes, while avoiding overlaps and holes between the tiles in a self-supervised manner. In short, we model the tiling problem as a discrete problem, in which the network is trained to predict the goodness of each candidate tile placement, allowing us to iteratively select tile placements and assemble a tiling on the target shape.
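The iterative select-and-assemble loop described above can be conveyed with a hedged toy sketch: a hand-written coverage heuristic stands in for the trained network's "goodness" prediction, and dominoes on a small grid stand in for the candidate tiles.

```python
# Target shape: a 2x3 rectangle of unit cells; tiles: dominoes.
target = {(r, c) for r in range(2) for c in range(3)}

candidates = []                      # all domino placements inside the shape
for (r, c) in sorted(target):
    for tile in ({(r, c), (r, c + 1)}, {(r, c), (r + 1, c)}):
        if tile <= target:
            candidates.append(tile)

def goodness(tile, uncovered):
    # Stand-in for the learned predictor: score by newly covered cells.
    return len(tile & uncovered)

uncovered, chosen = set(target), []
while uncovered:
    best = max(candidates, key=lambda t: goodness(t, uncovered))
    if goodness(best, uncovered) < len(best):   # any placement would overlap
        break
    chosen.append(best)                         # commit the placement
    uncovered -= best

print(len(chosen), len(uncovered))   # 3 0  (three dominoes tile the shape)
```

The actual TilinGNN replaces the heuristic with a trained graph neural network over candidate-tile relationships, which is what lets it avoid the overlaps and holes that greedy scoring alone can produce on harder shapes.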
Finally, I will also present some results from my other research works in the areas of point cloud processing, 3D vision, and augmented reality.
Biography:
Chi-Wing Fu is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). His research interests are in computer graphics, vision, and human-computer interaction, or more specifically in computational fabrication, 3D computer vision, and user interaction. Chi-Wing obtained his B.Sc. and M.Phil. from CUHK and his Ph.D. from Indiana University, Bloomington. Before re-joining CUHK in early 2016, he was an associate professor with tenure in the School of Computer Science and Engineering at Nanyang Technological University, Singapore.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99943410200
Meeting ID: 999 4341 0200
Password: 492333
Enquiries: Miss Caroline Tai at Tel. 3943 8440
14 October
2:00 pm - 3:00 pm
Bioinformatics: Turning experimental data into biomedical hypotheses, knowledge and applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. YIP Yuk Lap Kevin
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Contemporary biomedical research relies heavily on high-throughput technologies that examine many objects, their individual activities, or their mutual interactions in a single experiment. The data produced are usually high-dimensional, noisy, and biased. An important aim of bioinformatics is to extract useful information from such data for developing both conceptual understandings of biomedical phenomena and downstream applications. This requires the integration of knowledge from multiple disciplines, such as data properties from biotechnology, molecular and cellular mechanisms from biology, evolutionary principles from genetics, and patient-, disease-, and drug-related information from medicine. Only with these inputs can the data analysis goals be meaningfully formulated as computational problems and properly solved. Computational findings also need to be subsequently validated and functionally tested by additional experiments, possibly iterating back and forth between data production and data analysis many times before a conclusion can be drawn. In this seminar, I will use my own research to explain how bioinformatics can help create new biomedical hypotheses, knowledge, and applications, with a focus on recent works that use machine learning methods to study basic molecular mechanisms and specific human diseases.
Biography:
Kevin Yip is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). He obtained his bachelor's degree in computer engineering and master's degree in computer science from The University of Hong Kong, and his PhD degree in computer science from Yale University. Before joining CUHK, he worked as a researcher at the HKU-Pasteur Institute, the Yale Center for Medical Informatics, and the Department of Molecular Biophysics and Biochemistry at Yale University. Since his master's study, Dr. Yip has been conducting research in bioinformatics, with special interests in modeling gene regulatory mechanisms and studying how their perturbations are related to human diseases. Dr. Yip has participated in several international research consortia, including the Encyclopedia of DNA Elements (ENCODE), model organism ENCODE (modENCODE), and the International Human Epigenomics Consortium (IHEC). Locally, Dr. Yip has been collaborating with scientists and clinicians in the quest to understand the mechanisms underlying different human diseases, such as hepatocellular carcinoma, nasopharyngeal carcinoma, type II diabetes, and Hirschsprung's disease. Dr. Yip received the title of Outstanding Fellow from the Faculty of Engineering and the Young Researcher Award from CUHK in 2019.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98458448644
Meeting ID: 984 5844 8644
Password: 945709
Enquiries: Miss Caroline Tai at Tel. 3943 8440
14 October
3:30 pm - 4:30 pm
Dependable Storage Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LEE Pak Ching Patrick
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Making large-scale storage systems dependable against failures is critical yet non-trivial in the face of the ever-increasing amount of data. In this talk, I will present my work on dependable storage systems, with the primary goal of improving the fault tolerance, recovery, security, and performance of different types of storage architectures. To make a case, I will present new theoretical and applied findings on erasure coding, a low-cost redundancy technique for fault-tolerant storage. I will present general techniques and code constructions for accelerating the repair of storage failures, and further propose a unified framework for readily deploying a variety of erasure coding solutions in state-of-the-art distributed storage systems. I will also introduce my other work on the dependability of applied distributed systems, in the areas of encrypted deduplication, key-value stores, network measurement, and stream processing. Finally, I will highlight the industrial impact of our work beyond publications.
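To give a flavor of erasure coding as a low-cost redundancy technique (a minimal single-parity sketch, not one of the speaker's code constructions): with k data blocks plus one XOR parity block, any single lost block can be rebuilt from the surviving k.

```python
from functools import reduce

def xor(a, b):
    # Bytewise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    # Stripe = k data blocks plus one XOR parity block.
    return blocks + [reduce(xor, blocks)]

def repair(stripe, lost):
    # Any single lost block equals the XOR of all surviving blocks.
    return reduce(xor, (b for i, b in enumerate(stripe) if i != lost))

stripe = encode([b"disk", b"fail", b"okay"])
print(repair(stripe, 1))                # b'fail' -- lost data block rebuilt
print(repair(stripe, 3) == stripe[3])   # True  -- parity itself rebuildable
```

Production systems use codes such as Reed-Solomon to tolerate multiple failures, and the repair-acceleration work mentioned above is about reducing how much surviving data must be read and transferred during exactly this kind of reconstruction.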
Biography:
Patrick P. C. Lee is an Associate Professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong. His research interests are in various applied/systems topics on improving the dependability of large-scale software systems, including storage systems, distributed systems and networks, and cloud computing. He now serves as an Associate Editor of IEEE/ACM Transactions on Networking and ACM Transactions on Storage. He served as a TPC co-chair of APSys 2020 and as a TPC member of several major systems and networking conferences. He received best paper awards at CoNEXT 2008, TrustCom 2011, and SRDS 2020. For details, please refer to his personal homepage: http://www.cse.cuhk.edu.hk/~pclee.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96195753407
Meeting ID: 961 9575 3407
Password: 892391
Enquiries: Miss Caroline Tai at Tel. 3943 8440
13 October
2:00 pm - 3:00 pm
From Combating Errors to Embracing Errors in Computing Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Xu Qiang
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Faults are inevitable in any computing system; they may occur due to environmental disturbance, circuit aging, or malicious attacks. On the one hand, designers try all means to prevent, contain, and control faults to achieve error-free computation, especially for safety-critical applications. On the other hand, many applications in the big data era (e.g., search engines and recommendation systems) that require lots of computing power are often error-tolerant. In this talk, we present techniques developed by our group over the past several years, including error-tolerant solutions that combat all sorts of hardware faults and approximate computing techniques that embrace errors in computing systems for energy savings.
Biography:
Qiang Xu is an associate professor of Computer Science and Engineering at The Chinese University of Hong Kong. He leads the CUhk REliable laboratory (CURE Lab.), and his research interests include electronic design automation, fault-tolerant computing, and trusted computing. Dr. Xu has published 150+ papers in refereed journals and conference proceedings, and has received two Best Paper Awards and five Best Paper Award Nominations. He currently serves as an associate editor for IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and for Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96930968459
Meeting ID: 969 3096 8459
Password: 043377
Enquiries: Miss Caroline Tai at Tel. 3943 8440
12 October
9:30 am - 10:30 am
Memory/Storage Optimization for Small/Big Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Zili SHAO
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Memory/storage optimization is one of the most critical issues in computer systems. In this talk, I will first summarize our work in optimizing memory/storage systems for embedded and big data applications. Then, I will present an approach by deeply integrating device and application to optimize flash-based key-value caching – one of the most important building blocks in modern web infrastructures and high-performance data-intensive applications. I will also introduce our recent work in optimizing unique address checking for IoT blockchains.
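As a minimal illustration of the key-value caching building block named above (a generic in-memory LRU sketch, not the speaker's flash-integrated design, which also manages device-level concerns such as slab placement and garbage collection):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU key-value cache: evicts the least recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1); c.put("b", 2)
c.get("a")                       # touch "a" so "b" becomes the LRU entry
c.put("c", 3)                    # capacity exceeded: evicts "b"
print(c.get("b"), c.get("a"))    # None 1
```

Deeply integrating device and application, as the talk proposes, means decisions like which entries share a flash erase block are made with knowledge of this cache-level access pattern rather than hidden behind the block interface.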
Biography:
Zili Shao is an Associate Professor at Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree from The University of Texas at Dallas in 2005. Before joining CUHK in 2018, he was with Department of Computing, The Hong Kong Polytechnic University, where he started in 2005. His current research interests include embedded software and systems, storage systems and related industrial applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95131164721
Meeting ID: 951 3116 4721
Password: 793297
Enquiries: Miss Caroline Tai at Tel. 3943 8440
12 October
11:00 am - 12:00 pm
VLSI Mask Optimization: From Shallow To Deep Learning
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. YU Bei
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
The continued scaling of integrated circuit technologies, along with the increased design complexity, has exacerbated the challenges associated with manufacturability and yield. In today’s semiconductor manufacturing, lithography plays a fundamental role in printing design patterns on silicon. However, the growing complexity and variation of the manufacturing process have tremendously increased the lithography modeling and simulation cost. Both the role and the cost of mask optimization – now indispensable in the design process – have increased. Parallel to these developments are the recent advancements in machine learning which have provided a far-reaching data-driven perspective for problem solving. In this talk, we shed light on the recent deep learning based approaches that have provided a new lens to examine traditional mask optimization challenges. We present hotspot detection techniques, leveraging advanced learning paradigms, which have demonstrated unprecedented efficiency. Moreover, we demonstrate the role deep learning can play in optical proximity correction (OPC) by presenting its successful application in our full-stack mask optimization framework.
Biography:
Bei Yu is currently an Assistant Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree from Electrical and Computer Engineering, University of Texas at Austin, USA in 2014, and his M.S. degree in Computer Science from Tsinghua University, China in 2010. His current research interests include machine learning and combinatorial algorithms with applications in VLSI computer-aided design (CAD). He has served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), on the program committees of DAC, ICCAD, DATE, ASPDAC, and ISPD, and on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES), Integration, the VLSI Journal, and IET Cyber-Physical Systems: Theory & Applications. He is the Editor of the IEEE TCCPS Newsletter.
Dr. Yu received six Best Paper Awards from International Conference on Tools with Artificial Intelligence (ICTAI) 2019, Integration, the VLSI Journal in 2018, International Symposium on Physical Design (ISPD) 2017, SPIE Advanced Lithography Conference 2016, International Conference on Computer-Aided Design (ICCAD) 2013, Asia and South Pacific Design Automation Conference (ASPDAC) 2012, four other Best Paper Award Nominations (ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011), six ICCAD/ISPD contest awards, IBM Ph.D. Scholarship in 2012, SPIE Education Scholarship in 2013, and EDAA Outstanding Dissertation Award in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96114730370
Meeting ID: 961 1473 0370
Password: 984602
Enquiries: Miss Caroline Tai at Tel. 3943 8440
09 October
4:00 pm - 5:00 pm
Local Versus Global Security in Computation
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Andrej BOGDANOV
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Secret sharing schemes are at the heart of cryptographic protocol design. In this talk I will present my recent discoveries about the informational and computational complexity of secret sharing and their relevance to secure multiparty computation:
- The share size in the seminal threshold secret sharing scheme of Shamir and Blakley from the 1970s is essentially optimal.
- Secret reconstruction can sometimes be carried out in the computational model of bounded-depth circuits, without resorting to modular linear algebra.
- Private circuits that are secure against local information leakage are also secure against limited but natural forms of global leakage.
I will also touch upon some loosely related results in cryptography, pseudorandomness, and coding theory.
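The threshold scheme of Shamir mentioned above can be sketched in a few lines. Note that each share is as large as the secret itself, which is the sense in which the share size result above says the scheme is essentially optimal. (A minimal educational sketch over a fixed prime field, not hardened code.)

```python
import random

P = 2**61 - 1          # a Mersenne prime, large enough for small secrets

def make_shares(secret, t, n):
    # Hide `secret` as f(0) of a random degree-(t-1) polynomial over GF(P);
    # the shares are the points (i, f(i)) for i = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```

Any t shares determine the polynomial and hence f(0); any t-1 shares reveal nothing, since every candidate secret is consistent with some polynomial through them.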
Biography:
Andrej Bogdanov is associate professor of Computer Science and Engineering and director of the Institute of Theoretical Computer Science and Communications at the Chinese University of Hong Kong. His research interests are in cryptography, pseudorandomness, and sublinear-time algorithms.
Andrej obtained his B.S. and M. Eng. degrees from MIT in 2001 and his Ph.D. from UC Berkeley in 2005. Before joining CUHK in 2008 he was a postdoctoral associate at the Institute for Advanced Study in Princeton, at DIMACS (Rutgers University), and at ITCS (Tsinghua University). He was a visiting professor at the Tokyo Institute of Technology in 2013 and a long-term program participant at the UC Berkeley Simons Institute for the Theory of Computing in 2017.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94008322629
Meeting ID: 940 0832 2629
Password: 524278
Enquiries: Miss Caroline Tai at Tel. 3943 8440
08 October
3:00 pm - 4:00 pm
A Compiler Infrastructure for Embedded Multicore SoCs
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Sheng Weihua
Chief Expert
Software Tools and Engineering at Huawei
Abstract:
Compilers play a pivotal role in the software development process for microprocessors by automatically translating high-level programming languages into machine-specific executable code. For a long time, while processors were scalar, compilers were regarded as a black box by the software community, thanks to their successful encapsulation of machine-specific details. Over a decade ago, major processor manufacturers began to combine multiple (simple) cores into a single chip, namely multicores, to retain scaling according to Moore's law. The embedded computing industry followed suit, introducing multicores years later amid aggressive marketing campaigns that highlighted the number of processors for product differentiation in consumer electronics. While the transition from scalar (uni)processors to multicores is an evolutionary step in terms of hardware, it has given rise to fundamental changes in software development. The performance "free lunch", having ridden on the growth of faster processors, is over. Compiler technology has not developed and scaled for multicore architectures, which contributes considerably to the software crisis in the multicore age. This talk addresses the challenges of developing compilers for multicore SoC (System-on-Chip) software development, focusing on embedded systems such as wireless terminals and modems. It also traces a trajectory from research toward a commercial prototype, shedding light on some lessons on how to do research effectively.
Biography:
Dr. Sheng has early career roots in the electronic design automation industry (CoWare and Synopsys). He spearheaded the development of multicore programming tools at RWTH Aachen University from 2007 to 2013, which later became the foundation of Silexica. He has a proven record of successful consultation and collaboration with top-tier technology companies on multicore design tools. Dr. Sheng is a co-founder of Silexica Software Solutions GmbH in Germany, where he served as CTO during 2014-2016. Since 2017, as VP and GM of APAC, he was responsible for all aspects of Silexica's sales and operations across the APAC region. In 2019 he joined Huawei Technologies. Dr. Sheng received his BEng from Tsinghua University and his MSc/PhD from RWTH Aachen University in Germany.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93855822245
Meeting ID: 938-5582-2245
Password: 429533
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
07 October
3:00 pm - 4:00 pm
Robust Deep Neural Network Design under Fault Injection Attack
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Xu Qiang
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Deep neural networks (DNNs) have gained mainstream adoption in the past several years, and many artificial intelligence (AI) applications employ DNNs for safety- and security-critical tasks, e.g., biometric authentication and autonomous driving. In this talk, we first briefly discuss the security issues in deep learning. Then, we focus on fault injection attacks and introduce some of our recent works in this domain.
Biography:
Qiang Xu leads the CUhk REliable laboratory (CURE Lab.) and his research interests include fault-tolerant computing and trusted computing. He has published 150+ papers in these fields and received a number of best paper awards/nominations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93862944206
Meeting ID: 938-6294-4206
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Seminar Series 2020/2021
Structurally Stable Assemblies: Theory, Algorithms, and Applications
Location
Speaker:
Dr. SONG Peng
Assistant Professor
Pillar of Information Systems Technology and Design
Singapore University of Technology and Design
Join Zoom Meeting:
https://cuhk.zoom.us/j/98242753532
Enquiries: Miss Karen Chan at Tel. 3943 8439
Towards Trustworthy Full-Stack AI
Location
Speaker:
Dr. Fang Chengfang
Abstract:
Due to the lack of security considerations in the early development of AI algorithms, most AI systems are not robust against adversarial manipulation. In critical applications such as healthcare, autonomous driving, and malware detection, the security risks can be devastating and have thus attracted numerous research efforts. In this seminar, I will introduce some AI security and privacy research topics from an industry point of view, including risk analysis throughout the AI lifecycle and the pipeline of defense, in the hope of giving the audience a more complete picture on top of academic research.
Biography:
Chengfang Fang obtained his Ph.D. degree from National University of Singapore before joining Huawei in 2013. He has been working on security and privacy protection in several areas including machine learning, internet of things, mobile device and biometrics for more than 10 years. He has published over 20 research papers and obtained 15 patents in this domain. He is currently a principal researcher of Trustworthiness Technology Lab in Huawei Singapore Research Center.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92800336791
Enquiries: Miss Karen Chan at Tel. 3943 8439
High Performance Fluid Simulation and its Applications
Location
Speaker:
Dr. Xiaopei Liu
Assistant Professor
School of Information Science and Technology
ShanghaiTech University
Abstract:
Efficient and accurate high-resolution fluid simulation in complex environments is desirable in many practical applications, e.g., the aerodynamic shape design of airplanes and cars, as well as the production of special effects in movies and games. However, this has been a challenging problem for a very long time and is still not well solved. In this talk, I will introduce our attempts over the past years to advance computational techniques for high-performance fluid simulation by developing statistical kinetic models with variational principles, in a single-phase flow scenario where strong turbulence and complex geometric objects exist. I will also introduce how the general idea can be extended to multiphase flow simulations to allow both large density ratios and high Reynolds numbers. To improve computational efficiency, I will further introduce our GPU optimization and machine learning techniques, which are designed as both low-level and high-level accelerations. Rendering and visualization of fluid flow data will also be briefly covered. Finally, I will show validations in real scenarios and demonstrations in different applications, such as aerodynamic simulations over aircraft, cars, and architectural structures for shape design, blood flow simulations inside coronary arteries for clinical diagnosis, and the simulation of visual flow phenomena for movies and games, together with a new application: learning the control policy of a fish-like underwater robot with our fast simulator.
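To hint at what a statistical kinetic model looks like in code (a textbook one-dimensional lattice-kinetic diffusion sketch, not the speaker's solver): particle populations stream between lattice sites and relax toward a local equilibrium, conserving mass by construction.

```python
# D1Q2 lattice scheme: right- and left-moving populations fp, fm.
N, omega, steps = 64, 1.0, 200
rho = [0.0] * N
rho[N // 2] = 1.0                      # initial density spike
fp = [x / 2 for x in rho]              # start both populations at equilibrium
fm = [x / 2 for x in rho]

for _ in range(steps):
    r = [a + b for a, b in zip(fp, fm)]                     # local density
    fp = [a + omega * (d / 2 - a) for a, d in zip(fp, r)]   # collide: relax
    fm = [a + omega * (d / 2 - a) for a, d in zip(fm, r)]   #   toward r/2
    fp = fp[-1:] + fp[:-1]             # stream right (periodic boundary)
    fm = fm[1:] + fm[:1]               # stream left

rho = [a + b for a, b in zip(fp, fm)]
print(round(sum(rho), 6), max(rho) < 0.5)   # 1.0 True: mass kept, spike spread
```

The collision step changes only the non-conserved part of the populations, so total mass is preserved exactly while the spike diffuses; production solvers extend the same collide-and-stream structure to 3D velocity sets, turbulence models, and GPUs.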
Biography:
Dr. Xiaopei Liu is an assistant professor at the School of Information Science and Technology, ShanghaiTech University, affiliated with the Visual and Data Intelligence (VDI) center. He obtained his PhD degree in computer science and engineering from The Chinese University of Hong Kong (CUHK), and worked as a postdoctoral research fellow at Nanyang Technological University (NTU) in Singapore, where he started multi-disciplinary research on fluid simulation and visualization, covering both classical and quantum fluids. Most of his publications are in top journals and conferences across multiple disciplines, such as ACM TOG, ACM SIGGRAPH/SIGGRAPH Asia, IEEE TVCG, APS PRD, and AIP POF. Dr. Liu is now working on high-performance fluid simulation in complex environments, with applications to visual effects, computational design & fabrication, medical diagnosis, robot learning, as well as fundamental science. He is also conducting research on simulation-based UAV design optimization and autonomous navigation.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93649176456
Enquiries: Miss Karen Chan at Tel. 3943 8439
Dynamic Voltage Scaling: from Low Power to Security
Location
Speaker:
Dr. Qu Gang
Abstract:
Dynamic voltage scaling (DVS) is one of the most effective and widely used techniques for low-power design. It adjusts the system's operating voltage and clock frequency based on the real-time application's computation and deadline information in order to reduce power and energy consumption. In this talk, I will share our research results on DVS and the lessons I have learned in three different periods of my research career. First, in the late 1990s, as a graduate student, we formulated the problem of DVS for energy minimization and derived a series of optimal solutions under different system settings to guide the practice of DVS-enabled system design. Then in 2000, I became an assistant professor and we studied how to apply DVS to scenarios where the traditional execution-time-for-energy tradeoff does not exist. Finally, in the past five years, we developed DVS-based attacks to break the trusted execution environment in modern computing platforms. I will also show our work on enhancing system security with DVS, through examples of device authentication and countermeasures to machine learning model inversion attacks. It is my hope that this talk can shed light on how to find a research topic and make your contributions.
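The energy tradeoff behind DVS can be seen in a small worked example (all numbers hypothetical): dynamic energy for a fixed amount of work scales as C·V²·(cycle count), and since the achievable clock frequency scales roughly with V, halving both voltage and frequency quarters the energy while only doubling the runtime.

```python
# Hypothetical numbers: switched capacitance C and the task's cycle count.
C, cycles = 1e-9, 2e8

def energy(v):
    # Dynamic energy for a fixed amount of work: E = C * V^2 * cycles.
    # Lowering frequency alone saves no dynamic energy; the win comes
    # because a lower frequency permits a lower voltage (f ~ V).
    return C * v**2 * cycles

# Full speed: 1.2 V at 1 GHz; scaled: 0.6 V at 500 MHz.
e_full, e_half = energy(1.2), energy(0.6)
t_full, t_half = cycles / 1e9, cycles / 5e8
print(e_half / e_full, t_half / t_full)   # 0.25 2.0
```

This is exactly the deadline-driven tradeoff the talk formalizes: whenever the application's deadline leaves slack beyond the full-speed runtime, running slower at a lower voltage meets the deadline at a fraction of the energy.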
Biography:
Gang Qu received his B.S. in mathematics from the University of Science and Technology of China (USTC) and Ph.D. in computer science from the University of California, Los Angeles (UCLA). He is currently a professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park, where he leads the Maryland Embedded Systems and Hardware Security Lab (MeshSec) and the Wireless Sensor Laboratory. His research activities are on trusted integrated circuit design, hardware security, energy efficient system design and wireless sensor networks. He has focused recently on applications in the Internet of Things, cyber-physical systems, and machine learning. He has published more than 250 conference papers and journal articles on these topics with several best paper awards. Dr. Qu is an enthusiastic teacher. He has taught and co-taught various security courses, including a popular MOOC on Hardware Security through Coursera. Dr. Qu has served 17 times as the general or program chair/co-chair for international conferences and workshops. He is currently on the editorial board of IEEE TCAD, TETC, ACM TODAES, JCST, Integration, and HSS. Dr. Qu is a fellow of IEEE.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96878058667
Enquiries: Miss Karen Chan at Tel. 3943 8439
Prioritizing Computation and Analyst Resources in Large-scale Data Analytics
Location
Speaker:
Ms. Kexin RONG
PhD student, Department of Computer Science
Stanford University
Abstract:
Data volumes are growing exponentially, fueled by an increased number of automated processes such as sensors and devices. Meanwhile, the computational power available for processing this data – as well as analysts’ ability to interpret it – remain limited. As a result, database systems must evolve to address these new bottlenecks in analytics. In my work, I ask: how can we adapt classic ideas from database query processing to modern compute- and analyst-limited data analytics?
In this talk, I will discuss the potential for this kind of systems development through the lens of several practical systems I have developed. By drawing insights from database query optimization, such as pushing workload- and domain-specific filtering, aggregation, and sampling into core analytics workflows, we can dramatically improve the efficiency of analytics at scale. I will illustrate these ideas by focusing on two systems — one designed to optimize visualizations for streaming infrastructure and application telemetry and one designed for high-volume seismic waveform analysis — both of which have been field-tested at scale. I will also discuss lessons from production deployments at companies including Datadog, Microsoft, Google and Facebook.
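The idea of pushing sampling into the core analytics workflow can be illustrated with the classic reservoir-sampling primitive, which maintains a fixed-size uniform sample of an unbounded stream. This is a generic textbook sketch, not code from the systems described above:

```python
# Reservoir sampling: keep a uniform random sample of k items from a stream of
# unknown length using O(k) memory. A generic illustration of pushing sampling
# into an ingest pipeline, not code from the speaker's systems.
import random

def reservoir_sample(stream, k, seed=0):
    """Return a uniform random sample of k items from the stream."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)       # fill the reservoir first
        else:
            j = rng.randint(0, i)     # item survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# e.g. summarize a large telemetry stream under a fixed memory budget:
sample = reservoir_sample(range(1_000_000), 100)
```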
Biography:
Kexin Rong is a Ph.D. student in Computer Science at Stanford University, co-advised by Professor Peter Bailis and Professor Philip Levis. She designs and builds systems to enable data analytics at scale, supporting applications including scientific analysis, infrastructure monitoring, and analytical queries on big data clusters. Prior to Stanford, she received her bachelor’s degree in Computer Science from California Institute of Technology.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97794511231?pwd=Qjg2RlArcUNrbHBwUmxNSW4yTVIxZz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Toward a Deeper Understanding of Generative Adversarial Networks
Location
Speaker:
Dr. Farzan FARNIA
Postdoctoral Research Associate
Laboratory for Information and Decision Systems, MIT
Abstract:
While modern adversarial learning frameworks achieve state-of-the-art performance on benchmark image, sound, and text datasets, we still lack a solid understanding of their robustness, generalization, and convergence behavior. In this talk, we aim to bridge this gap between theory and practice using a principled analysis of these frameworks through the lens of optimal transport and information theory. We specifically focus on the Generative Adversarial Network (GAN) framework which represents a game between two machine players for learning the distribution of data. In the first half of the talk, we study equilibrium in GAN games for which we show the classical Nash equilibrium may not exist. We then introduce a new equilibrium notion for GAN problems, called proximal equilibrium, through which we develop a GAN training algorithm with improved stability. We provide several numerical results on large-scale datasets supporting our proposed training method for GANs. In the second half of the talk, we attempt to understand why GANs often fail in learning multi-modal distributions. We focus our study on the benchmark Gaussian mixture models and demonstrate the failures of standard GAN architectures under this simple class of multi-modal distributions. Leveraging optimal transport theory, we design a novel architecture for the GAN players which is tailored to mixtures of Gaussians. We theoretically and numerically show the significant gain achieved by our designed GAN architecture in learning multi-modal distributions. We conclude the talk by discussing some open research challenges in adversarial learning.
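For reference, the two-player game between the generator G (mapping noise z to samples) and the discriminator D that the talk analyzes is the standard GAN minimax objective; the proximal equilibrium introduced in the talk refines which solutions of this game are considered:

```latex
\min_{G}\max_{D}\;
\mathbb{E}_{x \sim P_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim P_{z}}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```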
Biography:
Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Professor Asu Ozdaglar and Professor Ali Jadbabaie. Prior to joining MIT, Farzan received his master's and PhD degrees in electrical engineering from Stanford University and his bachelor's degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan's research interests include statistical learning theory, optimal transport theory, information theory, and convex optimization. He was the recipient of the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford's electrical engineering PhD qualifying exams in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99476583146?pwd=QVdsaTJLYU1ab2c0ODV0WmN6SzN2Zz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Sensitive Data Analytics with Local Differential Privacy
Location
Speaker:
Mr. Tianhao WANG
PhD student, Department of Computer Science
Purdue University
Abstract:
When collecting sensitive information, local differential privacy (LDP) can relieve users’ privacy concerns, as it allows users to add noise to their private information before sending data to the server. LDP has been adopted by big companies such as Google and Apple for data collection and analytics. My research focuses on improving the ecosystem of LDP. In this talk, I will first share my research on the fundamental tools in LDP, namely the frequency oracles (FOs), which estimate the frequency of each private value held by users. We proposed a framework that unifies different FOs and optimizes them. Our optimized FOs improve the estimation accuracy of Google’s and Apple’s implementations by 50% and 90%, respectively, and serve as the state-of-the-art tools for handling more advanced tasks. In the second part of my talk, I will present our work on extending the functionality of LDP, namely, how to make a database system that satisfies LDP while still supporting a variety of analytical queries.
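One basic frequency oracle can be sketched with generalized randomized response (GRR): each user reports the true value with probability p and a uniformly random other value otherwise, and the server inverts the noise to get unbiased frequency estimates. This is a textbook protocol shown for illustration, not the optimized oracles from the talk; all parameters are made up:

```python
# Generalized randomized response (GRR), a basic epsilon-LDP frequency oracle.
# Illustrative only; not the optimized FOs described in the abstract.
import math
import random
from collections import Counter

def grr_perturb(value, domain, epsilon, rng):
    """Perturb one user's value so the report satisfies epsilon-LDP."""
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from the noisy reports."""
    n, d = len(reports), len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    q = (1 - p) / (d - 1)  # probability of reporting any specific other value
    counts = Counter(reports)
    return {v: (counts[v] - n * q) / (p - q) for v in domain}
```

The estimator subtracts the expected noise floor n·q and rescales by p − q, which is exactly what makes it unbiased.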
Biography:
Tianhao Wang is a Ph.D. candidate in the Department of Computer Science at Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from the Software School of Fudan University in 2015. His research area is security and privacy, with a focus on differential privacy and applied cryptography. He is a member of DPSyn, which has won several international differential privacy competitions. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94878534262?pwd=Z2pjcDUvQVlETzNoVWpQZHBQQktWUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Toward Reliable NLP Systems via Software Testing
Location
Speaker:
Dr. Pinjia HE
Postdoctoral researcher, Computer Science Department
ETH Zurich
Abstract:
NLP systems such as machine translation are increasingly utilized in our daily lives, so their reliability becomes critical: mistranslations by Google Translate, for example, can lead to misunderstanding, financial loss, and threats to personal safety and health. At the same time, such systems are difficult to get right because of their complexity, and since they are built on large, complex neural networks, traditional reliability techniques are difficult to apply. In this talk, I will present my recent work that has spearheaded the testing of machine translation systems, toward building reliable NLP systems. In particular, I will describe three complementary approaches which collectively found over 1,000 diverse translation errors in the widely used Google Translate and Bing Microsoft Translator. I will also describe my work on LogPAI, an end-to-end log management framework powered by AI algorithms for traditional software reliability, and conclude the talk with my vision for making both traditional and intelligent software, such as NLP systems, more reliable.
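The flavor of such testing can be conveyed with a metamorphic-testing toy: two source sentences that differ in a single word should normally yield structurally similar translations, so a pair whose outputs diverge more than the inputs is suspicious. The stub translator, the seeded bug, and the length heuristic below are purely illustrative of the general idea, not Dr. He's actual algorithms:

```python
# Toy metamorphic test for machine translation. `translate` is a hard-coded
# stub standing in for a real MT service; the heuristic is illustrative only.

def translate(sentence):
    table = {
        "I like apples": "J'aime les pommes",
        "I like oranges": "J'aime les oranges oranges",  # seeded bug: duplicated word
    }
    return table[sentence]  # a real harness would call an MT API here

def structure_suspicious(src_a, src_b):
    """Flag the pair if the translations' lengths diverge more than the sources'."""
    ta, tb = translate(src_a).split(), translate(src_b).split()
    sa, sb = src_a.split(), src_b.split()
    return abs(len(ta) - len(tb)) > abs(len(sa) - len(sb))
```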
Biography:
Pinjia HE has been a postdoctoral researcher in the Computer Science Department at ETH Zurich after receiving his PhD degree from The Chinese University of Hong Kong (CUHK) in 2018. He has research expertise in software engineering and artificial intelligence, and is particularly passionate about making both traditional and intelligent software reliable. His research on automated log analysis and machine translation testing appeared in top computer science venues, such as ICSE, ESEC/FSE, ASE, and TDSC. The LogPAI project led by him has been starred 2,000+ times on GitHub and downloaded 30,000+ times by 380+ organizations, and won a Most Influential Paper (MIP) award at ISSRE. He also won a 2016 Excellent Teaching Assistantship at CUHK. He has served on program committees of MET’21, DSML’21, ECOOP’20 Artifact, and ASE’19 Demo, and reviewed for top journals and conferences (e.g., TSE, TOSEM, ICSE, KDD, and IJCAI). According to Google Scholar, he has an h-index of 14 and 1,200+ citations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98498351623?pwd=UHFFUU1QbExYTXAxTWxCMk9BbW9mUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Edge AI – A New Battlefield for Hardware Security Research
Location
Speaker:
Prof. CHANG Chip Hong
Associate Professor
Nanyang Technological University (NTU), Singapore
Abstract:
The flourishing of the Internet of Things (IoT) has rekindled on-premise computing, allowing data to be analyzed closer to its source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers, and commercially available toolkits have evolved to facilitate the development and deployment of applications with AI at their core. This "model once, run optimized anywhere" paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attack and defense mechanisms. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are neither robust nor stealthy on multi-view targets. To generate a good adversarial example that misclassifies a real-world target under varying viewing angles, lighting, and distances, a sizable number of pristine samples of the target object are required, and the feasible perturbations are substantial and visually perceptible. Edge AI also poses a difficult catch-up for existing adversarial example detectors, which rely on sophisticated offline analyses that assume the deep learning model runs on a general-purpose 32-bit floating-point CPU or GPU cluster. This talk will first present a new glitch injection attack on edge DNN accelerators capable of misclassifying a target under varying viewpoints. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. The second part of this talk will present a new hardware-oriented approach for in-situ detection of adversarial inputs feeding through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) core implemented on the edge. With negligibly small hardware overhead, the glitch injection circuit and the trained shallow binary tree detector can be easily implemented alongside a deep learning model on edge AI accelerator hardware.
Biography:
Prof. Chip Hong Chang is an Associate Professor at Nanyang Technological University (NTU), Singapore. He has held concurrent appointments at NTU as Assistant Chair of Alumni of the School of EEE from 2008 to 2014, Deputy Director of the Center for High Performance Embedded Systems from 2000 to 2011, and Program Director of the Center for Integrated Circuits and Systems from 2003 to 2009. He has co-edited five books, published 13 book chapters, more than 100 international journal papers (over 70 in IEEE) and more than 180 refereed international conference papers (mostly in IEEE), and delivered over 40 colloquia and invited seminars. His current research interests include hardware security and trustable computing, low-power and fault-tolerant computing, residue number systems, and application-specific digital signal processing algorithms and architectures. Dr. Chang currently serves as Senior Area Editor of the IEEE Transactions on Information Forensics and Security (TIFS), and Associate Editor of the IEEE Transactions on Circuits and Systems-I (TCAS-I) and the IEEE Transactions on Very Large Scale Integration (TVLSI) Systems. He was an Associate Editor of the IEEE TIFS and the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) from 2016 to 2019, IEEE Access from 2013 to 2019, IEEE TCAS-I from 2010 to 2013, Integration, the VLSI Journal from 2013 to 2015, the Springer Journal of Hardware and Systems Security from 2016 to 2020, and the Microelectronics Journal from 2014 to 2020. He has also guest edited eight journal special issues, including for IEEE TCAS-I, the IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE TCAD, and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS).
He has held key appointments in the organizing and technical program committees of more than 60 international conferences (mostly IEEE), including the General Co-Chair of 2018 IEEE Asia-Pacific Conference on Circuits and Systems and the inaugural Workshop Chair and Steering Committee of the ACM CCS satellite workshop on Attacks and Solutions in Hardware Security. He is the 2018-2019 IEEE CASS Distinguished Lecturer, a Fellow of the IEEE and the IET.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93797957554?pwd=N2J0VjBmUFh6N0ZENVY0U1RvS0Zhdz09
Meeting ID: 937 9795 7554
Password: 607354
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Design Exploration of DNN Accelerators using FPGA and Emerging Memory
Location
Speaker:
Dr. Guangyu SUN
Associate Professor
Center for Energy-efficient Computing and Applications (CECA)
Peking University
Abstract:
Deep neural networks (DNNs) have been successfully used in fields such as computer vision and natural language processing. To improve processing efficiency, various hardware accelerators have been proposed for DNN applications. In this talk, I will first review our work on design space exploration and design automation for DNN accelerators on FPGA platforms. Then, I will briefly introduce the potential and challenges of using emerging memory for energy-efficient DNN inference. After that, I will offer some advice for graduate study.
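The spirit of design space exploration can be sketched in a few lines: enumerate loop-tiling factors that fit an on-chip buffer budget and keep the one minimizing estimated off-chip traffic. The cost model and all parameters below are deliberately crude and illustrative; real frameworks use much richer models, constraints, and pruning:

```python
# Crude design-space-exploration sketch for a matrix-multiply accelerator:
# search tile sizes under a buffer budget, minimizing modeled off-chip traffic.
# Illustrative only; not the speaker's actual framework.

def explore_tilings(M, N, K, buffer_words):
    """Search tile sizes (tm, tn, tk) for C[M,N] = A[M,K] @ B[K,N]."""
    best = None
    for tm in range(1, M + 1):
        for tn in range(1, N + 1):
            for tk in range(1, K + 1):
                on_chip = tm * tk + tk * tn + tm * tn  # A, B and C tiles resident
                if on_chip > buffer_words:
                    continue
                # Off-chip words moved: A re-read once per column tile of B,
                # B once per row tile of A, C written once. Ceil via -(-a // b).
                traffic = M * K * -(-N // tn) + K * N * -(-M // tm) + M * N
                if best is None or traffic < best[0]:
                    best = (traffic, (tm, tn, tk))
    return best
```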
Biography:
Dr. Guangyu Sun is an associate professor at the Center for Energy-efficient Computing and Applications (CECA) at Peking University. He received his B.S. and M.S. degrees from Tsinghua University, Beijing, in 2003 and 2006, respectively, and his Ph.D. degree in Computer Science from the Pennsylvania State University in 2011. His research interests include computer architecture, acceleration systems, and design automation for modern applications. He has published 100+ journal and refereed conference papers in these areas. He is an associate editor of ACM TECS and ACM JETC.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95836460304?pwd=UkRwSldjNWdUWlNvNnN2TTlRZ1ZUdz09
Meeting ID: 958 3646 0304
Password: 964279
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
In-Memory Computing – An Algorithm–Architecture Co-design Approach towards the POS/w Era
Location
Speaker:
Prof. LI Jiang
Associate Professor
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Abstract:
The rapid rise of computing power over the past decade has supported the advance of Artificial Intelligence. Still, in the post-Moore era, AI chips built on traditional CMOS processes and von Neumann architectures face huge bottlenecks: the memory wall and the energy-efficiency wall. In-memory computing architectures based on emerging memristor technology have become a very competitive computing paradigm that can deliver two orders of magnitude higher energy efficiency. The memristor process has apparent advantages in power consumption, multi-bit storage, and cost. However, it faces the challenges of low manufacturing scalability and process variation, which lead to unstable computation and a limited capacity to accommodate large and complex neural networks. This talk will introduce an algorithm and architecture co-optimization approach to solve these challenges.
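The in-memory computing principle itself can be sketched as a toy model: weights are stored as conductances G, inputs are applied as voltages V, and each column current is Σᵢ V[i]·G[i][j] by Ohm's and Kirchhoff's laws, so the matrix-vector product happens where the data lives. The optional Gaussian conductance variation illustrates the process-variation instability the talk addresses; all parameters are illustrative:

```python
# Toy memristor-crossbar model: analog matrix-vector multiply via conductances,
# with optional per-device Gaussian variation. Illustrative only.
import random

def crossbar_mvm(weights, voltages, variation=0.0, rng=None):
    """Column currents of an ideal crossbar, with optional device variation."""
    rng = rng or random.Random(0)
    currents = []
    for j in range(len(weights[0])):
        total = 0.0
        for i, v in enumerate(voltages):
            g = weights[i][j] * (1.0 + rng.gauss(0.0, variation))
            total += v * g  # Ohm's law per cell, Kirchhoff's law per column
        currents.append(total)
    return currents
```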
Biography:
Li Jiang is an associate professor in the Department of Computer Science and Engineering, Shanghai Jiao Tong University. He received the B.S. degree from the same department in 2007, and the MPhil and Ph.D. degrees from the Department of Computer Science and Engineering, The Chinese University of Hong Kong, in 2010 and 2013 respectively. He has published more than 50 peer-reviewed papers in top-tier computer architecture and EDA conferences and journals, including a best paper nomination in ICCAD. According to the IEEE Digital Library, five of his papers rank in the top 5% of citations among all papers collected at their conferences. His achievements have been highly recognized and cited by academic and industry experts, including Academician Zheng Nanning, Academician William Dally, Prof. Chenming Hu, and many ACM/IEEE Fellows. Some of the achievements have been incorporated into the IEEE P1838 standard, and a number of technologies have been put into commercial use in cooperation with TSMC, Huawei, and Alibaba. He received the best Ph.D. dissertation award at ATS 2014 and was a finalist for TTTC's E. J. McCluskey Doctoral Thesis Award. He received the ACM Shanghai Rising Star award and the CCF VLSI early career award, and was named a 2020 CCF Distinguished Speaker. He serves as a co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC, and CTC. He is an associate editor of IET Computers & Digital Techniques and Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95897084094?pwd=blZlanFOczF4aWFvM2RuTDVKWFlZZz09
Meeting ID: 958 9708 4094
Password: 081783
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Speed up DNN Model Training: An Industrial Perspective
Location
Speaker:
Mr. Mike Hong
CTO of BirenTech
Abstract:
Training large DNN models is compute-intensive, often taking days, weeks, or even months to complete; how to speed it up has therefore attracted much attention from both academia and industry. In this talk, we cover a number of accelerated DNN training techniques from an industrial perspective, including various optimizers, large-batch training, distributed computation, and all-reduce network topologies.
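Of the topics listed, all-reduce has a compact core that can be simulated: in ring all-reduce, each of n workers exchanges one chunk with a neighbor per step, and after 2(n−1) steps every worker holds the full gradient sum while each link carries only about 2/n of the data. This is a plain-Python simulation of the data movement only, not production training code:

```python
# Simulation of ring all-reduce: reduce-scatter followed by all-gather.
# Models only the data movement; illustrative, not production code.

def ring_allreduce(grads):
    """grads[w] is worker w's gradient vector; returns all workers' results."""
    n = len(grads)
    assert len(grads[0]) % n == 0, "vector length must divide by worker count"
    c = len(grads[0]) // n  # chunk size
    buf = [list(g) for g in grads]

    def chunk(w, i):
        return buf[w][i * c:(i + 1) * c]

    # Phase 1, reduce-scatter: after n-1 steps worker w owns the fully
    # reduced chunk (w + 1) % n. Snapshot all sends before applying them.
    for step in range(n - 1):
        sends = [((w + 1) % n, (w - step) % n, chunk(w, (w - step) % n))
                 for w in range(n)]
        for dst, i, vals in sends:
            buf[dst][i * c:(i + 1) * c] = [a + b for a, b in zip(chunk(dst, i), vals)]

    # Phase 2, all-gather: circulate the reduced chunks for another n-1 steps.
    for step in range(n - 1):
        sends = [((w + 1) % n, (w + 1 - step) % n, chunk(w, (w + 1 - step) % n))
                 for w in range(n)]
        for dst, i, vals in sends:
            buf[dst][i * c:(i + 1) * c] = vals
    return buf
```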
Biography:
Mike Hong has been working on GPU architecture design for 26 years and is currently the CTO of BirenTech, an intelligent chip design company that has raised more than US$200 million in Series A financing since its founding in 2019. Before joining Biren, Mike was Chief Architect at S3, Principal Architect for the Tesla architecture at NVIDIA, and GPU team leader and Chief Architect at HiSilicon. Mike holds more than 50 US patents, including the texture compression patent that is the industry standard for all PCs, Macs, and game consoles.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92074008389?pwd=OE1EbjBzWk9oejh5eUlZQ1FEc0lOUT09
Meeting ID: 920 7400 8389
Password: 782536
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Artificial Intelligence for Radiotherapy in the Era of Precision Medicine
Location
Speaker:
Prof. CAI Jing
Professor, Department of Health Technology and Informatics
The Hong Kong Polytechnic University (PolyU)
Abstract:
Artificial Intelligence (AI) is evolving rapidly and promises to transform the world in an unprecedented way. The tremendous possibilities that AI can bring to radiation oncology have triggered a flood of activities in the field. In particular, with the support of big data and accelerated computation, deep learning is taking off with tremendous algorithmic innovations and powerful neural network models. AI technology holds great promise for improving radiation therapy from treatment planning to treatment assessment. It can aid radiation oncologists in reaching unbiased consensus treatment plans, help train junior radiation oncologists, keep practitioners up to date, reduce professional costs, and improve quality assurance in clinical trials and patient care. It can significantly reduce the time and effort physicians need to contour, plan, and review. Given the promising learning tools and massive computational resources that are becoming readily available, AI will soon dramatically change the landscape of radiation oncology research and practice. This presentation will give an overview of recent advances in AI for radiation oncology, followed by a set of examples of AI applications in various aspects of radiation therapy, including but not limited to organ segmentation, target volume delineation, treatment planning, quality assurance, response assessment, and outcome prediction. For example, I will present a new approach that derives lung functional images for function-guided radiation therapy, using a deep convolutional neural network to learn and exploit the underlying functional information in the CT image and generate a functional perfusion image. I will demonstrate a novel method for pseudo-CT generation from multi-parametric MR images using a multi-channel multi-path generative adversarial network (MCMP-GAN) for MRI-based radiotherapy. I will also show the promising capability of MRI-based radiomics features for pre-treatment identification of adaptive radiation therapy eligibility in nasopharyngeal carcinoma (NPC) patients.
Biography:
Prof. CAI Jing earned his PhD in Engineering Physics in 2006 and completed his clinical residency in Medical Physics in 2009, both at the University of Virginia, USA. He entered academia as an Assistant Professor at Duke University in 2009 and was promoted to Associate Professor in 2014. He joined The Hong Kong Polytechnic University in 2017, where he is currently a full Professor and the founding Programme Leader of the Medical Physics MSc Programme in the Department of Health Technology and Informatics. He has been board certified in Therapeutic Radiological Physics by the American Board of Radiology (ABR) since 2010. He is the PI/Co-PI of more than 20 external research grants, including 5 NIH, 3 GRF, 3 HMRF and 1 ITSP grants, with total funding of more than HK$40M. He has published over 100 journal papers and 200 conference papers/abstracts, and has supervised over 60 trainees. He serves on the editorial boards of several prestigious journals in medical physics and radiation oncology, and was elected a Fellow of the American Association of Physicists in Medicine (AAPM) in 2018.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92068646609?pwd=R0ZRR1VXSmVQOUkyQnZrd0t4dW0wUT09
Meeting ID: 920-6864-6609
Password: 076760
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Closing the Loop of Human and Robot
Location
Speaker:
Prof. LU Cewu
Research Professor at Shanghai Jiao Tong University (SJTU)
Abstract:
This talk is about closing the loop between humans and robots. We present our recent research on human activity understanding and robot learning. On the human side, we present our recent work on the Human Activity Knowledge Engine (HAKE), which largely improves human activity understanding, along with improvements to AlphaPose, a well-known pose estimator. On the robot side, we discuss our understanding of robot tasks and a new insight, the "primitive model". We then introduce GraspNet, the first dynamic grasping benchmark dataset, together with a novel end-to-end deep learning approach to grasping, and discuss a 3D point-level semantic embedding method for object manipulation. Finally, we will discuss how to further close the loop between humans and robots.
Biography:
Cewu Lu is a Research Professor at Shanghai Jiao Tong University (SJTU). Before joining SJTU, he was a research fellow at Stanford University working with Prof. Fei-Fei Li and Prof. Leonidas J. Guibas. He received his PhD degree from The Chinese University of Hong Kong, supervised by Prof. Jiaya Jia. He was selected for the national Young 1000 Talent Plan, named one of MIT Technology Review's "35 Innovators Under 35" (China), and awarded the Qiushi Outstanding Young Scholar prize (求是杰出青年学者) as the only AI awardee in the past three years. Prof. Lu serves as an Area Chair for CVPR 2020 and a reviewer for Nature. He has published about 100 papers in top AI journals and conferences, including 9 ESI highly cited papers. His research interests fall mainly in Computer Vision and Robot Learning.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96062514495?pwd=aEp4aEl5UVhjOW1XemdpWVNZTVZOZz09
Meeting ID: 960-6251-4495
Password: 797809
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Detecting Vulnerabilities using Patch-Enhanced Vulnerability Signatures
Location
Speaker:
Prof. HUO Wei
Professor, Institute of Information Engineering (IIE)
Chinese Academy of Sciences (CAS)
Abstract:
Recurring vulnerabilities are widespread and often remain undetected in real-world systems; they typically result from reused code bases or shared code logic. However, the potentially small differences between vulnerable functions and their patched versions, as well as the possibly large differences between vulnerable functions and the target functions to be checked, pose challenges for current solutions. I shall introduce a novel approach that detects recurring vulnerabilities with low false positives and low false negatives. An evaluation on ten open-source systems has shown that the proposed approach significantly outperforms state-of-the-art clone-based and function-matching-based recurring vulnerability detection approaches, with 23 CVE identifiers assigned.
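A heavily simplified, line-based sketch of the patch-aware idea: build a signature from the lines a security patch deletes (vulnerable-specific) and adds (patched-specific), then flag a target only if it matches the vulnerable side and not the patched side. Real systems such as the one in the talk operate on normalized code and program slices; everything here, including the toy functions, is illustrative:

```python
# Line-based sketch of a patch-enhanced vulnerability signature. Illustrative
# only; real tools normalize code and use program slicing.
import difflib

def signature(vuln_fn, patched_fn):
    """Build a (deleted, added) signature from a vulnerable/patched pair."""
    diff = list(difflib.ndiff(vuln_fn, patched_fn))
    deleted = [l[2:] for l in diff if l.startswith("- ")]  # vulnerable-specific
    added = [l[2:] for l in diff if l.startswith("+ ")]    # patched-specific
    return deleted, added

def is_recurring(target_fn, sig):
    """Flag a recurring vulnerability: vulnerable lines present, fix absent."""
    deleted, added = sig
    return all(l in target_fn for l in deleted) and not any(l in target_fn for l in added)
```

Requiring the absence of the patched-specific lines is what cuts false positives relative to naive clone matching: an already-fixed clone no longer triggers the signature.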
Biography:
Wei HUO is a full professor at the Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS). He focuses on software security, vulnerability detection, program analysis, etc., and leads the VARAS (Vulnerability Analysis and Risk Assessment System) group. He has published multiple papers at top venues in computer security and software engineering, including ASE, ICSE, and USENIX Security. In addition, his group has uncovered hundreds of 0-day vulnerabilities in popular software and firmware, with 100+ CVEs assigned.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97738806643?pwd=dTIzcWhUR2pRWjBWaG9tZkdkRS9vUT09
Meeting ID: 977-3880-6643
Password: 131738
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Computational Fabrication and Assembly: from Optimization and Search to Learning
Location
Speaker:
Prof. FU Chi Wing Philip
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Computational fabrication is an emerging research topic in computer graphics, beginning roughly a decade ago with the need to develop computational solutions for efficient 3D printing and later for 3D fabrication and object assembly at large. In this talk, I will introduce a series of research works in this area, with a particular focus on the following two recent ones:
(i) Computational LEGO Technic assembly, in which we model the component bricks, their connection mechanisms, and the input user sketch for computation, and then further develop an optimization model with necessary constraints and our layout modification operator to efficiently search for an optimum LEGO Technic assembly. Our results not only match the input sketch with coherently-connected LEGO Technic bricks but also respect the intended symmetry and structural integrity of the designs.
(ii) TilinGNN, the first neural optimization approach to solve a classical instance of the tiling problem, in which we formulate and train a neural network model to maximize the tiling coverage on target shapes, while avoiding overlaps and holes between the tiles in a self-supervised manner. In short, we model the tiling problem as a discrete problem, in which the network is trained to predict the goodness of each candidate tile placement, allowing us to iteratively select tile placements and assemble a tiling on the target shape.
In the end, I will also present some results from my other research in the areas of point cloud processing, 3D vision, and augmented reality.
Biography:
Chi-Wing Fu is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). His research interests are in computer graphics, vision, and human-computer interaction, or more specifically in computational fabrication, 3D computer vision, and user interaction. Chi-Wing obtained his B.Sc. and M.Phil. from CUHK and his Ph.D. from Indiana University Bloomington. Before re-joining CUHK in early 2016, he was an associate professor with tenure at the School of Computer Science and Engineering, Nanyang Technological University, Singapore.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99943410200
Meeting ID: 999 4341 0200
Password: 492333
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Bioinformatics: Turning experimental data into biomedical hypotheses, knowledge and applications
Location
Speaker:
Prof. YIP Yuk Lap Kevin
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Contemporary biomedical research relies heavily on high-throughput technologies that examine many objects, their individual activities or their mutual interactions in a single experiment. The data produced are usually high-dimensional, noisy and biased. An important aim of bioinformatics is to extract useful information from such data for developing both conceptual understandings of the biomedical phenomena and downstream applications. This requires the integration of knowledge from multiple disciplines, such as data properties from the biotechnology, molecular and cellular mechanisms from biology, evolutionary principles from genetics, and patient-, disease- and drug-related information from medicine. Only with these inputs can the data analysis goals be meaningfully formulated as computational problems and properly solved. Computational findings also need to be subsequently validated and functionally tested by additional experiments, possibly iterating back-and-forth between data production and data analysis many times before a conclusion can be drawn. In this seminar, I will use my own research to explain how bioinformatics can help create new biomedical hypotheses, knowledge and applications, with a focus on recent works that use machine learning methods to study basic molecular mechanisms and specific human diseases.
Biography:
Kevin Yip is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). He obtained his bachelor's degree in computer engineering and master's degree in computer science from The University of Hong Kong, and his PhD degree in computer science from Yale University. Before joining CUHK, he worked as a researcher at the HKU-Pasteur Institute, the Yale Center for Medical Informatics, and the Department of Molecular Biophysics and Biochemistry at Yale University. Since his master's studies, Dr. Yip has been conducting research in bioinformatics, with special interests in modeling gene regulatory mechanisms and studying how their perturbations are related to human diseases. Dr. Yip has participated in several international research consortia, including the Encyclopedia of DNA Elements (ENCODE), model organism ENCODE (modENCODE), and the International Human Epigenomics Consortium (IHEC). Locally, Dr. Yip has been collaborating with scientists and clinicians in the quest to understand the mechanisms that underlie different human diseases, such as hepatocellular carcinoma, nasopharyngeal carcinoma, type II diabetes, and Hirschsprung’s disease. Dr. Yip received the title of Outstanding Fellow from the Faculty of Engineering and the Young Researcher Award from CUHK in 2019.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98458448644
Meeting ID: 984 5844 8644
Password: 945709
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Dependable Storage Systems
Location
Speaker:
Prof. LEE Pak Ching Patrick
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Making large-scale storage systems dependable against failures is critical yet non-trivial in the face of the ever-increasing amount of data. In this talk, I will present my work on dependable storage systems, with the primary goal of improving the fault tolerance, recovery, security, and performance of different types of storage architectures. To make a case, I will present new theoretical and applied findings on erasure coding, a low-cost redundancy technique for fault-tolerant storage. I will present general techniques and code constructions for accelerating the repair of storage failures, and further propose a unified framework for readily deploying a variety of erasure coding solutions in state-of-the-art distributed storage systems. I will also introduce my other work on the dependability of applied distributed systems, in the areas of encrypted deduplication, key-value stores, network measurement, and stream processing. Finally, I will highlight the industrial impact of our work beyond publications.
Biography:
Patrick P. C. Lee is an Associate Professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong. His research interests are in various applied/systems topics on improving the dependability of large-scale software systems, including storage systems, distributed systems and networks, and cloud computing. He serves as an Associate Editor of IEEE/ACM Transactions on Networking and ACM Transactions on Storage. He served as a TPC co-chair of APSys 2020, and as a TPC member of several major systems and networking conferences. He received best paper awards at CoNEXT 2008, TrustCom 2011, and SRDS 2020. For details, please refer to his personal homepage: http://www.cse.cuhk.edu.hk/~pclee.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96195753407
Meeting ID: 961 9575 3407
Password: 892391
Enquiries: Miss Caroline Tai at Tel. 3943 8440
From Combating Errors to Embracing Errors in Computing Systems
Location
Speaker:
Prof. Xu Qiang
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Faults are inevitable in any computing system, and they may occur due to environmental disturbance, circuit aging, or malicious attacks. On the one hand, designers try all means to prevent, contain, and control faults to achieve error-free computation, especially for safety-critical applications. On the other hand, many applications in the big data era (e.g., search engines and recommendation systems) that require lots of computing power are often error-tolerant. In this talk, we present some techniques developed by our group over the past several years, including error-tolerant solutions that combat all sorts of hardware faults and approximate computing techniques that embrace errors in computing systems for energy savings.
Biography:
Qiang Xu is an associate professor of Computer Science & Engineering at the Chinese University of Hong Kong. He leads the CUhk REliable laboratory (CURE Lab.), and his research interests include electronic design automation, fault-tolerant computing and trusted computing. Dr. Xu has published 150+ papers in refereed journals and conference proceedings, and received two Best Paper Awards and five Best Paper Award Nominations. He is currently serving as an associate editor for IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and for Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96930968459
Meeting ID: 969 3096 8459
Password: 043377
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Memory/Storage Optimization for Small/Big Systems
Location
Speaker:
Prof. Zili SHAO
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Memory/storage optimization is one of the most critical issues in computer systems. In this talk, I will first summarize our work in optimizing memory/storage systems for embedded and big data applications. Then, I will present an approach that deeply integrates device and application to optimize flash-based key-value caching – one of the most important building blocks in modern web infrastructures and high-performance data-intensive applications. I will also introduce our recent work in optimizing unique address checking for IoT blockchains.
Biography:
Zili Shao is an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree from The University of Texas at Dallas in 2005. Before joining CUHK in 2018, he was with the Department of Computing, The Hong Kong Polytechnic University, where he started in 2005. His current research interests include embedded software and systems, storage systems and related industrial applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95131164721
Meeting ID: 951 3116 4721
Password: 793297
Enquiries: Miss Caroline Tai at Tel. 3943 8440
VLSI Mask Optimization: From Shallow To Deep Learning
Location
Speaker:
Prof. YU Bei
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
The continued scaling of integrated circuit technologies, along with the increased design complexity, has exacerbated the challenges associated with manufacturability and yield. In today’s semiconductor manufacturing, lithography plays a fundamental role in printing design patterns on silicon. However, the growing complexity and variation of the manufacturing process have tremendously increased the lithography modeling and simulation cost. Both the role and the cost of mask optimization – now indispensable in the design process – have increased. Parallel to these developments are the recent advancements in machine learning which have provided a far-reaching data-driven perspective for problem solving. In this talk, we shed light on the recent deep learning based approaches that have provided a new lens to examine traditional mask optimization challenges. We present hotspot detection techniques, leveraging advanced learning paradigms, which have demonstrated unprecedented efficiency. Moreover, we demonstrate the role deep learning can play in optical proximity correction (OPC) by presenting its successful application in our full-stack mask optimization framework.
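To give a flavour of the "shallow" end of the spectrum the title alludes to, here is a minimal Python sketch of a linear (perceptron) hotspot classifier over hand-crafted layout features. The features, labels, and names below are invented for illustration; this is not the detection technique presented in the talk.

```python
# Toy "shallow learning" hotspot detector: a perceptron over two
# hand-crafted layout features (all data here is made up).

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of (feature_vector, label), label in {0, 1}."""
    w = [0.0] * (len(samples[0][0]) + 1)      # weights + trailing bias
    for _ in range(epochs):
        for x, y in samples:
            xb = x + [1.0]                     # append bias input
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            for i, xi in enumerate(xb):        # standard perceptron update
                w[i] += lr * (y - pred) * xi
    return w

def predict(w, x):
    xb = x + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0

# toy features: [local pattern density, edge count]; label 1 = hotspot
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.2, 0.1], 0), ([0.1, 0.2], 0)]
w = train_perceptron(data)
assert all(predict(w, x) == y for x, y in data)
```

The deep-learning approaches discussed in the talk replace both the hand-crafted features and the linear decision boundary, learning directly from layout clips, which is what makes them attractive for the highly non-linear lithography behaviour described above.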
Biography:
Bei Yu is currently an Assistant Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree in Electrical and Computer Engineering from the University of Texas at Austin, USA in 2014, and his M.S. degree in Computer Science from Tsinghua University, China in 2010. His current research interests include machine learning and combinatorial algorithms with applications in VLSI computer-aided design (CAD). He served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), has served on the program committees of DAC, ICCAD, DATE, ASPDAC, and ISPD, and has served on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES), Integration, the VLSI Journal, and IET Cyber-Physical Systems: Theory & Applications. He is Editor of the IEEE TCCPS Newsletter.
Dr. Yu has received six Best Paper Awards, from the International Conference on Tools with Artificial Intelligence (ICTAI) 2019, Integration, the VLSI Journal in 2018, the International Symposium on Physical Design (ISPD) 2017, the SPIE Advanced Lithography Conference 2016, the International Conference on Computer-Aided Design (ICCAD) 2013, and the Asia and South Pacific Design Automation Conference (ASPDAC) 2012; four other Best Paper Award Nominations (ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011); six ICCAD/ISPD contest awards; the IBM Ph.D. Scholarship in 2012; the SPIE Education Scholarship in 2013; and the EDAA Outstanding Dissertation Award in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96114730370
Meeting ID: 961 1473 0370
Password: 984602
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Local Versus Global Security in Computation
Location
Speaker:
Prof. Andrej BOGDANOV
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Secret sharing schemes are at the heart of cryptographic protocol design. In this talk I will present my recent discoveries about the informational and computational complexity of secret sharing and their relevance to secure multiparty computation:
- The share size in the seminal threshold secret sharing scheme of Shamir and Blakley from the 1970s is essentially optimal.
- Secret reconstruction can sometimes be carried out in the computational model of bounded-depth circuits, without resorting to modular linear algebra.
- Private circuits that are secure against local information leakage are also secure against limited but natural forms of global leakage.
I will also touch upon some loosely related results in cryptography, pseudorandomness, and coding theory.
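For context on the first result above, the following is a hedged, textbook sketch of (t, n) Shamir secret sharing over a small prime field, written in Python for intuition only. The modulus, parameters, and function names are chosen for illustration; the talk's results concern the complexity of such schemes, not this toy code.

```python
# Textbook (t, n) Shamir secret sharing over a small prime field.
# Illustration only: the prime P and all names here are arbitrary choices.
import random

P = 2**13 - 1  # 8191, a small Mersenne prime

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 1234
```

Note that each share here is as large as the secret itself (one field element); the first result above says that, for threshold schemes of this kind, this cost is essentially unavoidable.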
Biography:
Andrej Bogdanov is an associate professor of Computer Science and Engineering and director of the Institute of Theoretical Computer Science and Communications at the Chinese University of Hong Kong. His research interests are in cryptography, pseudorandomness, and sublinear-time algorithms.
Andrej obtained his B.S. and M. Eng. degrees from MIT in 2001 and his Ph.D. from UC Berkeley in 2005. Before joining CUHK in 2008 he was a postdoctoral associate at the Institute for Advanced Study in Princeton, at DIMACS (Rutgers University), and at ITCS (Tsinghua University). He was a visiting professor at the Tokyo Institute of Technology in 2013 and a long-term program participant at the UC Berkeley Simons Institute for the Theory of Computing in 2017.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94008322629
Meeting ID: 940 0832 2629
Password: 524278
Enquiries: Miss Caroline Tai at Tel. 3943 8440
A Compiler Infrastructure for Embedded Multicore SoCs
Location
Speaker:
Dr. Sheng Weihua
Chief Expert
Software Tools and Engineering at Huawei
Abstract:
Compilers play a pivotal role in the software development process for microprocessors, automatically translating high-level programming languages into machine-specific executable code. For a long time, while processors were scalar, compilers were regarded as a black box by the software community, due to their successful encapsulation of machine-specific details. Over a decade ago, major computing processor manufacturers began to integrate multiple (simple) cores into a single chip, namely multicores, to retain scaling according to Moore’s law. The embedded computing industry followed suit, introducing multicores years later, amid aggressive marketing campaigns that highlighted the number of processors for product differentiation in consumer electronics. While the transition from scalar (uni)processors to multicores is an evolutionary step in terms of hardware, it has given rise to fundamental changes in software development. The performance “free lunch”, having ridden on the growth of faster processors, is over. Compiler technology has not developed and scaled for multicore architectures, which contributes considerably to the software crisis in the multicore age. This talk addresses the challenges of developing compilers for multicore SoC (System-on-Chip) software development, focusing on embedded systems such as wireless terminals and modems. It also traces a trajectory from research toward a commercial prototype, shedding light on some lessons on how to do research effectively.
Biography:
Mr. Sheng has early career roots in the electronic design automation industry (CoWare and Synopsys). He spearheaded the development of multicore programming tools at RWTH Aachen University from 2007 to 2013, which later became the foundation of Silexica. He has a proven record of successful consultation and collaboration with top-tier technology companies on multicore design tools. Mr. Sheng is a co-founder of Silexica Software Solutions GmbH in Germany, where he served as CTO from 2014 to 2016. From 2017, as VP and GM of APAC, he was responsible for all aspects of Silexica’s sales and operations across the APAC region. In 2019 he joined Huawei Technologies. Mr. Sheng received his BEng from Tsinghua University and his MSc/PhD from RWTH Aachen University in Germany.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93855822245
Meeting ID: 938 5582 2245
Password: 429533
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Robust Deep Neural Network Design under Fault Injection Attack
Location
Speaker:
Prof. Xu Qiang
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Deep neural networks (DNNs) have gained mainstream adoption in the past several years, and many artificial intelligence (AI) applications employ DNNs for safety- and security-critical tasks, e.g., biometric authentication and autonomous driving. In this talk, we first briefly discuss the security issues in deep learning. Then, we focus on fault injection attacks and introduce some of our recent works in this domain.
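To give a concrete flavour of the attack surface, the toy Python below shows how flipping a single bit of one stored weight perturbs a neuron's output. It is a hypothetical illustration of the fault-injection primitive, not the attacks or defenses studied in the works presented.

```python
# Toy illustration of a single-bit fault injected into a stored DNN weight
# (the primitive exploited by memory fault attacks such as Rowhammer-style
# bit flips). All values are made up; no real model or attack is shown.

def neuron(weights, inputs):
    """A single ReLU neuron with integer weights (no framework needed)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return max(s, 0)

weights = [3, -2, 5]
inputs = [1, 4, 2]
clean = neuron(weights, inputs)     # 3*1 - 2*4 + 5*2 = 5

weights[2] ^= 1 << 4                # flip bit 4 of the third weight: 5 -> 21
faulty = neuron(weights, inputs)    # 3*1 - 2*4 + 21*2 = 37

assert (clean, faulty) == (5, 37)   # one bit flip moved the output by 32
```

Because the perturbation grows with the significance of the flipped bit, a handful of well-chosen flips can be enough to change a model's decision, which is what makes this class of attacks relevant to the safety-critical settings mentioned above.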
Biography:
Qiang Xu leads the CUhk REliable laboratory (CURE Lab.) and his research interests include fault-tolerant computing and trusted computing. He has published 150+ papers in these fields and received a number of best paper awards/nominations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93862944206
Meeting ID: 938 6294 4206
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439