Seminars Archives
December 2022
01 December
2:30 pm - 3:30 pm
November 2022
24 November
4:00 pm - 5:00 pm
Towards Robust Autonomous Driving Systems
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. Xi Zheng
Director of Intelligent Systems Research Group
Macquarie University, Australia
Abstract:
Autonomous driving has shown great potential to transform modern transportation, yet its reliability and safety have drawn much attention and concern. Compared with traditional software systems, autonomous driving systems (ADSs) often use deep neural networks in tandem with logic-based modules. This new paradigm poses unique challenges for software testing. Despite the recent development of new ADS testing techniques, it is not clear to what extent those techniques have addressed the needs of ADS practitioners. To fill this gap, we have published several works, and I will present some of them. The first reduces and prioritizes tests for multi-module autonomous driving systems (accepted at FSE’22). The second is a comprehensive study identifying the current practices, needs and gaps in testing autonomous driving systems (also accepted at FSE’22). The third analyses robustness issues in deep learning driving models (accepted at PerCom’20). The fourth generates test cases from traffic rules for autonomous driving models (accepted at TSE’22). I will also cover some ongoing and future work on autonomous driving systems.
Biography:
Dr. Xi Zheng received his Ph.D. in Software Engineering from the University of Texas at Austin in 2015. From 2005 to 2012, he was the Chief Solution Architect for Menulog Australia. He is currently the Director of the Intelligent Systems Research Group, Director of International Engagement in the School of Computing, Senior Lecturer (equivalent to a US Associate Professor) and Deputy Program Leader in Software Engineering at Macquarie University, Australia. His research interests include the Internet of Things, Intelligent Software Engineering, Machine Learning Security, Human-in-the-loop AI, and Edge Intelligence. He has secured more than $1.2 million in competitive funding from Australian Research Council (Linkage and Discovery) and Data61 (CRP) projects on safety analysis, model testing and verification, and trustworthy AI on autonomous vehicles. His awards include Deakin Industry Researcher (2016) and MQ Early Career Researcher (runner-up, 2020). He has a number of highly cited papers and best conference papers. He serves as a PC member for CORE A* conferences including FSE (2022) and PerCom (2017-2023), served as PC chair of IEEE CPSCom-2021 and IEEE Broadnets-2022, and is an associate editor for Distributed Ledger Technologies.
Enquiries: Mr. Jeff Liu at Tel. 3943 0624
23 November
11:00 am - 12:00 pm
A Survey of Cloud Database Systems
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. C. Mohan
Distinguished Visiting Professor, Tsinghua University
Abstract:
In this talk, I will first introduce traditional (non-cloud) parallel and distributed database systems. Concepts like SQL and NoSQL systems, data replication, distributed and parallel query processing, and data recovery after different types of failures will be covered. Then, I will discuss how the emergence of the (public) cloud has introduced new requirements on parallel and distributed database systems, and how such requirements have necessitated fundamental changes to the architectures of such systems. I will illustrate the related developments by discussing some of the details of systems like Alibaba POLARDB, Microsoft Azure SQL DB, Microsoft Socrates, Azure Synapse POLARIS, Google Spanner, Google F1, CockroachDB, Amazon Aurora, Snowflake and Google AlloyDB.
Biography:
Dr. C. Mohan is currently a Distinguished Visiting Professor at Tsinghua University in China, a Visiting Researcher at Google, a member of the inaugural Board of Governors of Digital University Kerala, and an advisor of the Kerala Blockchain Academy (KBA) and the Tamil Nadu e-Governance Agency (TNeGA) in India. He retired in June 2020 as an IBM Fellow at the IBM Almaden Research Center in Silicon Valley. He joined IBM Research (San Jose, California) in 1981, where he worked until May 2006 on several topics in the areas of database, workflow, and transaction management. From June 2006, he worked as the IBM India Chief Scientist, based in Bangalore, serving as the executive technical leader of IBM India within and outside IBM. In February 2009, at the end of his India assignment, Mohan resumed his research activities at IBM Almaden. Mohan is the primary inventor of the well-known ARIES family of database recovery and concurrency control methods and the industry-standard Presumed Abort commit protocol. He was named an IBM Fellow, IBM’s highest technical position, in 1997 for being recognized worldwide as a leading innovator in transaction management. In 2009, he was elected to the United States National Academy of Engineering (NAE) and the Indian National Academy of Engineering (INAE). He received the 1996 ACM SIGMOD Edgar F. Codd Innovations Award in recognition of his innovative contributions to the development and use of database systems. In 2002, he was named an ACM Fellow and an IEEE Fellow. At the 1999 International Conference on Very Large Data Bases (VLDB), he was honored with the 10 Year Best Paper Award for the widespread commercial, academic and research impact of his ARIES work, which has been extensively covered in textbooks and university courses. From IBM, Mohan received 2 Corporate and 8 Outstanding Innovation/Technical Achievement Awards. He is an inventor on 50 patents and was named an IBM Master Inventor in 1997. Mohan worked closely with numerous IBM product and research groups, and his research results are implemented in numerous IBM and non-IBM prototypes and products such as DB2, MQSeries, WebSphere, Informix, Cloudscape, Lotus Notes, Microsoft SQL Server, Sybase and System z Parallel Sysplex. In recent years, he has focused on Blockchain, AI, Big Data and Cloud technologies (https://bit.ly/sigBcP, https://bit.ly/CMoTalks, https://bit.ly/CMgMDS). Since 2017, he has been an evangelist of permissioned blockchains and a myth buster of permissionless blockchains. During the first half of 2021, Mohan was the Shaw Visiting Professor at the National University of Singapore (NUS), where he taught a seminar course on distributed data and computing. In 2019, he became an Honorary Advisor to TNeGA of Chennai for its blockchain and other projects. In 2020, he joined the Advisory Board of KBA of India.
Since 2016, he has been a Distinguished Visiting Professor of China’s prestigious Tsinghua University in Beijing. In 2021, he was inducted as a member of the inaugural Board of Governors of the new Indian university Digital University Kerala (DUK). Mohan launched his consulting career by becoming a consultant to Microsoft’s Data Team in October 2020. In March 2022, he became a consultant at Google with the title of Visiting Researcher. He has been on the advisory board of IEEE Spectrum and an editor of the VLDB Journal and Distributed and Parallel Databases. In the past, he has been a member of the IBM Academy of Technology’s Leadership Team, IBM’s Research Management Council, IBM’s Technical Leadership Team, IBM India’s Senior Leadership Team, the Bharti Technical Advisory Council, the Academic Senate of the International Institute of Information Technology in Bangalore, and the Steering Council of IBM’s Software Group Architecture Board. Mohan received his PhD in computer science from the University of Texas at Austin in 1981. In 2003, he was named a Distinguished Alumnus of IIT Madras, from which he received a B.Tech. in chemical engineering in 1977. Mohan is a frequent speaker in North America, Europe and Asia, and has given talks in 43 countries. He is highly active on social media and has a large following. More information can be found on his Wikipedia page at https://bit.ly/CMwIkP and his homepage at https://bit.ly/CMoDUK.
Enquiries: Mr. Jeff Liu at Tel. 3943 0624
22 November
2:00 pm - 3:00 pm
EDA for Emerging Technologies
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Anupam Chattopadhyay
Associate Professor, NTU
Abstract:
The continued scaling of horizontal and vertical physical features of silicon-based complementary metal-oxide-semiconductor (CMOS) transistors, termed “More Moore”, has a limited runway and will eventually be replaced with “Beyond CMOS” technologies. There has been a tremendous effort to follow Moore’s law, but it is now approaching atomistic and quantum-mechanical physics boundaries. This has led to active research in other non-CMOS technologies such as memristive devices, carbon nanotube field-effect transistors, and quantum computing. Several of these technologies have been realized in practical devices with promising gains in yield, integration density, runtime performance, and energy efficiency. Their eventual adoption relies largely on continued research into Electronic Design Automation (EDA) tools catering to these specific technologies. Indeed, some of these technologies present new challenges to the EDA research community, which are being addressed through a series of innovative tools and techniques. In this tutorial, we will cover two phases of the EDA flow, logic synthesis and technology mapping, for two types of emerging technologies, namely in-memory computing and quantum computing.
Biography:
Anupam Chattopadhyay received his B.E. degree from Jadavpur University, India, his M.Sc. from ALaRI, Switzerland, and his Ph.D. from RWTH Aachen in 2000, 2002, and 2008, respectively. From 2008 to 2009, he worked as a Member of Consulting Staff at CoWare R&D, Noida, India. From 2010 to 2014, he led the MPSoC Architectures Research Group at RWTH Aachen, Germany as a Junior Professor. In September 2014, Anupam was appointed an Assistant Professor in SCSE, NTU, where he was promoted to Associate Professor with tenure in August 2019. Anupam is an Associate Editor of IEEE Embedded Systems Letters and a series editor of the Springer Book Series on Computer Architecture and Design Methodologies. Anupam received the Borchers Plaque from RWTH Aachen, Germany for an outstanding doctoral dissertation in 2008, a nomination for the best IP award at the ACM/IEEE DATE Conference 2016, and nominations for the best paper award at the International Conference on VLSI Design in 2018 and 2020. He is a fellow of the Intercontinental Academia and a senior member of IEEE and ACM.
Enquiries: Mr. Jeff Liu at Tel. 3943 0624
03 November
3:30 pm - 4:30 pm
Building Optimal Decision Trees
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Peter J. Stuckey
Professor, Department of Data Science and Artificial Intelligence
Monash University
Abstract:
Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not necessarily be the best representation of the data in terms of accuracy and size. In recent years, this motivated the development of optimal classification tree algorithms that globally optimise the decision tree in contrast to heuristic methods that perform a sequence of locally optimal decisions.
In this talk I will explore the history of building decision trees, from greedy heuristic methods to modern optimal approaches.
In particular I will discuss a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances, providing several orders of magnitude improvements and notably contributing towards the practical realisation of optimal decision trees.
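To give a flavour of the dynamic-programming idea, the sketch below computes the minimum number of misclassifications achievable by any tree of bounded depth over binary features, recursing on (subset of data, remaining depth). It is a minimal illustration of the search space, not the algorithm from the talk, which adds caching strategies, bounds, and specialised data structures to reach the reported scalability.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_tree_error(data, depth):
    """Minimum misclassifications over all decision trees of depth <= `depth`.
    `data` is a tuple of (binary feature tuple, 0/1 label) pairs."""
    labels = [y for _, y in data]
    leaf_error = len(labels) - max(labels.count(0), labels.count(1))
    if depth == 0 or leaf_error == 0:
        return leaf_error
    best = leaf_error
    for f in range(len(data[0][0])):
        left = tuple(ex for ex in data if ex[0][f] == 0)
        right = tuple(ex for ex in data if ex[0][f] == 1)
        if left and right:  # a useful split actually separates the data
            best = min(best,
                       best_tree_error(left, depth - 1) +
                       best_tree_error(right, depth - 1))
    return best

data = (((0, 1), 1), ((1, 1), 1), ((1, 0), 0), ((0, 0), 0))
print(best_tree_error(data, depth=2))  # 0: feature 1 separates the classes
```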
Biography:
Professor Peter J. Stuckey is a Professor in the Department of Data Science and Artificial Intelligence in the Faculty of Information Technology at Monash University. Peter Stuckey is a pioneer in constraint programming and logic programming. His research interests include discrete optimization; programming languages, in particular declarative programming languages; constraint solving algorithms; path finding; bioinformatics; and constraint-based graphics, all relying on his expertise in symbolic and constraint reasoning. He enjoys problem solving in any area, with publications in, e.g., databases, election science, system security, and timetabling, and has worked with companies such as Oracle and Rio Tinto on problems that interest them.
Peter Stuckey received a B.Sc. and Ph.D., both in Computer Science, from Monash University in 1985 and 1988 respectively. Since then he has worked at IBM T.J. Watson Research Labs, the University of Melbourne and Monash University. In 2009 he was recognized as an ACM Distinguished Scientist. In 2010 he was awarded the Google Australia Eureka Prize for Innovation in Computer Science for his work on lazy clause generation. He was awarded the 2010 University of Melbourne Woodward Medal for the most outstanding publication in Science and Technology across the university. In 2019 he was elected an AAAI Fellow and awarded the Association for Constraint Programming Award for Research Excellence. He has over 125 journal and 325 conference publications and 17,000 citations, with an h-index of 62.
Enquiries: Mr. Jeff Liu at Tel. 3943 0624
October 2022
28 October
10:00 am - 11:00 am
Z3++: Improving the SMT solver Z3
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Prof. CAI Shaowei
Institute of Software
Chinese Academy of Sciences
Abstract:
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order logic formula with respect to certain background theories. SMT solvers have become important formal verification engines, with applications in various domains. In this talk, I will introduce the basics of SMT solving and present our work on improving the well-known SMT solver Z3, leading to Z3++, which won 2 of the 6 Gold Medals at the SMT Competition 2022.
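For readers unfamiliar with SMT, here is what a query looks like in practice, using the standard z3 Python bindings (plain Z3, not the Z3++ improvements presented in the talk): we ask whether a formula in linear integer arithmetic is satisfiable and, if so, retrieve a model.

```python
from z3 import Ints, Solver, sat

x, y = Ints('x y')
s = Solver()
# A formula in the theory of linear integer arithmetic:
# is there a pair (x, y) satisfying all three constraints?
s.add(x + 2 * y == 7, x > 0, y > x)
if s.check() == sat:
    print(s.model())   # e.g. [x = 1, y = 3]
else:
    print("unsatisfiable")
```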
Biography:
Shaowei Cai is a professor at the Institute of Software, Chinese Academy of Sciences. He obtained his PhD from Peking University in 2012, receiving a Doctoral Dissertation Award. His research focuses on constraint solving (particularly SAT, SMT, and integer programming), combinatorial optimization, and formal verification, as well as their applications in industry. He has won more than 10 Gold Medals at SAT and SMT Competitions, and the Best Paper Award at the SAT 2021 conference.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99411951727
Enquiries: Ms. Karen Chan at Tel. 3943 8439
17 October
2:00 pm - 3:00 pm
Attacks and Defenses in Logic Encryption
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Hai Zhou
Associate Professor, Department of Electrical and Computer Engineering
Northwestern University
Abstract:
With the increasing cost and complexity of semiconductor hardware designs, circuit IP protection has become an important and challenging problem in hardware security. Logic encryption is a promising technique that modifies a sensitive circuit into a locked one with a password, such that only authorized users can access it. Over its history of more than 20 years, many different attacks and defenses have been designed and proposed. In this talk, after a brief introduction to logic encryption, I will present important attacking and defending techniques in the field. In particular, the focus will be on the key attacks and defenses created in the NuLogiCS group at Northwestern.
Biography:
Hai Zhou is the director of the NuLogiCS Research Group in Electrical and Computer Engineering at Northwestern University and a member of the Center for Ultra-Scale Computing and Information Security (CUCIS). His research interest is in Logical Methods for Computer Systems (LogiCS), where logic is used to construct reactive computer systems (in the form of hardware, software, or protocols) and to verify their properties (e.g. correctness, security, and efficiency). In other words, he is interested in algorithms, formal methods, optimization, and their applications to security, machine learning, and economics.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
05 October
3:00 pm - 5:00 pm
Recent Advances in Backdoor Learning
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Dr. Baoyuan WU
Associate Professor, School of Data Science
The Chinese University of Hong Kong, Shenzhen
Abstract:
In this talk, Dr. Wu will review the development of backdoor learning and his latest work on backdoor attack and defense. The first is a backdoor attack with sample-specific triggers, which can bypass most existing defense methods, as they are mainly developed for defending against sample-agnostic triggers. Then, he will introduce two effective backdoor defense methods that preclude backdoor injection during the training process by exploring intrinsic properties of poisoned samples. Finally, he will introduce BackdoorBench, a comprehensive benchmark containing mainstream backdoor attack and defense methods, 8,000 pairs of attack-defense evaluations, and several interesting findings and analyses; it has recently been released online.
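For readers unfamiliar with the threat model, the sketch below shows the classic sample-agnostic form of data poisoning: a fixed trigger patch stamped on a small fraction of training images, with their labels flipped to an attacker-chosen target. The sample-specific triggers in Dr. Wu's work vary the trigger per image precisely so that defenses tuned to this fixed-patch pattern fail. Array shapes and the poisoning rate here are illustrative assumptions.

```python
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    """Stamp a fixed white 3x3 trigger in the bottom-right corner of a
    random subset of images and relabel them as `target_class`.
    images: float array (N, H, W) in [0, 1]; labels: int array (N,)."""
    rng = np.random.default_rng(seed)
    x, y = images.copy(), labels.copy()
    idx = rng.choice(len(x), size=int(rate * len(x)), replace=False)
    x[idx, -3:, -3:] = 1.0   # sample-agnostic trigger: same patch everywhere
    y[idx] = target_class    # poisoned labels point to the attack target
    return x, y

# A model trained on (x, y) tends to learn the shortcut "trigger => class 0"
x, y = poison(np.random.rand(1000, 28, 28), np.random.randint(0, 10, 1000))
```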
Biography:
Dr. Baoyuan Wu is an Associate Professor in the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), and the director of the Secure Computing Lab of Big Data, Shenzhen Research Institute of Big Data (SRIBD). His research interests are AI security and privacy, machine learning, computer vision and optimization. He has published 50+ top-tier conference and journal papers, including in TPAMI, IJCV, NeurIPS, CVPR, ICCV, ECCV, ICLR and AAAI. He is currently serving as an Associate Editor of Neurocomputing and as an Area Chair of NeurIPS 2022, ICLR 2022/2023 and AAAI 2022.
Join Zoom Meeting:
https://cuhk.zoom.us/j/91408751707
Enquiries: Ms. Karen Chan at Tel. 3943 8439
September 2022
23 September
10:30 am - 11:30 am
Out-of-Distribution Generalization: Progress and Challenges
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. Li Zhenguo
Director, AI Theory Lab
Huawei Noah’s Ark Lab, Hong Kong
Abstract:
Noah’s Ark Lab is the AI research center of Huawei, with the mission of making significant contributions to both the company and society through innovation in artificial intelligence (AI), data mining and related fields. Our AI theory team focuses on fundamental research in machine learning, including cutting-edge theories and algorithms such as out-of-distribution (OoD) generalization and controllable generative modeling, and disruptive applications such as self-driving. In this talk, we will present some of our progress in out-of-distribution generalization, including OoD-learnable theories and model selection, understanding and quantification of OoD properties of various benchmark datasets, and related applications. We will also highlight some key challenges for future studies.
Biography:
Zhenguo Li is currently the director of the AI Theory Lab in Huawei Noah’s Ark Lab, Hong Kong. Before joining Huawei Noah’s Ark Lab, he was an associate research scientist in the Department of Electrical Engineering, Columbia University, working with Prof. Shih-Fu Chang. He received his BS and MS degrees in mathematics from Peking University, and his PhD degree in machine learning from The Chinese University of Hong Kong, advised by Prof. Xiaoou Tang. His current research interests include machine learning and its applications.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
15 September
5:00 pm - 6:30 pm
Innovative Robotic Systems and Their Applications to Agile Locomotion and Surgery
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Au, Kwok Wai Samuel
Professor, Department of Mechanical and Automation Engineering, CUHK
Professor, Department of Surgery, CUHK
Co-Director, Chow Yuk Ho Technology Centre for Innovative Medicine, CUHK
Director, Multiscale Medical Robotic Center, InnoHK
Abstract:
Over the past decades, a wide range of bio-inspired legged robots have been developed that can run, jump, and climb over a variety of challenging surfaces. However, in terms of maneuverability they still lag far behind animals. Animals can effectively use their mechanical bodies and external appendages (such as tails) to achieve spectacular maneuverability, energy-efficient locomotion, and robust stabilization against large perturbations, which are not easily attained in existing legged robots. In this talk, we will present our efforts on the development of innovative legged robots with greater mobility, efficiency, and robustness, comparable to their biological counterparts. We will discuss the fundamental challenges in legged robots and demonstrate the feasibility of developing such agile systems. We believe our solutions could potentially lead to more efficient legged robot designs and give legged robots animal-like mobility and robustness. Furthermore, we will also present our robotic development in the surgery domain and show how these technologies can be integrated with legged robots to create novel teleoperated legged mobile manipulators for service and construction applications.
Biography:
Dr. Kwok Wai Samuel Au is currently a Professor of the Department of Mechanical and Automation Engineering and Department of Surgery (by courtesy) at CUHK, and the Founding Director of the Multiscale Medical Robotics Center, InnoHK. In September 2019, Dr. Au founded Cornerstone Robotics and has been serving as the president of the company, aiming to create affordable surgical robotic solutions. Dr. Au received his B.Eng. and M.Phil. degrees in Mechanical and Automation Engineering from CUHK in 1997 and 1999, respectively, and completed his Ph.D. degree in Mechanical Engineering at MIT in 2007. During his PhD study, Prof. Hugh Herr, Dr. Au, and other colleagues from the MIT Biomechatronics group co-invented the MIT Powered Ankle-foot Prosthesis.
Before joining CUHK in 2016, he was the manager of Systems Analysis of the New Product Development Department at Intuitive Surgical, Inc. At Intuitive Surgical, he co-invented and led the software and control algorithm development for the FDA-cleared da Vinci Si Single-Site surgical platform (2012), Single-Site Wristed Needle Driver (2014), and da Vinci Xi Single-Site surgical platform (2016). He was also a founding team member for the early development of Intuitive Surgical’s FDA-cleared robot-assisted catheter system, the da Vinci ION system, from 2008 to 2012.
Dr. Au has co-authored over 60 peer-reviewed journal and conference papers and holds 17 granted US/EP patents and 3 pending US patents. He has won numerous awards including first prize in the American Society of Mechanical Engineers (ASME) Student Mechanism Design Competition in 2007, the Intuitive Surgical Problem Solving Award in 2010, and the Intuitive Surgical Inventor Award in 2011.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
April 2022
11 April
2:00 pm - 3:00 pm
Game-Theoretic Interactions: Unifying Attribution, Robustness, Generalization, Visual Concepts, and Aesthetics
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Dr. Quanshi Zhang
Abstract:
The interpretability of deep neural networks has received increasing attention in recent years, and diverse methods of explainable AI (XAI) have been developed. Currently, most XAI methods are designed in an experimental manner without solid theoretical foundations, or simply fit explanation results to people’s cognition instead of objectively reflecting the true knowledge in the DNN. This lack of theoretical support has hampered the development of XAI. Therefore, in this talk, Dr. Quanshi Zhang will review several studies of explainable AI theories from his research group in recent years, which use the system of game-theoretic interactions to explain attribution, adversarial robustness, model generalization, visual concepts learned by the DNN, and the aesthetic level of images.
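The game-theoretic interactions referenced here are typically built on Shapley values: the interaction between features i and j measures how much including them jointly changes a model's output beyond their individual contributions. Below is a hedged Monte-Carlo sketch of a pairwise interaction index for a black-box value function v over feature subsets; the exact definitions and estimators in Dr. Zhang's papers differ in detail.

```python
import random

def interaction(v, n, i, j, samples=1000, seed=0):
    """Monte-Carlo estimate of the pairwise interaction between players i, j
    for a set function v: frozenset -> float over players 0..n-1."""
    rng = random.Random(seed)
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for _ in range(samples):
        # Random context S drawn from the remaining players
        S = frozenset(k for k in others if rng.random() < 0.5)
        total += v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
    return total / samples

# Toy value function with a built-in synergy between players 0 and 1
v = lambda S: len(S) + (2.0 if {0, 1} <= S else 0.0)
print(interaction(v, n=5, i=0, j=1))  # ~2.0: the synergy is recovered
```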
Biography:
Dr. Quanshi Zhang is an associate professor at Shanghai Jiao Tong University, China. He received his Ph.D. degree from the University of Tokyo in 2014. From 2014 to 2018, he was a post-doctoral researcher at the University of California, Los Angeles. His research interests are mainly machine learning and computer vision. In particular, he has conducted influential research in explainable AI (XAI) and received the ACM China Rising Star Award. He was a co-chair of the workshops towards XAI at ICML 2021, AAAI 2019, and CVPR 2019, and the speaker of tutorials on XAI at IJCAI 2020 and IJCAI 2021.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98782922295
Enquiries: Ms. Karen Chan at Tel. 3943 8439
March 2022
29 March
10:00 am - 11:00 am
Towards efficient NLP models
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Dr. Zichao Yang
Abstract:
In recent years, advances in deep learning for NLP research have been propelled mainly by massive computation and large amounts of data. Despite the progress, those giant models still rely on in-domain data to work well in downstream tasks, which is hard and costly to obtain in practice. In this talk, I am going to discuss my research efforts towards overcoming the challenge of learning with limited supervision by designing efficient NLP models. My research spans three directions towards this goal: designing structured neural network models that follow NLP data structures to take full advantage of labeled data, effective unsupervised models to alleviate the dependency on labeled corpora, and data augmentation strategies that create large amounts of labeled data at almost no cost.
Biography:
Zichao Yang is currently a research scientist at Bytedance. Before that, he obtained his Ph.D. from CMU, working with Eric Xing, Alex Smola and Taylor Berg-Kirkpatrick. His research interests lie in machine learning and deep learning with applications in NLP. He has published dozens of papers in top AI/ML conferences. He obtained his MPhil degree from CUHK and his bachelor's degree from Shanghai Jiao Tong University. Before joining Bytedance, he worked at Citadel Securities as a quantitative researcher, specializing in ML research for financial data. He also interned at Google DeepMind, Google Brain and Microsoft Research during his PhD.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94185450343
Enquiries: Ms. Karen Chan at Tel. 3943 8439
24 March
2:00 pm - 3:00 pm
How will Deep Learning Change Internet Video Delivery?
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Prof. HAN Dongsu
Abstract:
Internet video has experienced tremendous growth over the last few decades and is still growing at a rapid pace. Internet video now accounts for 73% of Internet traffic and is expected to quadruple in the next five years. Augmented reality and virtual reality streaming, projected to increase twentyfold in five years, will also accelerate this trend.
In this talk, I will argue that advances in deep neural networks present new opportunities that can fundamentally change Internet video delivery. In particular, deep neural networks allow the content delivery network to easily capture the content of the video and thus enable content-aware video delivery. To demonstrate this, I will present NAS, a new Internet video delivery framework that integrates deep neural network based quality enhancements with adaptive streaming.
NAS incorporates a super-resolution deep neural network (DNN) and a deep reinforcement learning network to optimize the user quality of experience (QoE). It outperforms the current state of the art, dramatically improving visual quality. It improves the average QoE by 43.08% using the same bandwidth budget, or saves 17.13% of bandwidth while providing the same user QoE.
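For context, QoE in this line of work is usually a standard adaptive-bitrate objective: total per-chunk quality minus penalties for rebuffering time and abrupt quality switches. A minimal sketch with illustrative coefficients (not the exact metric used in NAS):

```python
def qoe(qualities, rebuffer_times, mu=4.3, tau=1.0):
    """Session QoE = sum of per-chunk quality - rebuffering penalty
    - smoothness penalty for quality switches between adjacent chunks."""
    quality = sum(qualities)
    rebuffer = mu * sum(rebuffer_times)
    smoothness = tau * sum(abs(a - b) for a, b in zip(qualities, qualities[1:]))
    return quality - rebuffer - smoothness

print(qoe(qualities=[3, 3, 4, 4], rebuffer_times=[0.0, 0.5, 0.0, 0.0]))  # 10.85
```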
Finally, I will talk about our recent research progress in supporting live video and mobile devices in AI-assisted video delivery, which demonstrates the possibility of new designs that tightly integrate deep learning into Internet video streaming.
Biography:
Dongsu Han (Member, IEEE) is currently an Associate Professor with the School of Electrical Engineering at KAIST. He received the B.S. degree in computer science from KAIST in 2003 and the Ph.D. degree in computer science from Carnegie Mellon University in 2012. His research interests include networking, distributed systems, and network/system security. He has received a Best Paper Award and a Community Award from USENIX NSDI. More details about his research can be found at http://ina.kaist.ac.kr.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93072774638
Enquiries: Ms. Karen Chan at Tel. 3943 8439
23 March
10:30 am - 11:30 am
Towards Predictable and Efficient Datacenter Storage
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Dr. Huaicheng Li
Abstract:
The increasing complexity of storage software and hardware brings new challenges for achieving predictable performance and efficiency. On the one hand, emerging hardware breaks long-held system design principles and is held back by aged and inflexible system interfaces and usage models, requiring radical rethinking of the software stack to leverage new hardware capabilities for optimal performance. On the other hand, the computing landscape is becoming increasingly heterogeneous and complex, demanding explicit systems-level support to manage hardware-associated complexity and idiosyncrasy, which is unfortunately still largely missing.
In this talk, I will discuss my efforts to build low-latency and cost-efficient datacenter storage systems. By revisiting existing storage interface/abstraction designs and software/hardware responsibility divisions, I will present holistic storage stack designs for cloud datacenters, which deliver orders-of-magnitude latency improvements and significantly improved cost-efficiency.
Biography:
Huaicheng is a postdoc at CMU in the Parallel Data Lab (PDL). He received his Ph.D. from the University of Chicago. His interests are mainly in operating systems and storage systems, with a focus on building high-performance and cost-efficient storage infrastructure for datacenters. His research has been recognized by two best paper nominations at FAST (2017 and 2018) and has made real-world impact, with production deployment in datacenters, code integrated into Linux, and a storage research platform widely used by the research community.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95132173578
Enquiries: Ms. Karen Chan at Tel. 3943 8439
22 March
10:00 am - 11:00 am
Local vs Global Structures in Machine Learning Generalization
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Dr. Yaoqing Yang
Abstract:
Machine learning (ML) models are increasingly being deployed in safety-critical applications, making their generalization and reliability a problem of urgent societal importance. To date, our understanding of ML is still limited because (i) the narrow problem settings considered in studies and the (often) cherry-picked results lead to incomplete/conflicting conclusions on the failures of ML; (ii) focusing on low-dimensional intuitions results in a limited understanding of the global structure of ML problems. In this talk, I will present several recent results on “generalization metrics” to measure ML models. I will show that (i) generalization metrics such as the connectivity between local minima can quantify global structures of optimization loss landscapes, which can lead to more accurate predictions on test performance than existing metrics; (ii) carefully measuring and characterizing the different phases of loss landscape structures in ML can provide a more complete picture of generalization. Specifically, I show that different phases of learning require different ways to address failures in generalization. Furthermore, most conventional generalization metrics focus on the so-called generalization gap, which is indirect and of limited practical value. I will discuss novel metrics referred to as “shape metrics” that allow us to predict test accuracy directly instead of the generalization gap. I also show that one can use shape metrics to achieve improved compression and out-of-distribution robustness of ML models. I will discuss theoretical results and present large-scale empirical analyses for different quantity/quality of data, different model architectures, and different optimization hyperparameter settings to provide a comprehensive picture of generalization. I will also discuss practical applications of utilizing these generalization metrics to improve ML models’ training, efficiency, and robustness.
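As one concrete example of the connectivity metrics mentioned above, the loss barrier along a straight line between two trained weight vectors is a simple probe of how well two local minima are connected. A minimal numpy sketch, assuming a black-box `loss` over flat weight vectors (the metrics discussed in the talk are more elaborate):

```python
import numpy as np

def loss_barrier(loss, w1, w2, steps=21):
    """Height of the loss barrier along the straight line between two
    minima w1, w2, relative to the mean of the endpoint losses."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = [loss((1 - a) * w1 + a * w2) for a in alphas]
    return max(path) - 0.5 * (path[0] + path[-1])

# Toy example: the two minima of a double-well loss are poorly connected
loss = lambda w: float(np.sum((w ** 2 - 1.0) ** 2))
print(loss_barrier(loss, np.array([-1.0]), np.array([1.0])))  # ~1.0
```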
Biography:
Dr. Yaoqing Yang is a postdoctoral researcher at the RISE Lab at UC Berkeley. He received his PhD from Carnegie Mellon University and his B.S. from Tsinghua University, China. He is currently focusing on machine learning, and his main contributions are towards improving reliability and generalization in the face of uncertainty, both in the data and in the compute platform. His PhD thesis laid the foundation for an exciting field of research—coded computing—where information-theoretic techniques are developed to address unreliability in computing platforms. His work has been a best paper finalist at ICDCS and has been published multiple times in NeurIPS, CVPR, and IEEE Transactions on Information Theory. He has worked as a research intern at Microsoft, MERL and Bell Labs, and two of his joint CVPR papers with MERL have each received more than 300 citations. He is also the recipient of the 2015 John and Claire Bertucci Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99128234597
Enquiries: Ms. Karen Chan at Tel. 3943 8439
17 March
10:00 am - 11:00 am
Scalable and Multiagent Deep Learning
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Mr. Guodong Zhang
Abstract:
Deep learning has achieved huge successes over the last few years, largely due to three important ideas: deep models with residual connections, parallelism, and gradient-based learning. However, it was shown that (1) deep ResNets behave like ensembles of shallow networks; (2) naively increasing the scale of data parallelism leads to diminishing returns; (3) gradient-based learning could converge to spurious fixed points in the multiagent setting.
In this talk, I will present some of my works on understanding and addressing these issues. First, I will give a general recipe for training very deep neural networks without shortcuts. Second, I will present a noisy quadratic model for neural network optimization, which qualitatively predicts scaling properties of a variety of optimizers and in particular suggests that second-order algorithms would benefit more from data parallelism. Third, I will describe a novel algorithm that finds desired equilibria and saves us from converging to spurious fixed points in multi-agent games. In the end, I will conclude with future directions towards building intelligent machines that can learn from experience efficiently and reason about their own decisions.
Biography:
Guodong Zhang is a PhD candidate in the machine learning group at the University of Toronto, advised by Roger Grosse. His research lies at the intersection between machine learning, optimization, and Bayesian statistics. In particular, his research focuses on understanding and improving algorithms for optimization, Bayesian inference, and multi-agent games in the context of deep learning. He has been recognized through the Apple PhD fellowship, the Borealis AI fellowship, and many other scholarships. In the past, he has also spent time at the Institute for Advanced Study in Princeton and industry research labs (including DeepMind, Google Brain, and Microsoft Research).
Join Zoom Meeting:
https://cuhk.zoom.us/j/95830950658
Enquiries: Ms. Karen Chan at Tel. 3943 8439
15 March
10:00 am - 11:00 am
Active Learning for Software Rejuvenation
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Ms. Jiasi Shen
Abstract:
Software now plays a central role in numerous aspects of human society. Current software development practices involve significant developer effort in all phases of the software life cycle, including the development of new software, detection and elimination of defects and security vulnerabilities in existing software, maintenance of legacy software, and integration of existing software into more contexts, with the quality of the resulting software still leaving much to be desired. The goal of my research is to improve software quality and reduce costs by automating tasks that currently require substantial manual engineering effort.
I present a novel approach for automatic software rejuvenation, which takes an existing program, learns its core functionality as a black box, builds a model that captures this functionality, and uses the model to generate a new program. The new program delivers the same core functionality but is potentially augmented or transformed to operate successfully in different environments. This research enables the rejuvenation and retargeting of existing software and provides a powerful way for developers to express program functionality that adapts flexibly to a variety of contexts. In this talk, I will show how we applied these techniques to two classes of software systems, specifically database-backed programs and stream-processing computations, and discuss the broader implications of these approaches.
Biography:
Jiasi Shen is a Ph.D. candidate at MIT EECS advised by Professor Martin Rinard. She received her bachelor’s degree from Peking University. Her main research interests are in programming languages and software engineering. She was named an EECS Rising Star in 2020.
Join Zoom Meeting:
https://cuhk.zoom.us/j/91743099396
Enquiries: Ms. Karen Chan at Tel. 3943 8439
14 March
10:00 am - 11:00 am
Rethinking Efficiency and Security Challenges in Accelerated Machine Learning Services
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Prof. Wen Wujie
Abstract:
Thanks to recent model innovation and hardware advancement, machine learning (ML) has achieved extraordinary success in many fields, ranging from daily image classification and object detection to security-sensitive biometric authentication and autonomous vehicles. To facilitate fast and secure end-to-end machine learning services, extensive studies have been conducted on ML hardware acceleration and data- or model-incurred adversarial attacks. Different from these existing efforts, in this talk, we will present a new understanding of the efficiency and security challenges in accelerated ML services. The talk starts with the development of the first “machine vision” (NOT “human vision”) guided image compression framework tailored for fast cloud-based machine learning services with guaranteed accuracy, inspired by an insightful understanding of the difference between machine learning (or “machine vision”) and human vision in image perception. Then we will discuss “StegoNet”, a new breed of stegomalware that takes advantage of a machine learning service as a stealthy channel to conceal malicious intent (malware). Unlike existing attacks focusing only on misleading ML outcomes, “StegoNet” for the first time achieves far more diversified adversarial goals without compromising ML service quality. Our research prospects will also be given at the end of this talk, offering the audience an alternative way of thinking about developing efficient and secure machine learning services.
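The "machine vision guided" idea can be illustrated with a toy strategy: keep lowering the JPEG quality factor as long as the downstream classifier's prediction on the recompressed image is unchanged. This is only a sketch of the principle, not the framework from the talk; `predict` is a hypothetical stand-in for any image classifier.

```python
import io
from PIL import Image

def compress_for_machine(img: Image.Image, predict, min_quality=10):
    """Return the smallest JPEG encoding whose classifier prediction
    matches the prediction on the original image (None if even q=95 changes it)."""
    reference = predict(img)
    best = None
    for q in range(95, min_quality - 1, -5):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        decoded = Image.open(io.BytesIO(buf.getvalue()))
        if predict(decoded) != reference:
            break                 # "machine vision" quality no longer preserved
        best = buf.getvalue()     # keep shrinking while the prediction holds
    return best
```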
Biography:
Wujie Wen is an assistant professor in the Department of Electrical and Computer Engineering at Lehigh University. He received his Ph.D. from the University of Pittsburgh in 2015. He earned his B.S. and M.S. degrees in electronic engineering from Beijing Jiaotong University and Tsinghua University, Beijing, China, in 2006 and 2010, respectively. He was an assistant professor in the ECE department of Florida International University, Miami, FL, during 2015-2019. Before joining academia, he also worked with AMD and Broadcom in various engineering and intern positions. His research interests include reliable and secure deep learning, energy-efficient computing, electronic design automation and emerging memory systems design. His works have been published widely across venues in design automation, security, and machine learning/AI, including HPCA, DAC, ICCAD, DATE, ICPP, HOST, ACSAC, CVPR, ECCV and AAAI. He received best paper nominations from ASP-DAC 2018, ICCAD 2018, DATE 2016 and DAC 2014. Dr. Wen served as the General Chair of ISVLSI 2019 (Miami) and Technical Program Chair of ISVLSI 2018 (Hong Kong), as well as on the program committees of many conferences such as DAC, ICCAD and DATE. He is an associate editor of Neurocomputing and IEEE Circuits and Systems (CAS) Magazine. His research projects are currently sponsored by the US National Science Foundation, the Air Force Research Laboratory and the Florida Center for Cybersecurity.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98308617940
Enquiries: Ms. Karen Chan at Tel. 3943 8439
11 March
2:00 pm - 3:00 pm
Artificial Intelligence in Health: from Methodology Development to Biomedical Applications
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Prof. LI Yu
Abstract:
In this talk, I will give an overview of the research in our group. Essentially, we are developing new machine learning methods to resolve problems in computational biology and health informatics, from sequence analysis, biomolecular structure prediction, and functional annotation to disease modeling, drug discovery, drug effect prediction, and combating antimicrobial resistance. We will show how to formulate problems in the biology and health fields as machine learning problems, how to resolve them using cutting-edge machine learning techniques, and how the results could benefit biology and healthcare in return.
Biography:
Yu Li is an Assistant Professor in the Department of Computer Science and Engineering at CUHK. His main research interest is to develop novel machine learning methods, mainly deep learning methods, for solving computational problems in healthcare and biology, understanding the principles behind the bio-world, and eventually improving people’s health and wellness. He obtained his PhD in computer science from KAUST in Saudi Arabia in October 2020, and his MS degree in computer science from KAUST in 2016. Before that, he obtained his bachelor's degree in Biosciences from the University of Science and Technology of China (USTC).
Join Zoom Meeting:
https://cuhk.zoom.us/j/98928672713
Enquiries: Ms. Karen Chan at Tel. 3943 8439
January 2022
27 January
10:30 am - 11:30 am
Deploying AI at Scale in Hong Kong Hospital Authority (HA)
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Mr. Dennis Lee
Abstract:
With ever-increasing demand and an aging population, it is envisioned that the adoption of AI technology will help the Hospital Authority tackle various strategic service challenges and deliver improvements. HA set up its AI Strategy Framework two years ago and began establishing the processes and infrastructure to support AI development and delivery. The establishment of the AI Lab and AI delivery center aims to foster AI innovation by engaging internal and external collaborators in Proof of Concept development, and to build data and integration pipelines to validate AI solutions and integrate them into HA services at scale.
By leveraging three platforms to (1) improve awareness among HA staff, (2) match AI supply and demand, and (3) provide a data pipeline for timely prediction, we can gradually scale AI innovations and solutions in the Hospital Authority. Over the past year, many clinical and non-clinical Proofs of Concept have been developed and validated. The AI Chest X-ray pilot project has been implemented for General Outpatient Clinics and the Emergency Department with the aim of reducing report turnaround time and providing decision support for abnormal chest X-ray imaging.
Biography:
Mr. Dennis Lee currently serves as the Senior System Manager for Artificial Intelligence Systems of the Hong Kong Hospital Authority. His current work involves developing the Artificial Intelligence and Big Data Platform to streamline data acquisition for HA data analysis via Business Intelligence, developing Hospital Command Center dashboards, and deploying Artificial Intelligence solutions. Mr. Lee has also led the Corporate Project Management Office and served as program manager for several large-scale system implementations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95162965909
Enquiries: Ms. Karen Chan at Tel. 3943 8439
19 January
11:00 am - 12:00 pm
Strengthening and Enriching Machine Learning for Cybersecurity
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Mr. Wenbo Guo
Abstract:
Nowadays, security researchers are increasingly using AI to automate and facilitate security analysis. Although it has made some meaningful progress, AI has not yet maximized its capability in security, due to two challenges. First, existing ML techniques have not reached security professionals’ requirements in critical properties, such as interpretability and adversary resistance. Second, security data imposes many new technical challenges, which break the assumptions of existing ML models and thus jeopardize their efficacy.
In this talk, I will describe my research efforts to address the above challenges, with a primary focus on strengthening the interpretability of blackbox deep learning models and deep reinforcement learning policies. Regarding deep neural networks, I will describe an explanation method for deep learning-based security applications and demonstrate how security analysts could benefit from this method to establish trust in blackbox models and conduct efficient fine-tuning. As for DRL policies, I will introduce a novel approach to extracting the critical states/actions of a DRL agent and show how to utilize these explanations to scrutinize policy weaknesses, remediate policy errors, and even defend against adversarial attacks. Finally, I will conclude by highlighting my future plans towards strengthening the trustworthiness of advanced ML techniques and maximizing their capability in cyber defense.
Biography:
Wenbo Guo is a Ph.D. candidate at Penn State, advised by Professor Xinyu Xing. His research interests are machine learning and cybersecurity. His work includes strengthening the fundamental properties of machine learning models and designing customized machine learning models to handle security-unique challenges. He is a recipient of the IBM Ph.D. Fellowship (2020-2022), a Facebook/Baidu Ph.D. Fellowship finalist (2020), and an ACM CCS Outstanding Paper Award (2018). His research has been featured by multiple mainstream media outlets and has appeared in a diverse set of top-tier venues in security, machine learning, and data mining. Going beyond academic research, he also actively participates in many world-class cybersecurity competitions and won the 2018 DEFCON/GeekPwn AI challenge finalist award.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95859338221
Enquiries: Ms. Karen Chan at Tel. 3943 8439
December 2021
22 December
1:30 pm - 2:30 pm
Meta-programming: Optimising Designs for Multiple Hardware Platforms
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. Wayne Luk
Abstract:
This talk describes recent research on meta-programming techniques for mapping high-level descriptions to multiple hardware platforms. The purpose is to enhance design productivity and maintainability. Our approach is based on decoupling functional concerns from optimisation concerns, allowing separate descriptions to be independently maintained by two types of experts: application experts focus on algorithmic behaviour, while platform experts focus on the mapping process. Our approach supports customisable optimisations to rapidly capture a wide range of mapping strategies targeting multiple hardware platforms, and reusable strategies to allow optimisations to be described once and applied to multiple applications. Examples will be provided to illustrate how the proposed approach can map a single high-level program into multi-core processors and reconfigurable hardware platforms.
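In spirit, the decoupling can be pictured in a few lines of Python: the application expert writes one platform-neutral description, and platform experts maintain separate, reusable mapping strategies. This is an illustrative analogy only; the actual work operates on hardware design flows rather than Python lists.

```python
# Functional concern: a platform-neutral description of the computation,
# written once by the application expert.
program = [("load", "x"), ("map", "square"), ("reduce", "sum")]

# Optimisation concern: per-platform mapping strategies, maintained
# separately by platform experts and reusable across applications.
def cpu_strategy(ops):
    return [("parallel_for",) + op if op[0] == "map" else op for op in ops]

def fpga_strategy(ops):
    return [("pipeline",) + op if op[0] in ("map", "reduce") else op for op in ops]

STRATEGIES = {"cpu": cpu_strategy, "fpga": fpga_strategy}

def compile_for(program, platform):
    """Apply a platform expert's mapping strategy to the unchanged
    functional description."""
    return STRATEGIES[platform](program)

print(compile_for(program, "cpu"))
print(compile_for(program, "fpga"))
```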
Biography:
Wayne Luk is Professor of Computer Engineering with Imperial College London and the Director of the EPSRC Centre for doctoral training in High Performance Embedded and Distributed Systems. His research focuses on theory and practice of customizing hardware and software for specific application domains, such as computational finance, climate modelling, and genomic data analysis. He is a fellow of the Royal Academy of Engineering, IEEE, and BCS.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
02 December
2:00 pm - 3:00 pm
Network Stack in the Cloud
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. XU Hong
Abstract:
As cloud computing becomes ubiquitous, the network stack in this virtualized environment is becoming a focal point of research with unique challenges and opportunities. In this talk, I will introduce our efforts in this space.
First, from an architectural perspective, the network stack remains a part of the guest OS inside a VM in the cloud. I will argue that this legacy architecture is becoming a barrier to innovation and evolution. The tight coupling between the network stack and the guest OS causes many deployment troubles for tenants, and management and efficiency problems for the cloud provider. I will present our vision of providing the network stack as a service as a way to address these issues. The idea is to decouple the network stack from the guest OS, and offer it as an independent entity implemented by the cloud provider. I will discuss the design and evaluation of a concrete framework called NetKernel that enables this vision. Then in the second part, I will focus on container communication, which is a common scenario in the cloud. I will present a new system called PipeDevice that adopts a hardware-software co-design approach to enable low-overhead intra-host container communication using commodity FPGAs.
Biography:
Hong Xu is an Associate Professor in Department of Computer Science and Engineering, The Chinese University of Hong Kong. His research area is computer networking and systems, particularly big data systems and data center networks. From 2013 to 2020 he was with City University of Hong Kong. He received his B.Eng. from The Chinese University of Hong Kong in 2007, and his M.A.Sc. and Ph.D. from University of Toronto in 2009 and 2013, respectively. He was the recipient of an Early Career Scheme Grant from the Hong Kong Research Grants Council in 2014. He received three best paper awards, including the IEEE ICNP 2015 best paper award. He is a senior member of both IEEE and ACM.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
November 2021
25 November
2:00 pm - 3:00 pm
Domain-Specific Network Optimization for Distributed Deep Learning
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Prof. Kai Chen
Associate Professor
Department of Computer Science & Engineering, HKUST
Abstract:
Communication overhead poses a significant challenge to distributed DNN training. In this talk, I will overview existing efforts toward this challenge, examine their advantages and shortcomings, and present a novel solution that exploits the domain-specific characteristics of deep learning to optimize the communication overhead of distributed DNN training in a fine-grained manner. Our solution consists of several key innovations beyond prior work, including bounded-loss-tolerant transmission, gradient-aware flow scheduling, and order-free per-packet load balancing, delivering up to 84.3% training acceleration over the best existing solutions. Our proposal by no means provides an ultimate answer to this research problem; instead, we hope it can inspire more critical thinking on the intersection between networking and AI.
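One widely used example of exploiting deep learning's tolerance to bounded loss in this literature is top-k gradient sparsification: only the largest-magnitude gradient entries are transmitted each step. The sketch below shows the generic technique, not the specific transmission scheme of the presented solution.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient tensor;
    returns (flat indices, values) for transmission."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def reassemble(idx, vals, shape):
    """Receiver-side lossy reconstruction: zeros everywhere else."""
    out = np.zeros(np.prod(shape))
    out[idx] = vals
    return out.reshape(shape)

g = np.random.randn(4, 4)
idx, vals = topk_sparsify(g, k=4)        # transmit only 25% of the entries
g_hat = reassemble(idx, vals, g.shape)   # bounded-loss approximation of g
```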
Biography:
Kai Chen is an Associate Professor at HKUST, the Director of the Intelligent Networking Systems Lab (iSING Lab) and the HKUST-WeChat joint Lab on Artificial Intelligence Technology (WHAT Lab), as well as the PC for an RGC Theme-based Project. He received his BS and MS from the University of Science and Technology of China in 2004 and 2007, and his PhD from Northwestern University in 2012. His research interests include Data Center Networking, Cloud Computing, Machine Learning Systems, and Privacy-preserving Computing. His work has been published in various top venues such as SIGCOMM, NSDI and TON, including a SIGCOMM best paper candidate. He is the Steering Committee Chair of APNet, serves on the Program Committees of SIGCOMM, NSDI, INFOCOM, etc., and on the Editorial Boards of IEEE/ACM Transactions on Networking, Big Data, and Cloud Computing.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98448863119?pwd=QUJVdzgvU1dnakJkM29ON21Eem9ZZz09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
24 November
2:00 pm - 3:00 pm
Integration of First-order Logic and Deep Learning
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Prof. Sinno Jialin Pan
Provost’s Chair Associate Professor
School of Computer Science and Engineering
Nanyang Technological University
Abstract:
How to develop a loop that integrates existing knowledge to facilitate deep learning inference and then refines knowledge from the learning process is a crucial research problem. As first-order logic has been proven to be a powerful tool for knowledge representation and reasoning, interest in integrating first-order logic into deep learning models has grown rapidly in recent years. In this talk, I will introduce our attempts to develop a unified integration framework for first-order logic and deep learning, with applications to various joint inference tasks in NLP.
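To give a flavour of such integration, one common relaxation in the literature turns a rule like ∀x. A(x) → B(x) into a differentiable penalty on the network's predicted probabilities, so the rule can be enforced during training. A sketch using the product t-norm (one of several relaxations; not necessarily the framework presented in the talk):

```python
import numpy as np

def implication_loss(p_a, p_b):
    """Soft penalty for violating the rule A(x) -> B(x), using the
    product t-norm relaxation: truth(A -> B) = min(1, p_b / p_a).
    Loss is -log truth, summed over instances."""
    truth = np.minimum(1.0, p_b / np.maximum(p_a, 1e-8))
    return float(-np.log(np.maximum(truth, 1e-8)).sum())

# Predicted probabilities for predicates A and B on three instances:
p_a = np.array([0.9, 0.2, 0.8])
p_b = np.array([0.95, 0.9, 0.3])   # the third instance violates A -> B
print(implication_loss(p_a, p_b))  # dominated by the violating instance
```

Adding such a term to the task loss pushes the model toward predictions consistent with the rule, which is one simple way the "existing knowledge facilitates inference" half of the loop can be realised.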
Biography:
Dr. Sinno Jialin Pan is a Provost’s Chair Associate Professor with the School of Computer Science and Engineering at Nanyang Technological University (NTU) in Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head with the Data Analytics Department at Institute for Infocomm Research in Singapore. He joined NTU as a Nanyang Assistant Professor in 2014. He was named to the list of “AI 10 to Watch” by the IEEE Intelligent Systems magazine in 2018. He serves as an Associate Editor for IEEE TPAMI, AIJ, and ACM TIST. His research interests include transfer learning and its real-world applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97292230556?pwd=MDVrREkrWnFEMlF6aFRDQzJxQVlFUT09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
18 November
9:15 am - 10:15 am
Smart Sensing and Perception in the AI Era
Location
Zoom
Category
Seminar Series 2021/2022
Speaker:
Dr. Jinwei Gu
R&D Executive Director
SenseBrain (aka SenseTime USA)
Abstract:
Smart sensing and perception refer to intelligent and efficient ways of measuring, modeling, and understanding the physical world, acting as the eyes and ears of any AI-based system. Smart sensing and perception sit at the intersection of three related areas – computational imaging, representation learning, and scene understanding. Computational imaging refers to sensing the real world with optimally designed, task-specific, multi-modality sensors and optics that actively probe key visual information. Representation learning refers to learning the transformation from sensors’ raw output to some manifold embedding or feature space for further processing. Scene understanding includes both the low-level capture of a 3D scene and its physical properties, and high-level semantic perception and understanding of the scene. Advances in this area will not only benefit computer vision tasks but also result in better hardware, such as AI image sensors, AI ISP (Image Signal Processing) chips, and AI camera systems. In this talk, I will present several recent research results, including high-quality image restoration and accurate depth estimation from time-of-flight sensors or monocular videos, as well as some of the latest computational photography products in smartphones, including under-display cameras, AI image sensors and AI ISP chips. I will also lay out several open challenges and future research directions in this area.
Biography:
Jinwei Gu is the R&D Executive Director of SenseBrain (aka SenseTime USA). His current research focuses on low-level computer vision, computational photography, computational imaging, smart visual sensing and perception, and appearance modeling. He obtained his Ph.D. degree in 2010 from Columbia University, and his B.S. and M.S. from Tsinghua University in 2002 and 2005, respectively. Before joining SenseTime, he was a senior research scientist at NVIDIA Research from 2015 to 2018. Prior to that, he was an assistant professor at the Rochester Institute of Technology from 2010 to 2013, and a senior researcher in the media lab of Futurewei Technologies from 2013 to 2015. He serves as an associate editor for IEEE Transactions on Computational Imaging (TCI) and IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), an area chair for ICCV 2019, ECCV 2020, and CVPR 2021, and an industry chair for ICCP 2020. He has been an IEEE Senior Member since 2018. His research work has been successfully transferred to many products, such as the NVIDIA CoPilot SDK and DriveIX SDK, as well as super-resolution, super-night, portrait-restoration, and RGBW solutions that are widely used in many flagship mobile phones.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97322964334?pwd=cGRJdUx1bkxFaENJKzVwcHdQQm5sZz09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
04 November
4:00 pm - 5:00 pm
The Role of AI for Next-generation Robotic Surgery
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. DOU Qi
Abstract:
With advancements in information technology and medicine, the operating room has undergone tremendous transformations, evolving into a highly complicated environment. These achievements further innovate surgical procedures and hold great promise for enhancing patient safety. Within the new generation of operating theatre, computer-assisted systems play an important role in providing surgeons with reliable contextual support. In this talk, I will present a series of deep learning methods for interdisciplinary research in artificial intelligence for surgical robotic perception, covering automated surgical workflow analysis, instrument presence detection, surgical tool segmentation, surgical scene perception, etc. The proposed methods cover a wide range of deep learning topics including semi-supervised learning, relational graph learning, learning-based stereo depth estimation, reinforcement learning, etc. The challenges, up-to-date progress, and promising future directions of AI-powered context-aware operating theatres will also be discussed.
Biography:
Prof. DOU Qi is an Assistant Professor with the Department of Computer Science & Engineering, CUHK. Her research interests lie in innovating collaborative intelligent systems that support delivery of high-quality medical diagnosis, intervention and education for next-generation healthcare. Her team pioneers synergistic advancements across artificial intelligence, medical image analysis, surgical data science, and medical robotics, with an impact to support demanding clinical workflows such as robotic minimally invasive surgery.
Enquiries: Miss Karen Chan at Tel. 3943 8439
October 2021
29 October
2:00 pm - 3:00 pm
The Coming of Age of Microfluidic Biochips: Connecting Biochemistry to Electronic Design Automation
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. HO Tsung Yi
Abstract:
Advances in microfluidic technologies have led to the emergence of biochip devices for automating laboratory procedures in biochemistry and molecular biology. Corresponding systems are revolutionizing a diverse range of applications, e.g., point-of-care clinical diagnostics, drug discovery, and DNA sequencing, with an increasing market. However, continued growth (and larger revenues resulting from technology adoption by pharmaceutical and healthcare companies) depends on advances in chip integration and design-automation tools. Thus, there is a need to deliver the same level of design-automation support to the biochip designer that the semiconductor industry now takes for granted. In particular, efficient design-automation algorithms are needed for implementing biochemistry protocols, to ensure that biochips are as versatile as the macro-labs that they are intended to replace. This talk will first describe technology platforms for accomplishing “biochemistry on a chip”, and introduce the audience to both droplet-based “digital” microfluidics based on electrowetting actuation and flow-based “continuous” microfluidics based on microvalve technology. Next, the presenter will describe system-level synthesis, which includes operation scheduling and resource-binding algorithms, and physical-level synthesis, which includes placement and routing optimizations. Moreover, control synthesis and sensor feedback-based cyber-physical adaptation will be presented. In this way, the audience will see how a “biochip compiler” can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor’s clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Finally, the present status and future challenges of the open-source microfluidic ecosystem will be covered.
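As a toy illustration of the system-level synthesis step mentioned above (my sketch, not the speaker's algorithms), the following list scheduler assigns ready biochemical operations to a limited pool of on-chip mixers in dependency order:

    from collections import deque

    def list_schedule(ops, deps, duration, num_mixers):
        """ops: list of op ids; deps: dict op -> set of prerequisite ops."""
        indeg = {o: len(deps.get(o, ())) for o in ops}
        ready = deque(o for o in ops if indeg[o] == 0)
        finish, time, running = {}, 0, []
        while ready or running:
            while ready and len(running) < num_mixers:  # bind free mixers
                o = ready.popleft()
                running.append((time + duration[o], o))
            time, o = min(running)                      # next op to finish
            running.remove((time, o))
            finish[o] = time
            for succ in ops:                            # release dependents
                if o in deps.get(succ, set()):
                    indeg[succ] -= 1
                    if indeg[succ] == 0:
                        ready.append(succ)
        return finish

    # Example: mixes m1 and m2 feed m3, with only one mixer available.
    print(list_schedule(["m1", "m2", "m3"],
                        {"m3": {"m1", "m2"}},
                        {"m1": 2, "m2": 3, "m3": 2}, num_mixers=1))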
Biography:
Tsung-Yi Ho received his Ph.D. in Electrical Engineering from National Taiwan University in 2005. His research interests include several areas of computing and emerging technologies, especially design automation of microfluidic biochips. He has been the recipient of the Invitational Fellowship of the Japan Society for the Promotion of Science (JSPS), the Humboldt Research Fellowship by the Alexander von Humboldt Foundation, the Hans Fischer Fellowship by the Institute of Advanced Study of the Technische Universität München, and the International Visiting Research Scholarship by the Peter Wall Institute of Advanced Study of the University of British Columbia. He was a recipient of the Best Paper Awards at the VLSI Test Symposium (VTS) in 2013 and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2015. He served as a Distinguished Visitor of the IEEE Computer Society for 2013-2015, a Distinguished Lecturer of the IEEE Circuits and Systems Society for 2016-2017, the Chair of the IEEE Computer Society Tainan Chapter for 2013-2015, and the Chair of the ACM SIGDA Taiwan Chapter for 2014-2015. Currently, he serves as the Program Director of both the EDA and AI Research Programs of the Ministry of Science and Technology in Taiwan, VP Technical Activities of IEEE CEDA, an ACM Distinguished Speaker, and Associate Editor of the ACM Journal on Emerging Technologies in Computing Systems, ACM Transactions on Design Automation of Electronic Systems, ACM Transactions on Embedded Computing Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and IEEE Transactions on Very Large Scale Integration Systems, Guest Editor of IEEE Design & Test of Computers, and on the Technical Program Committees of major conferences, including DAC, ICCAD, DATE, ASP-DAC, ISPD, ICCD, etc. He is a Distinguished Member of ACM.
Enquiries: Miss Karen Chan at Tel. 3943 8439
20 October
3:00 pm - 4:00 pm
Towards Understanding Generalization in Generative Adversarial Networks
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. FARNIA Farzan
Abstract:
Generative Adversarial Networks (GANs) represent a game between two machine players designed to learn the distribution of observed data.
Since their introduction in 2014, GANs have achieved state-of-the-art performance on a wide array of machine learning tasks. However, their success has been observed to heavily depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed of the underlying optimization algorithm. In this seminar, we focus on the generalization properties of GANs and present theoretical and numerical evidence that the minimax optimization algorithm also plays a key role in the successful generalization of the learned GAN model from training samples to unseen data. To this end, we analyze the generalization behavior of standard gradient-based minimax optimization algorithms through the lens of algorithmic stability. We leverage the algorithmic stability framework to compare the generalization performance of standard simultaneous-update and non-simultaneous-update gradient-based algorithms. Our theoretical analysis suggests the superiority of simultaneous-update algorithms in achieving a smaller generalization error for the trained GAN model.
Finally, we present numerical results demonstrating the role of simultaneous-update minimax optimization algorithms in the proper generalization of trained GAN models.
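For readers unfamiliar with the two update styles compared here, the toy sketch below (assumptions mine) contrasts simultaneous and non-simultaneous (alternating) gradient updates on the bilinear game min_x max_y f(x, y) = x*y:

    def simultaneous(x, y, lr=0.1, steps=5):
        for _ in range(steps):
            gx, gy = y, x                       # df/dx = y, df/dy = x
            x, y = x - lr * gx, y + lr * gy     # both players move together
        return x, y

    def alternating(x, y, lr=0.1, steps=5):
        for _ in range(steps):
            x = x - lr * y                      # minimizer moves first...
            y = y + lr * x                      # ...maximizer reacts to new x
        return x, y

    print(simultaneous(1.0, 1.0))   # simultaneous (GDA) updates
    print(alternating(1.0, 1.0))    # non-simultaneous updates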
Biography:
Farzan Farnia is an Assistant Professor of Computer Science and Engineering at The Chinese University of Hong Kong. Prior to joining CUHK, he was a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, from 2019 to 2021. He received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests span statistical learning theory, information theory, and convex optimization. He has been the recipient of the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.
Enquiries: Miss Karen Chan at Tel. 3943 8439
07 October
2:30 pm - 3:30 pm
Complexity of Testing and Learning of Markov Chains
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. CHAN Siu On
Assistant Professor
Department of Computer Science and Engineering, CUHK
Abstract:
This talk will summarize my works in two unrelated areas in complexity theory: distributional learning and extended formulation.
(1) Distributional Learning: Much of the work on distributional learning assumes the input samples are identically and independently distributed. A few recent works relax this assumption and instead assume the samples to be drawn as a trajectory from a Markov chain. Previous works by Wolfer and Kontorovich suggested that learning and identity testing problems on ergodic chains can be reduced to the corresponding problems with i.i.d. samples. We show how to further reduce essentially every learning and identity testing problem on the (arguably most general) class of irreducible chains to the i.i.d. setting, by introducing the concept of k-cover time, a natural generalization of the usual notion of cover time.
The tight analysis of the sample complexity for reversible chains relies on a previous work by Ding-Lee-Peres. Their analysis relies on the so-called generalized second Ray-Knight isomorphism theorem, which connects the local time of a continuous-time reversible Markov chain to the Gaussian free field. It is natural to ask whether a similar analysis can be generalized to general chains. We will discuss our ongoing work towards this goal.
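As a concrete handle on the ordinary cover time that k-cover time generalizes, the following Monte Carlo sketch (my illustration, not the paper's analysis) estimates the expected number of steps a chain needs to visit every state:

    import numpy as np

    def cover_time(P, start=0, trials=2000, rng=np.random.default_rng(0)):
        """Estimate E[steps to visit all states] from transition matrix P."""
        n = len(P)
        total = 0
        for _ in range(trials):
            state, seen, steps = start, {start}, 0
            while len(seen) < n:
                state = rng.choice(n, p=P[state])   # one step of the chain
                seen.add(state)
                steps += 1
            total += steps
        return total / trials

    # Example: a lazy random walk on a 3-cycle.
    P = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
    print(cover_time(P))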
(2) Extended formulation: Extended formulation lower bounds aim to show that linear programs (or other convex programs) must be large to solve certain problems, such as constraint satisfaction. A natural open problem is whether refuting unsatisfiable 3-SAT instances requires linear programs of exponential size, and whether such a lower bound holds for every “downstream” NP-hard problem. I will discuss our ongoing work towards such extended formulation lower bounds, using techniques from resolution lower bounds.
Biography:
Siu On CHAN graduated from the Chinese University of Hong Kong. He got his MSc at the University of Toronto and PhD at UC Berkeley. He was a postdoc at Microsoft Research New England. He is now an Assistant Professor at the Chinese University of Hong Kong. He is interested in the complexity of constraint satisfaction and learning algorithms. He won a Best Paper Award and a Best Student Paper Award at STOC 2013.
Enquiries: Miss Karen Chan at Tel. 3943 8439
September 2021
30 September
9:00 am - 10:00 am
Efficient Computing of Deep Neural Networks
Location
ERB LT
Category
Seminar Series 2021/2022
Speaker:
Prof. YU Bei
Abstract:
Deep neural networks (DNNs) are currently widely used in many artificial intelligence (AI) applications with state-of-the-art accuracy, but they come at the cost of high computational complexity. Therefore, techniques that enable efficient computing of deep neural networks to improve key metrics (such as energy efficiency, throughput, and latency) without sacrificing accuracy are critical. This talk provides a structured treatment of the key principles and techniques for enabling efficient computing of DNNs, covering implementation-level, model-level, and compilation-level techniques.
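As one concrete instance of a model-level technique (an illustrative sketch, not an example taken from the talk), the snippet below applies symmetric post-training int8 quantization to a weight tensor:

    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0            # one scale per tensor
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)   # toy weight tensor
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())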
Biography:
Bei Yu is currently an Associate Professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong. He received his PhD degree in Electrical and Computer Engineering from the University of Texas at Austin in 2014. His current research interests include machine learning with applications in VLSI CAD and computer vision. He served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), has served on the program committees of DAC, ICCAD, DATE, ASPDAC, and ISPD, and serves on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES) and Integration, the VLSI Journal. He is Editor of the IEEE TCCPS Newsletter.
Prof. Yu received seven Best Paper Awards, from ASPDAC 2021 and 2012, ICTAI 2019, Integration, the VLSI Journal in 2018, ISPD 2017, the SPIE Advanced Lithography Conference 2016, and ICCAD 2013; six other Best Paper Award nominations (DATE 2021, ICCAD 2020, ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011); and six ICCAD/ISPD contest awards.
Enquiries: Miss Karen Chan at Tel. 3943 8439
24 September
2:00 pm - 3:00 pm
Some Recent Results in Database Theory by Yufei Tao and His Team
Location
Room 407
Category
Seminar Series 2021/2022
Speaker:
Prof. TAO Yufei
Abstract:
This talk will present some results obtained by Yufei Tao and his students in recent years. These results span several active fields in database research nowadays – machine learning, crowdsourcing, massively parallel computation, and graph processing – and provide definitive answers to a number of important problems by establishing matching upper and lower bounds. The talk will be theoretical in nature but will assume only undergraduate-level knowledge of computer science, and is therefore suitable for a general audience.
Biography:
Yufei Tao is a Professor in the Department of Computer Science and Engineering, the Chinese University of Hong Kong. He received two SIGMOD Best Paper Awards (2013 and 2015) and a PODS Best Paper Award (2018). He served as a PC co-chair of ICDE 2014 and the PC chair of PODS 2020, and gave an invited keynote speech at ICDT 2016. He was elected an ACM Fellow in 2020 for his contributions to algorithms on large-scale data. Yufei’s research aims to develop “small-and-sweet” algorithms: (i) small: easy to implement for deployment in practice, and (ii) sweet: having non-trivial theoretical guarantees. He particularly enjoys working on problems that arise at the intersection of databases, machine learning, and theoretical computer science.
Enquiries: Miss Karen Chan at Tel. 3943 8439
17 September
9:30 am - 10:30 am
Generation, Reasoning and Rewriting in Natural Dialogue System
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2021/2022
Speaker:
Prof. WANG Liwei
Abstract:
Natural dialogue systems, including recent eye-catching multimodal (vision + language) dialogue systems, need a better understanding of utterances to generate reliable and meaningful language. In this talk, I will introduce several research works that my LaVi Lab (multimodal Language and Vision Lab) has done together with our collaborators in this area. In particular, I will discuss the essential components of natural dialogue systems, including controllable language generation, language reasoning, and utterance rewriting, published in recent top NLP and AI conferences.
Biography:
Prof. WANG Liwei received his Ph.D. from the Computer Science Department at the University of Illinois at Urbana-Champaign (UIUC) in 2018. After that, he joined the NLP group of Tencent AI Lab in Bellevue, US as a senior researcher, leading multiple projects in multimodal (language and vision) learning and NLP. In December 2020, Dr. Wang joined the Computer Science and Engineering Department at CUHK as an assistant professor. Meanwhile, he serves on the Editorial Board of IJCV and on the program committees of top NLP conferences. Recently, his team won the 2020 BAAI-JD Multimodal Dialogue Challenge and the Referit3D CVPR 2021 challenge. The research goal of Prof. Wang’s LaVi Lab is to build multimodal interactive AI systems that can not only understand and recreate the visual world but also communicate like human beings using natural language.
Enquiries: Miss Karen Chan at Tel. 3943 8439
July 2021
23 July
3:00 pm - 4:00 pm
Towards SmartNICs in Data Center Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Bojie Li
Senior Engineer
Huawei 2012 Labs
Abstract:
In modern data centers, the performance of general-purpose processors lags behind network, storage, and customized computing hardware. However, network and storage infrastructure mainly relies on software processing on general-purpose processors, which becomes a bottleneck. We leverage SmartNICs to accelerate network functions, data structures, and communication primitives in cloud data centers, thus achieving full-stack acceleration of network and storage. In this talk, we will also propose a new SmartNIC architecture that is tightly integrated with the host CPU, enabling a large, disaggregated memory with the SmartNICs acting as a programmable data plane.
Biography:
Dr. Bojie Li is a Senior Engineer with Huawei 2012 Labs. In 2019, he obtained his Ph.D. in Computer Science from the University of Science and Technology of China (USTC) and Microsoft Research Asia (MSRA). His research interests are data center networks and systems. He has published papers in SIGCOMM, SOSP, NSDI, ATC, and PLDI. He has received the ACM China Doctoral Dissertation Award and the Microsoft Research Asia PhD Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92830629838
Enquiries: Miss Karen Chan at Tel. 3943 8439
20 July
11:00 am - 12:00 pm
Structurally Stable Assemblies: Theory, Algorithms, and Applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. SONG Peng
Assistant Professor
Pillar of Information Systems Technology and Design
Singapore University of Technology and Design
Abstract:
An assembly of rigid parts is structurally stable if it can preserve its form under external forces without collapse. Structural stability is a necessary condition for using assemblies in practical settings such as furniture and architecture. However, designing structurally stable assemblies remains a challenging task for both general and expert users, since slight variations in the geometry of an individual part may affect the whole assembly’s structural stability. In this talk, I will introduce our attempts over the past years to advance the theory and algorithms for computational design and fabrication of structurally stable assemblies. The key technique is to analyze structural stability in the kinematic space by utilizing static-kinematic duality, and to ensure structural stability through geometry optimization using a two-stage approach (i.e., kinematic design and geometry realization). Our technique can handle assemblies that are structurally stable to different degrees, namely stable under a single external force, a set of external forces, or arbitrary external forces. The usefulness of these structurally stable assemblies has been demonstrated in applications like personalized puzzles, interlocking furniture, and free-form discrete architecture.
Biography:
Peng Song is an Assistant Professor at the Pillar of Information Systems Technology and Design, Singapore University of Technology and Design (SUTD), where he directs the Computer Graphics Laboratory (CGL). Prior to joining SUTD in 2019, he was a research scientist at EPFL, Switzerland. He received his PhD from Nanyang Technological University, Singapore in 2013, and his master’s and bachelor’s degrees from Harbin Institute of Technology, China in 2010 and 2007, respectively. His research is in the area of computer graphics, with a focus on computational fabrication and geometry processing. He serves as a co-organizer of a weekly web series on computational fabrication, and as a program committee member of several leading conferences in computer graphics, including SIGGRAPH Asia and Pacific Graphics.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98242753532
Enquiries: Miss Karen Chan at Tel. 3943 8439
May 2021
12 May
2:00 pm - 3:00 pm
Towards Trustworthy Full-Stack AI
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Fang Chengfang
Abstract:
Due to the lack of security considerations in the early development of AI algorithms, most AI systems are not robust against adversarial manipulation.
In critical applications such as healthcare, autonomous driving, and malware detection, security risks can be devastating, and they have thus attracted numerous research efforts.
In this seminar, I will introduce some AI security and privacy research topics from an industry point of view, including risk analysis throughout the AI lifecycle and the pipeline of defense, in the hope of giving the audience a more complete picture on top of academic research.
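For a concrete flavor of the adversarial manipulation mentioned above, here is a standard fast gradient sign method (FGSM) sketch (my illustration, unrelated to the speaker's pipeline) that perturbs an input against a toy classifier:

    import torch

    model = torch.nn.Linear(4, 2)                 # stand-in classifier
    x = torch.randn(1, 4, requires_grad=True)     # clean input
    y = torch.tensor([0])                         # its true label

    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()                               # gradient of loss w.r.t. x
    eps = 0.1
    x_adv = x + eps * x.grad.sign()               # FGSM perturbation step
    print(model(x_adv).argmax(dim=1))             # possibly flipped prediction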
Biography:
Chengfang Fang obtained his Ph.D. degree from the National University of Singapore before joining Huawei in 2013. He has been working on security and privacy protection for more than 10 years, in several areas including machine learning, the Internet of Things, mobile devices, and biometrics. He has published over 20 research papers and obtained 15 patents in this domain. He is currently a principal researcher at the Trustworthiness Technology Lab in Huawei’s Singapore Research Center.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92800336791
Enquiries: Miss Karen Chan at Tel. 3943 8439
10 May
2:00 pm - 3:00 pm
High Performance Fluid Simulation and its Applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Xiaopei Liu
Assistant Professor
School of Information Science and Technology
ShanghaiTech University
Abstract:
Efficient and accurate high-resolution fluid simulation in complex environments is desirable in many practical applications, e.g., the aerodynamic shape design of airplanes and cars, as well as the production of special effects in movies and games. However, this has been a challenging problem for a very long time, and it is not yet well solved. In this talk, I will introduce our attempts over the past years to advance computational techniques for high-performance fluid simulation by developing statistical kinetic models with variational principles, in a single-phase flow scenario where strong turbulence and complex geometric objects exist. I will also introduce how the general idea can be extended to multiphase flow simulations in order to allow both large density ratios and high Reynolds numbers. To improve computational efficiency, I will further introduce our GPU optimization and machine learning techniques that are designed as both low-level and high-level accelerations. Rendering and visualization of fluid flow data will also be briefly covered. Finally, validations in real scenarios and demonstrations of results in different applications, such as aerodynamic simulations over aircraft, cars, and architecture for shape design, blood flow simulations inside coronary arteries for clinical diagnosis, and simulations of visual flow phenomena for movies and games, will all be shown in this talk, together with a new application that learns the control policy of a fish-like underwater robot with our fast simulator.
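For readers unfamiliar with kinetic flow solvers, the sketch below implements a generic lattice-Boltzmann BGK step on a D2Q9 lattice (a textbook scheme, not the speaker's model) to show the collide-and-stream structure such methods share:

    import numpy as np

    # D2Q9 lattice: 9 discrete velocities e_i with weights w_i.
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])

    def equilibrium(rho, u):
        eu = np.einsum('id,xyd->ixy', e, u)          # e_i . u at each cell
        uu = np.einsum('xyd,xyd->xy', u, u)          # |u|^2
        return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*uu)

    nx = ny = 32
    rho = np.ones((nx, ny))
    u = np.zeros((nx, ny, 2))
    u[:, :, 0] = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)  # shear flow
    f = equilibrium(rho, u)
    tau = 0.6                                        # relaxation time
    for _ in range(100):
        rho = f.sum(axis=0)
        u = np.einsum('ixy,id->xyd', f, e) / rho[..., None]
        f += (equilibrium(rho, u) - f) / tau         # BGK collision
        for i in range(9):                           # streaming (periodic)
            f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    print("mass conserved:", np.isclose(f.sum(), nx * ny))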
Biography:
Dr. Xiaopei Liu is an assistant professor at the School of Information Science and Technology, ShanghaiTech University, affiliated with the Visual and Data Intelligence (VDI) Center. He obtained his PhD degree in computer science and engineering from The Chinese University of Hong Kong (CUHK), and worked as a postdoctoral research fellow at Nanyang Technological University (NTU) in Singapore, where he started multi-disciplinary research on fluid simulation and visualization, covering both classical and quantum fluids. Most of his publications are in top journals and conferences spanning multiple disciplines, such as ACM TOG, ACM SIGGRAPH/SIGGRAPH Asia, IEEE TVCG, APS PRD, and AIP POF. Dr. Liu is now working on high-performance fluid simulation in complex environments, with applications to visual effects, computational design & fabrication, medical diagnosis, robot learning, as well as fundamental science. He is also conducting research on simulation-based UAV design optimization & autonomous navigation.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93649176456
Enquiries: Miss Karen Chan at Tel. 3943 8439
06 May
10:30 am - 11:30 am
Dynamic Voltage Scaling: from Low Power to Security
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Qu Gang
Abstract:
Dynamic voltage scaling (DVS) is one of the most effective and widely used techniques for low-power design. It adjusts the system’s operating voltage and clock frequency based on a real-time application’s computation and deadline information in order to reduce power and energy consumption. In this talk, I will share our research results on DVS and the lessons I have learned in three different periods of my research career. First, in the late 1990s, as a graduate student, we formulated the problem of DVS for energy minimization and derived a series of optimal solutions under different system settings to guide the practice of DVS-enabled system design. Then in 2000, I became an assistant professor and we studied how to apply DVS to scenarios where the traditional execution-time-for-energy tradeoff does not exist. Finally, in the past five years, we developed DVS-based attacks to break the trusted execution environment in modern computing platforms. I will also show our work on enhancing system security by DVS through examples of device authentication and countermeasures to machine learning model inversion attacks. It is my hope that this talk can shed light on how to find a research topic and make your contributions.
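The energy argument behind DVS can be made concrete with the textbook dynamic-power model P ≈ aCV²f; the numbers below are illustrative assumptions, not measurements from the talk:

    def energy(cycles, C=1e-9, a=0.5, V=1.0, f=1e9):
        t = cycles / f                      # execution time at frequency f
        P = a * C * V**2 * f                # dynamic power
        return P * t                        # energy = a*C*V^2*cycles

    full = energy(1e9, V=1.0, f=1e9)
    scaled = energy(1e9, V=0.7, f=0.7e9)    # slower run still meets a relaxed deadline
    print(f"energy saved: {100 * (1 - scaled / full):.0f}%")  # about 51%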
Biography:
Gang Qu received his B.S. in mathematics from the University of Science and Technology of China (USTC) and his Ph.D. in computer science from the University of California, Los Angeles (UCLA). He is currently a professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park, where he leads the Maryland Embedded Systems and Hardware Security Lab (MeshSec) and the Wireless Sensor Laboratory. His research activities are on trusted integrated circuit design, hardware security, energy-efficient system design, and wireless sensor networks. He has focused recently on applications in the Internet of Things, cyber-physical systems, and machine learning. He has published more than 250 conference papers and journal articles on these topics with several best paper awards. Dr. Qu is an enthusiastic teacher. He has taught and co-taught various security courses, including a popular MOOC on Hardware Security through Coursera. Dr. Qu has served 17 times as the general or program chair/co-chair for international conferences and workshops. He is currently on the editorial boards of IEEE TCAD, TETC, ACM TODAES, JCST, Integration, and HSS. Dr. Qu is a Fellow of the IEEE.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96878058667
Enquiries: Miss Karen Chan at Tel. 3943 8439
March 2021
15 March
9:45 am - 10:45 am
Prioritizing Computation and Analyst Resources in Large-scale Data Analytics
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Ms. Kexin RONG
PhD student, Department of Computer Science
Stanford University
Abstract:
Data volumes are growing exponentially, fueled by an increased number of automated processes such as sensors and devices. Meanwhile, the computational power available for processing this data – as well as analysts’ ability to interpret it – remains limited. As a result, database systems must evolve to address these new bottlenecks in analytics. In my work, I ask: how can we adapt classic ideas from database query processing to modern compute- and analyst-limited data analytics?
In this talk, I will discuss the potential for this kind of systems development through the lens of several practical systems I have developed. By drawing insights from database query optimization, such as pushing workload- and domain-specific filtering, aggregation, and sampling into core analytics workflows, we can dramatically improve the efficiency of analytics at scale. I will illustrate these ideas by focusing on two systems — one designed to optimize visualizations for streaming infrastructure and application telemetry and one designed for high-volume seismic waveform analysis — both of which have been field-tested at scale. I will also discuss lessons from production deployments at companies including Datadog, Microsoft, Google and Facebook.
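As a toy instance of pushing sampling into the ingest path (my illustration, not one of the speaker's systems), classic reservoir sampling keeps a uniform k-item sample of an unbounded stream in a single pass:

    import random

    def reservoir(stream, k, seed=0):
        rnd = random.Random(seed)
        sample = []
        for i, item in enumerate(stream):
            if i < k:
                sample.append(item)          # fill the reservoir first
            else:
                j = rnd.randint(0, i)        # keep item with probability k/(i+1)
                if j < k:
                    sample[j] = item
        return sample

    print(reservoir(range(10**6), k=5))      # 5 uniformly sampled elements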
Biography:
Kexin Rong is a Ph.D. student in Computer Science at Stanford University, co-advised by Professor Peter Bailis and Professor Philip Levis. She designs and builds systems to enable data analytics at scale, supporting applications including scientific analysis, infrastructure monitoring, and analytical queries on big data clusters. Prior to Stanford, she received her bachelor’s degree in Computer Science from California Institute of Technology.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97794511231?pwd=Qjg2RlArcUNrbHBwUmxNSW4yTVIxZz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
12 March
9:45 am - 10:45 am
Toward a Deeper Understanding of Generative Adversarial Networks
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Farzan FARNIA
Postdoctoral Research Associate
Laboratory for Information and Decision Systems, MIT
Abstract:
While modern adversarial learning frameworks achieve state-of-the-art performance on benchmark image, sound, and text datasets, we still lack a solid understanding of their robustness, generalization, and convergence behavior. In this talk, we aim to bridge this gap between theory and practice using a principled analysis of these frameworks through the lens of optimal transport and information theory. We specifically focus on the Generative Adversarial Network (GAN) framework which represents a game between two machine players for learning the distribution of data. In the first half of the talk, we study equilibrium in GAN games for which we show the classical Nash equilibrium may not exist. We then introduce a new equilibrium notion for GAN problems, called proximal equilibrium, through which we develop a GAN training algorithm with improved stability. We provide several numerical results on large-scale datasets supporting our proposed training method for GANs. In the second half of the talk, we attempt to understand why GANs often fail in learning multi-modal distributions. We focus our study on the benchmark Gaussian mixture models and demonstrate the failures of standard GAN architectures under this simple class of multi-modal distributions. Leveraging optimal transport theory, we design a novel architecture for the GAN players which is tailored to mixtures of Gaussians. We theoretically and numerically show the significant gain achieved by our designed GAN architecture in learning multi-modal distributions. We conclude the talk by discussing some open research challenges in adversarial learning.
Biography:
Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Professor Asu Ozdaglar and Professor Ali Jadbabaie. Prior to joining MIT, Farzan received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests include statistical learning theory, optimal transport theory, information theory, and convex optimization. He has been the recipient of the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99476583146?pwd=QVdsaTJLYU1ab2c0ODV0WmN6SzN2Zz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
11 March
9:00 am - 10:00 am
Sensitive Data Analytics with Local Differential Privacy
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Mr. Tianhao WANG
PhD student, Department of Computer Science
Purdue University
Abstract:
When collecting sensitive information, local differential privacy (LDP) can relieve users’ privacy concerns, as it allows users to add noise to their private information before sending data to the server. LDP has been adopted by big companies such as Google and Apple for data collection and analytics. My research focuses on improving the ecosystem of LDP. In this talk, I will first share my research on the fundamental tools in LDP, namely the frequency oracles (FOs), which estimate the frequency of each private value held by users. We proposed a framework that unifies different FOs and optimizes them. Our optimized FOs improve the estimation accuracy of Google’s and Apple’s implementations by 50% and 90%, respectively, and serve as the state-of-the-art tools for handling more advanced tasks. In the second part of my talk, I will present our work on extending the functionality of LDP, namely, how to make a database system that satisfies LDP while still supporting a variety of analytical queries.
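The simplest frequency oracle, k-ary randomized response, illustrates the idea (a textbook sketch; the optimized FOs discussed in the talk are more sophisticated): each user keeps their true value with probability e^eps / (e^eps + k - 1), and the server debiases the noisy counts:

    import collections
    import math
    import random

    def perturb(v, k, eps, rnd):
        p = math.exp(eps) / (math.exp(eps) + k - 1)   # keep true value w.p. p
        if rnd.random() < p:
            return v
        return rnd.choice([x for x in range(k) if x != v])

    def estimate(reports, k, eps):
        n = len(reports)
        p = math.exp(eps) / (math.exp(eps) + k - 1)
        q = 1.0 / (math.exp(eps) + k - 1)             # prob of any other value
        counts = collections.Counter(reports)
        return {v: (counts[v] - n * q) / (p - q) for v in range(k)}

    rnd = random.Random(0)
    true_values = [0] * 7000 + [1] * 2000 + [2] * 1000
    reports = [perturb(v, k=3, eps=1.0, rnd=rnd) for v in true_values]
    print(estimate(reports, k=3, eps=1.0))            # roughly 7000/2000/1000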
Biography:
Tianhao Wang is a Ph.D. candidate in the Department of Computer Science, Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from the Software School, Fudan University in 2015. His research area is security and privacy, with a focus on differential privacy and applied cryptography. He is a member of DPSyn, which won several international differential privacy competitions. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94878534262?pwd=Z2pjcDUvQVlETzNoVWpQZHBQQktWUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
11 March
3:15 pm - 4:15 pm
Toward Reliable NLP Systems via Software Testing
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Pinjia HE
Postdoctoral researcher, Computer Science Department
ETH Zurich
Abstract:
NLP systems such as machine translation have been increasingly utilized in our daily lives, so their reliability becomes critical; mistranslations by Google Translate, for example, can lead to misunderstanding, financial loss, threats to personal safety and health, etc. On the other hand, due to their complexity, such systems are difficult to get right. Because of their nature (i.e., being based on large, complex neural networks), traditional reliability techniques are challenging to apply. In this talk, I will present my recent work that has spearheaded the testing of machine translation systems, toward building reliable NLP systems. In particular, I will describe three complementary approaches which collectively found 1,000+ diverse translation errors in the widely used Google Translate and Bing Microsoft Translator. I will also describe my work on LogPAI, an end-to-end log management framework powered by AI algorithms for traditional software reliability, and conclude the talk with my vision for making both traditional and intelligent software, such as NLP systems, more reliable.
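As a flavor of such testing, the sketch below runs a metamorphic-style consistency check; `translate` is a hypothetical stub standing in for the machine translation system under test, not a real API:

    def translate(sentence: str) -> str:
        # hypothetical placeholder for a call to an MT system under test
        return sentence.lower()

    def similarity(a: str, b: str) -> float:
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / max(len(ta | tb), 1)    # Jaccard over tokens

    def metamorphic_check(sent: str, variant: str, threshold: float = 0.5):
        out1, out2 = translate(sent), translate(variant)
        sim = similarity(out1, out2)                  # similar inputs should
        verdict = "consistent" if sim >= threshold else "suspicious"
        print(verdict, round(sim, 2))                 # ...yield similar outputs

    metamorphic_check("He works in a hospital", "She works in a hospital")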
Biography:
Pinjia HE has been a postdoctoral researcher in the Computer Science Department at ETH Zurich since receiving his PhD degree from The Chinese University of Hong Kong (CUHK) in 2018. He has research expertise in software engineering and artificial intelligence, and is particularly passionate about making both traditional and intelligent software reliable. His research on automated log analysis and machine translation testing has appeared in top computer science venues, such as ICSE, ESEC/FSE, ASE, and TDSC. The LogPAI project led by him has been starred 2,000+ times on GitHub, downloaded 30,000+ times by 380+ organizations, and won a Most Influential Paper (MIP) award at ISSRE. He also won a 2016 Excellent Teaching Assistantship at CUHK. He has served on the program committees of MET’21, DSML’21, ECOOP’20 Artifact, and ASE’19 Demo, and reviewed for top journals and conferences (e.g., TSE, TOSEM, ICSE, KDD, and IJCAI). According to Google Scholar, he has an h-index of 14 and 1,200+ citations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98498351623?pwd=UHFFUU1QbExYTXAxTWxCMk9BbW9mUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
03 March
2:00 pm - 3:00 pm
Edge AI – A New Battlefield for Hardware Security Research
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. CHANG Chip Hong
Associate Professor
Nanyang Technological University (NTU) of Singapore
Abstract:
The flourishing of the Internet of Things (IoT) has rekindled on-premise computing, allowing data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers, and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at their core. This “model once, run optimized anywhere” paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attack and defense mechanisms. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are not robust or stealthy on multi-view targets. To generate a good adversarial example for misclassifying a real-world target under varying viewing angles, lighting, and distance, a decent number of pristine samples of the target object are required, and the feasible perturbations are substantial and visually perceptible. Edge AI also poses a difficult catch-up for existing adversarial example detectors, which are designed based on sophisticated offline analyses under the assumption that the deep learning model runs on a general-purpose 32-bit floating-point CPU or GPU cluster. This talk will first present a new glitch injection attack on edge DNN accelerators capable of misclassifying a target under varying viewpoints. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. The second part of this talk will present a new hardware-oriented approach for in-situ detection of adversarial inputs fed through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge. With negligibly small hardware overhead, the glitch injection circuit and the trained shallow binary tree detector can be easily implemented alongside a deep learning model on edge AI accelerator hardware.
Biography:
Prof. Chip Hong Chang is an Associate Professor at the Nanyang Technological University (NTU) of Singapore. He held concurrent appointments at NTU as Assistant Chair of Alumni of the School of EEE from 2008 to 2014, Deputy Director of the Center for High Performance Embedded Systems from 2000 to 2011, and Program Director of the Center for Integrated Circuits and Systems from 2003 to 2009. He has coedited five books, published 13 book chapters, more than 100 international journal papers (>70 in IEEE), and more than 180 refereed international conference papers (mostly in IEEE), and has delivered over 40 colloquia and invited seminars. His current research interests include hardware security and trustable computing, low-power and fault-tolerant computing, residue number systems, and application-specific digital signal processing algorithms and architectures. Dr. Chang currently serves as the Senior Area Editor of IEEE Transactions on Information Forensics and Security (TIFS), and Associate Editor of the IEEE Transactions on Circuits and Systems-I (TCAS-I) and IEEE Transactions on Very Large Scale Integration (TVLSI) Systems. He was the Associate Editor of the IEEE TIFS and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) from 2016 to 2019, IEEE Access from 2013 to 2019, IEEE TCAS-I from 2010 to 2013, Integration, the VLSI Journal from 2013 to 2015, the Springer Journal of Hardware and System Security from 2016 to 2020, and Microelectronics Journal from 2014 to 2020. He also guest edited eight journal special issues, including for IEEE TCAS-I, IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE TCAD, and IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS). He has held key appointments in the organizing and technical program committees of more than 60 international conferences (mostly IEEE), including the General Co-Chair of the 2018 IEEE Asia-Pacific Conference on Circuits and Systems and the inaugural Workshop Chair and Steering Committee of the ACM CCS satellite workshop on Attacks and Solutions in Hardware Security. He is the 2018-2019 IEEE CASS Distinguished Lecturer, a Fellow of the IEEE and the IET.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93797957554?pwd=N2J0VjBmUFh6N0ZENVY0U1RvS0Zhdz09
Meeting ID: 937 9795 7554
Password: 607354
Enquiries: Miss Caroline TAI at Tel. 3943 8440
February 2021
02 February
2:00 pm - 3:00 pm
Design Exploration of DNN Accelerators using FPGA and Emerging Memory
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Guangyu SUN
Associate Professor
Center for Energy-efficient Computing and Applications (CECA)
Peking University
Abstract:
Deep neural networks (DNNs) have been successfully used in fields such as computer vision and natural language processing. In order to improve processing efficiency, various hardware accelerators have been proposed for DNN applications. In this talk, I will first review our works on design space exploration and design automation for DNN accelerators on FPGA platforms. Then, I will briefly introduce the potential and challenges of using emerging memory for energy-efficient DNN inference. After that, I will offer some advice for graduate study.
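As a toy illustration of design space exploration (my sketch, not the speaker's framework), the loop below enumerates tiling factors for a matrix-multiply accelerator and keeps the fastest design that fits assumed PE and buffer budgets:

    def explore(M=64, N=64, K=64, pe_budget=256, buffer_budget=8192):
        best = None
        for tm in range(1, M + 1):
            for tn in range(1, N + 1):
                if tm * tn > pe_budget:                       # exceeds PE array
                    continue
                if tm * K + tn * K + tm * tn > buffer_budget: # tiles must fit on chip
                    continue
                # one MAC per PE per cycle is assumed in this latency model
                cycles = -(-M // tm) * -(-N // tn) * K        # ceil-div product
                if best is None or cycles < best[0]:
                    best = (cycles, tm, tn)
        return best

    print(explore())   # (estimated cycles, tile_m, tile_n)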
Biography:
Dr. Guangyu Sun is an associate professor at the Center for Energy-efficient Computing and Applications (CECA) at Peking University. He received his B.S. and M.S. degrees from Tsinghua University, Beijing, in 2003 and 2006, respectively, and his Ph.D. degree in Computer Science from the Pennsylvania State University in 2011. His research interests include computer architecture, acceleration systems, and design automation for modern applications. He has published 100+ journal and refereed conference papers in these areas. He is an associate editor of ACM TECS and ACM JETC.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95836460304?pwd=UkRwSldjNWdUWlNvNnN2TTlRZ1ZUdz09
Meeting ID: 958 3646 0304
Password: 964279
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
January 2021
29 January
2:00 pm - 3:00 pm
In-Memory Computing – An Algorithm–Architecture Co-design Approach towards the POS/W Era
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LI Jiang
Associate Professor
Department of computer science and engineering
Shanghai Jiao Tong University
Abstract:
The rapidly rising computing power over the past decade has supported the advance of artificial intelligence. Still, in the post-Moore era, AI chips built on traditional CMOS processes and von Neumann architectures face huge bottlenecks at the memory wall and the energy-efficiency wall. In-memory computing architectures based on emerging memristor technology have become a very competitive computing paradigm, delivering two orders of magnitude higher energy efficiency. The memristor process has apparent advantages in power consumption, multi-bit storage, and cost. However, it faces the challenges of low manufacturing scalability and process variation, which lead to unstable computation and a limited capability to accommodate large and complex neural networks. This talk will introduce an algorithm and architecture co-optimization approach to solve the above challenges.
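The core in-memory primitive can be shown in a few lines (my sketch): a crossbar computes the currents i = Gv in a single step, while device variation perturbs G, which is exactly the instability the talk targets. Real designs map signed weights onto pairs of crossbars; the single signed matrix below is a simplification.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))     # weights mapped to conductances (simplified)
    v = rng.standard_normal(4)          # input activations applied as voltages

    sigma = 0.05                        # assumed lognormal device variation
    G = W * rng.lognormal(mean=0.0, sigma=sigma, size=W.shape)

    print("ideal :", W @ v)             # intended matrix-vector product
    print("actual:", G @ v)             # perturbed by process variation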
Biography:
Li Jiang is an associate professor in the Dept. of CSE, Shanghai Jiao Tong University. He received the B.S. degree from the Dept. of CS&E, Shanghai Jiao Tong University in 2007, and the MPhil and Ph.D. degrees from the Dept. of CS&E, the Chinese University of Hong Kong in 2010 and 2013, respectively. He has published more than 50 peer-reviewed papers in top-tier computer architecture and EDA conferences and journals, including a best paper nomination at ICCAD. According to the IEEE Digital Library, five of his papers ranked in the top 5% of citations of all papers published at their respective conferences. These achievements have been highly recognized and cited by academic and industry experts, including Academician Zheng Nanning, Academician William Dally, Prof. Chenming Hu, and many ACM/IEEE fellows. Some of the achievements have been introduced into the IEEE P1838 standard, and a number of technologies have been put into commercial use in cooperation with TSMC, Huawei, and Alibaba. He received the Best Ph.D. Dissertation Award at ATS 2014 and was shortlisted for TTTC’s E. J. McCluskey Doctoral Thesis Award. He received the ACM Shanghai Rising Star Award and the CCF VLSI Early Career Award, and was named a 2020 CCF Distinguished Speaker. He serves as co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC, CTC, etc. He is an associate editor of IET Computers & Digital Techniques and Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95897084094?pwd=blZlanFOczF4aWFvM2RuTDVKWFlZZz09
Meeting ID: 958 9708 4094
Password: 081783
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
December 2020
14 December
2:00 pm - 3:00 pm
Speed up DNN Model Training: An Industrial Perspective
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Mr. Mike Hong
CTO of BirenTech
Abstract:
Training large DNN models is compute-intensive, often taking days, weeks or even months to complete. Therefore, how to speed it up has attracted lots of attention from both academia and industry. In this talk, we shall cover a number of accelerated DNN training techniques from an industrial perspective, including various optimizers, large batch training, distributed computation and all-reduce network topology.
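One of the distributed techniques mentioned, ring all-reduce, can be simulated in a few lines (my sketch with illustrative sizes): after a reduce-scatter phase and an all-gather phase, each taking N-1 steps, every worker holds the summed gradient while each step moves only 1/N of the data:

    import numpy as np

    def ring_allreduce(grads):
        n = len(grads)
        chunks = [np.array_split(g.astype(float), n) for g in grads]
        for step in range(n - 1):                 # reduce-scatter phase
            for r in range(n):
                c = (r - step - 1) % n            # chunk worker r sends downstream
                chunks[(r + 1) % n][c] += chunks[r][c]
        for step in range(n - 1):                 # all-gather phase
            for r in range(n):
                c = (r - step) % n                # chunk already complete at r
                chunks[(r + 1) % n][c] = chunks[r][c]
        return [np.concatenate(c) for c in chunks]

    grads = [np.full(8, i + 1) for i in range(4)]  # worker i holds all (i+1)s
    print(ring_allreduce(grads)[0])                # every entry sums to 10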
Biography:
Mike Hong has been working on GPU architecture design for 26 years and is currently serving as the CTO of BirenTech, an intelligent chip design company that has attracted more than US$200 million in Series A financing since it was founded in 2019. Before joining Biren, Mike was the Chief Architect at S3, a Principal Architect for the Tesla architecture at NVIDIA, and GPU team leader and Chief Architect at HiSilicon. Mike holds more than 50 US patents, including the texture compression patent that is the industry standard for all PCs, Macs, and game consoles.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92074008389?pwd=OE1EbjBzWk9oejh5eUlZQ1FEc0lOUT09
Meeting ID: 920 7400 8389
Password: 782536
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
03 December
11:00 am - 12:00 pm
Artificial Intelligence for Radiotherapy in the Era of Precision Medicine
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. CAI Jing
Professor of Department of Health Technology and Informatics
The Hong Kong Polytechnic University (PolyU)
Abstract:
Artificial intelligence (AI) is evolving rapidly and promises to transform the world in an unprecedented way. The tremendous possibilities that AI can bring to radiation oncology have triggered a flood of activities in the field. In particular, with the support of big data and accelerated computation, deep learning is taking off with tremendous algorithmic innovations and powerful neural network models. AI technology holds great promise for improving radiation therapy, from treatment planning to treatment assessment. It can aid radiation oncologists in reaching unbiased consensus treatment plans, help train junior radiation oncologists, update practitioners, reduce professional costs, and improve quality assurance in clinical trials and patient care. It can significantly reduce the time and effort physicians need to contour, plan, and review. Given the promising learning tools and massive computational resources that are becoming readily available, AI will soon dramatically change the landscape of radiation oncology research and practice. This presentation will give an overview of the recent advances in AI for radiation oncology, followed by a set of examples of AI applications in various aspects of radiation therapy, including but not limited to organ segmentation, target volume delineation, treatment planning, quality assurance, response assessment, and outcome prediction. For example, I will present a new approach to deriving lung functional images for function-guided radiation therapy, using a deep convolutional neural network to learn and exploit the underlying functional information in the CT image and generate a functional perfusion image. I will demonstrate a novel method for pseudo-CT generation from multi-parametric MR images using a multi-channel multi-path generative adversarial network (MCMP-GAN) for MRI-based radiotherapy. I will also show the promising capability of MRI-based radiomics features for pre-treatment identification of adaptive radiation therapy eligibility in nasopharyngeal carcinoma (NPC) patients.
Biography:
Prof. CAI Jing earned his PhD in Engineering Physics in 2006 and completed his clinical residency in Medical Physics in 2009 at the University of Virginia, USA. He entered the ranks of academia as an Assistant Professor at Duke University in 2009, and was promoted to Associate Professor in 2014. He joined The Hong Kong Polytechnic University in 2017, and is currently a full Professor and the founding Programme Leader of the Medical Physics MSc Programme in the Department of Health Technology and Informatics. He has been board certified in Therapeutic Radiological Physics by the American Board of Radiology (ABR) since 2010. He is the PI/Co-PI of more than 20 external research grants, including 5 NIH, 3 GRF, 3 HMRF and 1 ITSP grants, with total funding of more than HK$40M. He has published over 100 journal papers and 200 conference papers/abstracts, and has mentored over 60 trainees as supervisor. He serves on the editorial boards of several prestigious journals in the fields of medical physics and radiation oncology. He was elected a Fellow of the American Association of Physicists in Medicine (AAPM) in 2018.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92068646609?pwd=R0ZRR1VXSmVQOUkyQnZrd0t4dW0wUT09
Meeting ID: 920-6864-6609
Password: 076760
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
October 2020
30 October
2:00 pm - 3:00 pm
Closing the Loop of Human and Robot
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LU Cewu
Research Professor at Shanghai Jiao Tong University (SJTU)
Abstract:
This talk is about closing the loop between human and robot. We present our recent research on human activity understanding and robot learning. On the human side, we present our recent work “Human Activity Knowledge Engine (HAKE)”, which largely improves human activity understanding; improvements to AlphaPose (a well-known pose estimator) are also introduced. On the robot side, we discuss our understanding of robot tasks and the new insight of a “primitive model”. GraspNet, the first dynamic grasping benchmark dataset, is proposed, and a novel end-to-end deep learning approach to grasping is also introduced. A 3D point-level semantic embedding method for object manipulation will be discussed. Finally, we will discuss how to further close the loop between human and robot.
Biography:
Cewu Lu is a Research Professor at Shanghai Jiao Tong University (SJTU). Before he joined SJTU, he was a research fellow at Stanford University working under Prof. Fei-Fei Li and Prof. Leonidas J. Guibas. He received his PhD degree from The Chinese University of Hong Kong, supervised by Prof. Jiaya Jia. He was selected for the Young 1000 Talent Plan. Prof. Lu was selected as one of MIT TR35 – “MIT Technology Review, 35 Innovators Under 35” (China) – and as a Qiushi Outstanding Young Scholar (求是杰出青年学者), the only AI awardee in the past three years. Prof. Lu serves as an Area Chair for CVPR 2020 and as a reviewer for Nature. He has published about 100 papers in top AI journals and conferences, including 9 ESI highly cited papers. His research interests fall mainly in computer vision and robotics learning.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96062514495?pwd=aEp4aEl5UVhjOW1XemdpWVNZTVZOZz09
Meeting ID: 960-6251-4495
Password: 797809
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
22 October
3:00 pm - 4:00 pm
Detecting Vulnerabilities using Patch-Enhanced Vulnerability Signatures
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. HUO Wei
Professor, Institute of Information Engineering (IIE)
Chinese Academy of Sciences (CAS)
Abstract:
Recurring vulnerabilities widely exist and remain undetected in real-world systems; they often result from reused code bases or shared code logic. However, the potentially small differences between vulnerable functions and their patched versions, as well as the possibly large differences between vulnerable functions and the target functions to be detected, bring challenges to current solutions. I shall introduce a novel approach to detect recurring vulnerabilities with low false positives and low false negatives. The evaluation on ten open-source systems has shown that the proposed approach significantly outperforms state-of-the-art clone-based and function matching-based recurring vulnerability detection approaches, with 23 CVE identifiers assigned.
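At its simplest, recurring-vulnerability matching can be sketched as follows (my illustration, far cruder than the approach in the talk): flag a target function that looks more similar to a known vulnerable function than to its patched version:

    def ngrams(code, n=3):
        toks = code.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def jaccard(a, b):
        return len(a & b) / max(len(a | b), 1)

    def looks_vulnerable(target, vuln, patched, margin=0.0):
        sv = jaccard(ngrams(target), ngrams(vuln))      # similarity to vulnerable
        sp = jaccard(ngrams(target), ngrams(patched))   # similarity to patched
        return sv > sp + margin

    # Toy functions as token streams (illustrative, not real CVE code).
    vuln    = "memcpy ( dst , src , len ) ;"
    patched = "if ( len < MAX ) memcpy ( dst , src , len ) ;"
    target  = "memcpy ( out , in , len ) ;"
    print(looks_vulnerable(target, vuln, patched))      # True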
Biography:
Wei HUO is a full professor at the Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS). He focuses on software security, vulnerability detection, program analysis, etc. He leads the VARAS (Vulnerability Analysis and Risk Assessment System) group. He has published multiple papers at top venues in computer security and software engineering, including ASE, ICSE, and USENIX Security. Besides, his group has uncovered hundreds of 0-day vulnerabilities in popular software and firmware, with 100+ CVEs assigned.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97738806643?pwd=dTIzcWhUR2pRWjBWaG9tZkdkRS9vUT09
Meeting ID: 977-3880-6643
Password: 131738
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
15 October
9:30 am - 10:30 am
Computational Fabrication and Assembly: from Optimization and Search to Learning
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. FU Chi Wing Philip
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Computational fabrication is an emerging research topic in computer graphics, beginning roughly a decade ago with the need to develop computational solutions for efficient 3D printing and later for 3D fabrication and object assembly at large. In this talk, I will introduce a series of research works in this area, with a particular focus on the following two recent ones:
(i) Computational LEGO Technic assembly, in which we model the component bricks, their connection mechanisms, and the input user sketch for computation, and then further develop an optimization model with necessary constraints and our layout modification operator to efficiently search for an optimum LEGO Technic assembly. Our results not only match the input sketch with coherently-connected LEGO Technic bricks but also respect the intended symmetry and structural integrity of the designs.
(ii) TilinGNN, the first neural optimization approach to solve a classical instance of the tiling problem, in which we formulate and train a neural network model to maximize the tiling coverage on target shapes, while avoiding overlaps and holes between the tiles in a self-supervised manner. In short, we model the tiling problem as a discrete problem, in which the network is trained to predict the goodness of each candidate tile placement, allowing us to iteratively select tile placements and assemble a tiling
on the target shape.
In the end, I will try to present also some of the results from my other research works in the areas of point cloud processing, 3D vision, and augmented reality.
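To give a flavour of the iterative selection loop in (ii), here is a minimal illustrative sketch; every ingredient in it (the score, overlaps, and covered_enough callables) is a placeholder standing in for the trained network and the geometry tests, not TilinGNN itself:

```python
def assemble(candidates, score, overlaps, covered_enough):
    """Iteratively commit the highest-scoring tile placement and
    discard all placements that would overlap it."""
    chosen = []
    remaining = set(candidates)
    while remaining and not covered_enough(chosen):
        best = max(remaining, key=score)   # predicted "goodness" of a placement
        chosen.append(best)
        # drop every candidate that conflicts with the committed tile
        remaining = {c for c in remaining if not overlaps(c, best)}
    return chosen
```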
Biography:
Chi-Wing Fu is an associate professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong (CUHK). His research interests are in computer graphics, vision, and human-computer interaction, or more specifically in computational fabrication, 3D computer vision, and user interaction. Chi-Wing obtained his B.Sc. and M.Phil. from CUHK and his Ph.D. from Indiana University, Bloomington. Before re-joining CUHK in early 2016, he was an associate professor with tenure at the School of Computer Science and Engineering, Nanyang Technological University, Singapore.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99943410200
Meeting ID: 999 4341 0200
Password: 492333
Enquiries: Miss Caroline Tai at Tel. 3943 8440
14 October
2:00 pm - 3:00 pm
Bioinformatics: Turning experimental data into biomedical hypotheses, knowledge and applications
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. YIP Yuk Lap Kevin
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Contemporary biomedical research relies heavily on high-throughput technologies that examine many objects, their individual activities or their mutual interactions in a single experiment. The data produced are usually high-dimensional, noisy and biased. An important aim of bioinformatics is to extract useful information from such data for developing both conceptual understandings of the biomedical phenomena and downstream applications. This requires the integration of knowledge from multiple disciplines, such as data properties from biotechnology, molecular and cellular mechanisms from biology, evolutionary principles from genetics, and patient-, disease- and drug-related information from medicine. Only with these inputs can the data analysis goals be meaningfully formulated as computational problems and properly solved. Computational findings also need to be subsequently validated and functionally tested by additional experiments, possibly iterating back and forth between data production and data analysis many times before a conclusion can be drawn. In this seminar, I will use my own research to explain how bioinformatics can help create new biomedical hypotheses, knowledge and applications, with a focus on recent works that use machine learning methods to study basic molecular mechanisms and specific human diseases.
Biography:
Kevin Yip is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). He obtained his bachelor's degree in computer engineering and master's degree in computer science from The University of Hong Kong, and his PhD degree in computer science from Yale University. Before joining CUHK, he worked as a researcher at the HKU-Pasteur Institute, the Yale Center for Medical Informatics, and the Department of Molecular Biophysics and Biochemistry at Yale University. Since his master's studies, Dr. Yip has been conducting research in bioinformatics, with special interests in modeling gene regulatory mechanisms and studying how their perturbations are related to human diseases. Dr. Yip has participated in several international research consortia, including the Encyclopedia of DNA Elements (ENCODE), model organism ENCODE (modENCODE), and the International Human Epigenomics Consortium (IHEC). Locally, Dr. Yip has been collaborating with scientists and clinicians in the quest to understand the mechanisms that underlie different human diseases, such as hepatocellular carcinoma, nasopharyngeal carcinoma, type II diabetes, and Hirschsprung’s disease. Dr. Yip received the title of Outstanding Fellow from the Faculty of Engineering and the Young Researcher Award from CUHK in 2019.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98458448644
Meeting ID: 984 5844 8644
Password: 945709
Enquiries: Miss Caroline Tai at Tel. 3943 8440
14 October
3:30 pm - 4:30 pm
Dependable Storage Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. LEE Pak Ching Patrick
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Making large-scale storage systems dependable against failures is critical yet non-trivial in the face of the ever-increasing amount of data. In this talk, I will present my work on dependable storage systems, with the primary goal of improving the fault tolerance, recovery, security, and performance of different types of storage architectures. To make the case, I will present new theoretical and applied findings on erasure coding, a low-cost redundancy technique for fault-tolerant storage. I will present general techniques and code constructions for accelerating the repair of storage failures, and further propose a unified framework for readily deploying a variety of erasure coding solutions in state-of-the-art distributed storage systems. I will also introduce my other work on the dependability of applied distributed systems, in the areas of encrypted deduplication, key-value stores, network measurement, and stream processing. Finally, I will highlight the industrial impact of our work beyond publications.
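To fix intuition for why erasure coding is cheaper than replication, here is a minimal illustrative sketch (a toy, not one of the code constructions from the talk) of a (k=2, m=1) XOR code: two data blocks are protected by one parity block, so any single lost block can be rebuilt at 1.5x storage instead of the 2x-3x of replication.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(d0: bytes, d1: bytes) -> bytes:
    """Compute one parity block over two data blocks."""
    return xor_bytes(d0, d1)

def repair(survivor: bytes, parity: bytes) -> bytes:
    """Rebuild the lost data block from the survivor and the parity."""
    return xor_bytes(survivor, parity)

d0, d1 = b"hello world!", b"erasure code"
p = encode(d0, d1)
assert repair(d1, p) == d0   # d0 lost: recover it from d1 and the parity
assert repair(d0, p) == d1   # d1 lost: recover it from d0 and the parity
```

Repair, however, must read surviving blocks over the network, which is exactly the cost that the talk's repair-acceleration techniques target.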
Biography:
Patrick P. C. Lee is now an Associate Professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong. His research interests are in various applied/systems topics on improving the dependability of large-scale software systems, including storage systems, distributed systems and networks, and cloud computing. He now serves as an Associate Editor in IEEE/ACM Transactions on Networking and ACM Transactions on Storage. He served as a TPC co-chair of APSys 2020, and as a TPC member of several major systems and networking conferences. He received the best paper awards at CoNEXT 2008, TrustCom 2011, and SRDS 2020. For details, please refer to his personal homepage: http://www.cse.cuhk.edu.hk/~pclee.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96195753407
Meeting ID: 961 9575 3407
Password: 892391
Enquiries: Miss Caroline Tai at Tel. 3943 8440
13 October
2:00 pm - 3:00 pm
From Combating Errors to Embracing Errors in Computing Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Xu Qiang
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Faults are inevitable in any computing system, and they may occur due to environmental disturbance, circuit aging, or malicious attacks. On the one hand, designers try all means to prevent, contain, and control faults to achieve error-free computation, especially for safety-critical applications. On the other hand, many applications in the big data era (e.g., search engines and recommender systems) that require lots of computing power are often error-tolerant. In this talk, we present some techniques developed by our group over the past several years, including error-tolerant solutions that combat all sorts of hardware faults and approximate computing techniques that embrace errors in computing systems for energy savings.
Biography:
Qiang Xu is an associate professor of Computer Science & Engineering at the Chinese University of Hong Kong. He leads the CUhk REliable laboratory (CURE Lab.), and his research interests include electronic design automation, fault-tolerant computing and trusted computing. Dr. Xu has published 150+ papers in refereed journals and conference proceedings, and received two Best Paper Awards and five Best Paper Award Nominations. He is currently serving as an associate editor for IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and for Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96930968459
Meeting ID: 969 3096 8459
Password: 043377
Enquiries: Miss Caroline Tai at Tel. 3943 8440
12 October
9:30 am - 10:30 am
Memory/Storage Optimization for Small/Big Systems
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Zili SHAO
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Memory/storage optimization is one of the most critical issues in computer systems. In this talk, I will first summarize our work on optimizing memory/storage systems for embedded and big data applications. Then, I will present an approach that deeply integrates device and application to optimize flash-based key-value caching – one of the most important building blocks in modern web infrastructures and high-performance data-intensive applications. I will also introduce our recent work on optimizing unique address checking for IoT blockchains.
Biography:
Zili Shao is an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree from The University of Texas at Dallas in 2005. Before joining CUHK in 2018, he was with the Department of Computing, The Hong Kong Polytechnic University, which he joined in 2005. His current research interests include embedded software and systems, storage systems and related industrial applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95131164721
Meeting ID: 951 3116 4721
Password: 793297
Enquiries: Miss Caroline Tai at Tel. 3943 8440
12 October
11:00 am - 12:00 pm
VLSI Mask Optimization: From Shallow To Deep Learning
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. YU Bei
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
The continued scaling of integrated circuit technologies, along with the increased design complexity, has exacerbated the challenges associated with manufacturability and yield. In today’s semiconductor manufacturing, lithography plays a fundamental role in printing design patterns on silicon. However, the growing complexity and variation of the manufacturing process have tremendously increased the lithography modeling and simulation cost. Both the role and the cost of mask optimization – now indispensable in the design process – have increased. Parallel to these developments are the recent advancements in machine learning which have provided a far-reaching data-driven perspective for problem solving. In this talk, we shed light on the recent deep learning based approaches that have provided a new lens to examine traditional mask optimization challenges. We present hotspot detection techniques, leveraging advanced learning paradigms, which have demonstrated unprecedented efficiency. Moreover, we demonstrate the role deep learning can play in optical proximity correction (OPC) by presenting its successful application in our full-stack mask optimization framework.
Biography:
Bei Yu is currently an Assistant Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received the Ph.D. degree from Electrical and Computer Engineering, University of Texas at Austin, USA in 2014, and the M.S. degree in Computer Science from Tsinghua University, China in 2010. His current research interests include machine learning and combinatorial algorithms with applications in VLSI computer-aided design (CAD). He served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), has served on the program committees of DAC, ICCAD, DATE, ASPDAC and ISPD, and serves on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES), Integration, the VLSI Journal, and IET Cyber-Physical Systems: Theory & Applications. He is Editor of the IEEE TCCPS Newsletter.
Dr. Yu received six Best Paper Awards from International Conference on Tools with Artificial Intelligence (ICTAI) 2019, Integration, the VLSI Journal in 2018, International Symposium on Physical Design (ISPD) 2017, SPIE Advanced Lithography Conference 2016, International Conference on Computer-Aided Design (ICCAD) 2013, Asia and South Pacific Design Automation Conference (ASPDAC) 2012, four other Best Paper Award Nominations (ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011), six ICCAD/ISPD contest awards, IBM Ph.D. Scholarship in 2012, SPIE Education Scholarship in 2013, and EDAA Outstanding Dissertation Award in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96114730370
Meeting ID: 961 1473 0370
Password: 984602
Enquiries: Miss Caroline Tai at Tel. 3943 8440
09 October
4:00 pm - 5:00 pm
Local Versus Global Security in Computation
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Andrej BOGDANOV
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Secret sharing schemes are at the heart of cryptographic protocol design. In this talk I will present my recent discoveries about the informational and computational complexity of secret sharing and their relevance to secure multiparty computation:
- The share size in the seminal threshold secret sharing scheme of Shamir and Blakley from the 1970s is essentially optimal.
- Secret reconstruction can sometimes be carried out in the computational model of bounded-depth circuits, without resorting to modular linear algebra.
- Private circuits that are secure against local information leakage are also secure against limited but natural forms of global leakage.
I will also touch upon some loosely related results in cryptography, pseudorandomness, and coding theory.
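As background for the first result, here is a minimal textbook-style sketch (illustrative only, not one of the talk's constructions) of Shamir's t-out-of-n threshold scheme: the secret is the constant term of a random degree-(t-1) polynomial over a prime field, each party receives one evaluation point, and any t shares interpolate the secret back.

```python
import random

P = 2**61 - 1  # a prime; the secret and all shares live in GF(P)

def share(secret, t, n):
    """Hide `secret` as the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

Note that each share is as long as the secret itself; the first result above says that this blow-up is essentially unavoidable.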
Biography:
Andrej Bogdanov is associate professor of Computer Science and Engineering and director of the Institute of Theoretical Computer Science and Communications at the Chinese University of Hong Kong. His research interests are in cryptography, pseudorandomness, and sublinear-time algorithms.
Andrej obtained his B.S. and M. Eng. degrees from MIT in 2001 and his Ph.D. from UC Berkeley in 2005. Before joining CUHK in 2008 he was a postdoctoral associate at the Institute for Advanced Study in Princeton, at DIMACS (Rutgers University), and at ITCS (Tsinghua University). He was a visiting professor at the Tokyo Institute of Technology in 2013 and a long-term program participant at the UC Berkeley Simons Institute for the Theory of Computing in 2017.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94008322629
Meeting ID: 940 0832 2629
Password: 524278
Enquiries: Miss Caroline Tai at Tel. 3943 8440
08 October
3:00 pm - 4:00 pm
A Compiler Infrastructure for Embedded Multicore SoCs
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Dr. Sheng Weihua
Chief Expert
Software Tools and Engineering at Huawei
Abstract:
Compilers play a pivotal role in the software development process for microprocessors by automatically translating high-level programming languages into machine-specific executable code. For a long time, while processors were scalar, compilers were regarded as a black box by the software community, thanks to their successful encapsulation of machine-specific details. Over a decade ago, major computing processor manufacturers began to integrate multiple (simple) cores into a single chip, namely multicores, to retain scaling according to Moore’s law. The embedded computing industry followed suit, introducing multicores years later amid aggressive marketing campaigns that highlighted the number of processors for product differentiation in consumer electronics. While the transition from scalar (uni)processors to multicores is an evolutionary step in terms of hardware, it has given rise to fundamental changes in software development. The performance “free lunch”, having ridden on the growth of ever-faster processors, is over. Compiler technology has not developed and scaled for multicore architectures, which contributes considerably to the software crisis in the multicore age. This talk addresses the challenges of developing compilers for multicore SoC (System-on-Chip) software development, focusing on embedded systems such as wireless terminals and modems. It also traces a trajectory from research to commercial prototyping, shedding light on some lessons on how to do research effectively.
Biography:
Dr. Sheng has early career roots in the electronic design automation industry (CoWare and Synopsys). He spearheaded technology development on multicore programming tools at RWTH Aachen University from 2007 to 2013, which later became the foundation of Silexica. He has a proven record of successful consultation and collaboration with top-tier technology companies on multicore design tools. Dr. Sheng is a co-founder of Silexica Software Solutions GmbH in Germany, where he served as CTO during 2014-2016. From 2017, as VP and GM of APAC, he was responsible for all aspects of Silexica's sales and operations across the APAC region. In 2019 he joined Huawei Technologies. Dr. Sheng received his BEng from Tsinghua University and his MSc and PhD from RWTH Aachen University in Germany.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93855822245
Meeting ID: 938 5582 2245
Password: 429533
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
07 October
3:00 pm - 4:00 pm
Robust Deep Neural Network Design under Fault Injection Attack
Location
Zoom
Category
Seminar Series 2020/2021
Speaker:
Prof. Xu Qiang
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Deep neural networks (DNNs) have gained mainstream adoption in the past several years, and many artificial intelligence (AI) applications employ DNNs for safety- and security-critical tasks, e.g., biometric authentication and autonomous driving. In this talk, we first briefly discuss the security issues in deep learning. Then, we focus on fault injection attacks and introduce some of our recent works in this domain.
Biography:
Qiang Xu leads the CUhk REliable laboratory (CURE Lab.) and his research interests include fault-tolerant computing and trusted computing. He has published 150+ papers in these fields and received a number of best paper awards/nominations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93862944206
Meeting ID: 938 6294 4206
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
May 2020
15 May
9:30 am - 11:00 am
The Coming of Age of Microfluidic Biochips: Connecting Biochemistry to Electronic Design Automation
Location
Zoom
Category
Seminar Series 2019/2020
Speaker:
Prof. Tsung-yi HO
Professor
Department of Computer Science
National Tsing Hua University
Abstract:
Advances in microfluidic technologies have led to the emergence of biochip devices for automating laboratory procedures in biochemistry and molecular biology. These systems are revolutionizing a diverse range of applications, e.g., point-of-care clinical diagnostics, drug discovery, and DNA sequencing, with an increasing market. However, continued growth (and larger revenues resulting from technology adoption by pharmaceutical and healthcare companies) depends on advances in chip integration and design-automation tools. Thus, there is a need to deliver the same level of design automation support to the biochip designer that the semiconductor industry now takes for granted. In particular, efficient design automation algorithms are needed for implementing biochemistry protocols, to ensure that biochips are as versatile as the macro-labs that they are intended to replace. This talk will first describe technology platforms for accomplishing “biochemistry on a chip”, and introduce the audience to both droplet-based “digital” microfluidics based on electrowetting actuation and flow-based “continuous” microfluidics based on microvalve technology. Next, system-level synthesis (operation scheduling and resource binding algorithms), physical-level synthesis (placement and routing optimizations), and control synthesis with sensor feedback-based cyberphysical adaptation will be presented. In this way, the audience will see how a “biochip compiler” can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor’s clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Finally, recent advances in the open-source microfluidic ecosystem will be covered.
Biography:
Tsung-Yi Ho received his Ph.D. in Electrical Engineering from National Taiwan University in 2005. He is a Professor with the Department of Computer Science of National Tsing Hua University, Hsinchu, Taiwan. His research interests include several areas of computing and emerging technologies, especially in design automation of microfluidic biochips. He has been the recipient of the Invitational Fellowship of the Japan Society for the Promotion of Science (JSPS), the Humboldt Research Fellowship by the Alexander von Humboldt Foundation, the Hans Fischer Fellowship by the Institute of Advanced Study of the Technische Universität München, and the International Visiting Research Scholarship by the Peter Wall Institute of Advanced Study of the University of British Columbia. He was a recipient of the Best Paper Awards at the VLSI Test Symposium (VTS) in 2013 and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2015. He served as a Distinguished Visitor of the IEEE Computer Society for 2013-2015, a Distinguished Lecturer of the IEEE Circuits and Systems Society for 2016-2017, the Chair of the IEEE Computer Society Tainan Chapter for 2013-2015, and the Chair of the ACM SIGDA Taiwan Chapter for 2014-2015. Currently, he serves as the Program Director of both EDA and AI Research Programs of Ministry of Science and Technology in Taiwan, the VP Technical Activities of IEEE CEDA, an ACM Distinguished Speaker, and an Associate Editor of the ACM Journal on Emerging Technologies in Computing Systems, ACM Transactions on Design Automation of Electronic Systems, ACM Transactions on Embedded Computing Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and IEEE Transactions on Very Large Scale Integration Systems, a Guest Editor of IEEE Design & Test of Computers, and the Technical Program Committees of major conferences, including DAC, ICCAD, DATE, ASP-DAC, ISPD, ICCD, etc. He is a Distinguished Member of ACM.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94385618900
https://cuhk.zoom.com.cn/j/94385618900 (Mainland China)
Enquiries: Miss Caroline Tai at Tel. 3943 8440
13 May
2:30 pm - 4:00 pm
Towards Understanding Biomolecular Structure and Function with Deep Learning
Location
Zoom
Category
Seminar Series 2019/2020
Speaker:
Mr. Yu LI
PhD student
King Abdullah University of Science & Technology (KAUST)
Abstract:
Biomolecules, existing in high-order structural forms, are indispensable for the normal functioning of our bodies. To demystify critical biological processes, we need to investigate biomolecular structures and functions. In this talk, we showcase our efforts in this research direction using deep learning. First, we proposed a deep-learning-guarded Bayesian inference framework for reconstructing super-resolved structure images from super-resolution fluorescence microscopy data. This framework enables us to observe overall biomolecular structures in living cells with super-resolution in almost real time. Then, we zoom in on a particular biomolecule, RNA, and predict its secondary structure. For this problem, one of the oldest in bioinformatics, we proposed an unrolled deep learning method that brings a 20% performance improvement in terms of F1 score. Finally, by leveraging physiochemical features and deep learning, we proposed a first-of-its-kind framework to investigate the interaction between RNA and RNA-binding proteins (RBPs). This framework provides both interaction details and high-throughput binding prediction results. Extensive in vitro and in vivo biological experiments demonstrate the effectiveness of the proposed method.
Biography:
Yu Li is a PhD student at KAUST in Saudi Arabia, majoring in Computer Science, under the supervision of Prof. Xin Gao. He is a member of the Computational Bioscience Research Center (CBRC) at KAUST. His main research interest is developing novel machine learning methods, mainly deep learning methods, for solving computational problems in biology and understanding the principles behind the bio-world. He obtained his MS degree in CS from KAUST in 2016. Before that, he received his bachelor's degree in Biosciences from the University of Science and Technology of China (USTC).
Join Zoom Meeting:
https://cuhk.zoom.us/j/91295938758
https://cuhk.zoom.com.cn/j/91295938758 (Mainland China)
Enquiries: Miss Caroline Tai at Tel. 3943 8440
07 May
3:30 pm - 4:30 pm
High-Performance Data Analytics Frameworks
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2017/2018
Speaker:
Prof. James CHENG
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Distributed data analytics frameworks lie at the heart of modern computing infrastructures in many organizations. In this talk, I’ll introduce my work on large-scale data analytics frameworks, including systems designed for specialized workloads (e.g. graph analytics, machine learning, high dimensional similarity search) and those for general workloads. I will also show some applications of these systems and their impact.
Biography:
James Cheng obtained his B.Eng. and Ph.D. degrees from the Hong Kong University of Science and Technology. His research focuses on distributed computing frameworks, large-scale graph analytics, and distributed machine learning.
Enquiries: Ms. Crystal Tam at tel. 3943 8439
April 2020
23 April
9:00 am - 10:30 am
How To Preserve Privacy In Learning?
Location
Zoom
Category
Seminar Series 2019/2020
Speaker:
Mr. Di WANG
PhD student
State University of New York at Buffalo
Abstract:
Recent research has shown that most existing learning models are vulnerable to various privacy attacks. Thus, a major challenge facing the machine learning community is how to learn effectively from sensitive data. An effective way to address this problem is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, Differential Privacy (DP) has now become a standard for private data analysis. Despite its rapid development in theory, DP’s adoption by the machine learning community remains slow due to various challenges arising from the data, the privacy models and the learning tasks. In this talk, I will use the Empirical Risk Minimization (ERM) problem as an example and show how to overcome these challenges. Particularly, I will first talk about how to overcome the high-dimensionality challenge from the data for Sparse Linear Regression in the local DP (LDP) model. Then, I will discuss the challenge from the non-interactive LDP model and show a series of results that reduce the exponential sample complexity of ERM. Next, I will present techniques for achieving DP for ERM with non-convex loss functions. Finally, I will discuss some future research along these directions.
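As a concrete reference point, here is a minimal illustrative sketch of DP-ERM by output perturbation: solve a regularized problem non-privately, then add noise calibrated to the solution's sensitivity. The constants and data-norm bounds are deliberately elided, so this should not be read as one of the talk's algorithms.

```python
import numpy as np

def dp_ridge(X, y, lam=1.0, epsilon=1.0):
    """Ridge regression plus output perturbation (illustrative only)."""
    n, d = X.shape
    # non-private ERM solution of the strongly convex objective
    w = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    # for strongly convex, Lipschitz losses the L2 sensitivity of w
    # scales as O(1/(lam * n)); noise on that scale yields epsilon-DP
    # (exact constants omitted in this sketch)
    scale = 2.0 / (lam * n * epsilon)
    return w + np.random.laplace(scale=scale, size=d)
```

The challenges the talk addresses (local DP, non-interactive protocols, non-convex losses) arise precisely where this simple recipe stops applying.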
Biography:
Di Wang is currently a PhD student in the Department of Computer Science and Engineering at the State University of New York (SUNY) at Buffalo. Before that, he obtained his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. During his PhD studies, he has been invited as a visiting student to the University of California, Berkeley, Harvard University, and Boston University. His research areas include differentially private machine learning, adversarial machine learning, interpretable machine learning, robust estimation and optimization. He has received the SEAS Dean’s Graduate Achievement Award and the Best CSE Graduate Research Award from SUNY Buffalo.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98545048742
https://cuhk.zoom.com.cn/j/98545048742 (Mainland China)
Meeting ID: 985 4504 8742
Enquiries: Miss Caroline Tai at Tel. 3943 8440
16 April
9:00 am - 10:30 am
Transfer Learning for Language Understanding and Generation
Location
Zoom
Category
Seminar Series 2019/2020
Speaker:
Mr. Di JIN
PhD student
MIT
Abstract:
Deep learning models have become increasingly prevalent in various Natural Language Processing (NLP) tasks, and have even surpassed human-level performance in some of them. However, the performance of these models degrades significantly on low-resource data, in some cases even below that of conventional shallow models. In this work, we combat the curse of data inefficiency with the help of transfer learning, for both language understanding and generation tasks. First, I will introduce MMM, a Multi-stage Multi-task learning framework for the Multi-choice Question Answering (MCQA) task, which brings around 10% performance improvement on 5 low-resource MCQA datasets. Second, an iterative back-translation (IBT) schema is proposed to boost the performance of machine translation models on zero-shot domains (with no labeled data) by adapting from a source domain with large-scale labeled data.
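The IBT loop can be summarized in a few lines; the sketch below is illustrative, and every function name in it (train, translate) is a hypothetical placeholder rather than an API from the work itself.

```python
def iterative_back_translation(parallel, mono_tgt, train, translate, rounds=3):
    """parallel: list of (src, tgt) pairs from the labeled source domain;
    mono_tgt: unlabeled target-domain sentences (the zero-shot domain)."""
    fwd = train(parallel)                          # src -> tgt model
    bwd = train([(t, s) for s, t in parallel])     # tgt -> src model
    for _ in range(rounds):
        # back-translate monolingual target text into synthetic sources
        synthetic = [(translate(bwd, t), t) for t in mono_tgt]
        data = parallel + synthetic
        fwd = train(data)                          # retrain both directions
        bwd = train([(t, s) for s, t in data])
    return fwd
```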
Biography:
Di Jin is a fifth-year PhD student at MIT working with Prof. Peter Szolovits. He works on Natural Language Processing (NLP) and its applications in the biomedical and clinical domains. His previous work focused on sequential sentence classification, transfer learning for low-resource data, adversarial attacking and defense, and text editing/rewriting.
Join Zoom Meeting:
https://cuhk.zoom.us/j/834299320
https://cuhk.zoom.com.cn/j/834299320 (Mainland China)
Meeting ID: 834 299 320
Find your local number: https://cuhk.zoom.us/u/abeVNXWmN
Enquiries: Miss Caroline Tai at Tel. 3943 8440
November 2019
15 November
4:00 pm - 5:00 pm
Coupling Decentralized Key-Value Stores with Erasure Coding
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Patrick Lee Pak Ching
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Modern decentralized key-value stores often replicate and distribute data via consistent hashing for availability and scalability. Compared to replication, erasure coding is a promising redundancy approach that provides availability guarantees at much lower cost. However, when combined with consistent hashing, erasure coding incurs a lot of parity updates during scaling (i.e., adding or removing nodes) and cannot efficiently handle degraded reads caused by scaling. We propose a novel erasure coding model called FragEC, which incurs no parity updates during scaling. We further extend consistent hashing with multiple hash rings to enable erasure coding to seamlessly address degraded reads during scaling. We realize our design as an in-memory key-value store called ECHash, and conduct testbed experiments on different scaling workloads in both local and cloud environments. We show that ECHash achieves better scaling performance (in terms of scaling throughput and degraded read latency during scaling) over the baseline erasure coding implementation, while maintaining high basic I/O and node repair performance.
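For readers less familiar with the setting, here is a minimal illustrative sketch of generic consistent hashing (not the ECHash or FragEC implementation): nodes and keys hash onto a ring, and each key is owned by the first node clockwise from its position.

```python
import bisect
import hashlib

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        # each node gets many virtual points to smooth the load
        self.points = sorted((h(f"{n}#{v}"), n)
                             for n in nodes for v in range(vnodes))

    def lookup(self, key: str) -> str:
        # the first ring point clockwise from the key's hash owns the key
        i = bisect.bisect(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.lookup("user:42"))   # node currently responsible for this key
```

Adding or removing a node remaps only the keys on the adjacent arcs, which is what makes scaling cheap under replication; the talk's observation is that naively layering erasure coding on top of this re-mapping triggers the expensive parity updates that FragEC is designed to avoid.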
Speaker’s Bio:
Patrick Lee is now an Associate Professor at CUHK CSE. Please refer to http://www.cse.cuhk.edu.hk/~pclee.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
08 November
3:00 pm - 4:00 pm
Complexity Management in the Design of Cyber-Physical Systems
Category
Seminar Series 2019/2020
Speaker:
Prof. Hermann KOPETZ
Professor Emeritus
Technical University of Vienna
Abstract:
The human effort required to understand, design, and maintain a software system depends on the complexity of the artifact. After a short introduction into the different facets of complexity, this talk deals with the characteristics of multi-level models and the appearance of emergent phenomena. The focus of the core section of the talk is a discussion of simplification principles in the design of Cyber-Physical Systems. The most widely used simplification principle, divide and conquer, partitions a large system horizontally, temporally, or vertically into nearly independent parts that are small enough that their behavior can be understood, considering the limited capacity of the human cognitive apparatus. The most effective, and most difficult, simplification principle is the new conceptualization of the emergent properties of interacting parts.
A more detailed discussion of the topic is contained in the upcoming book Simplicity is Complex: Foundations of Cyber-Physical System Design, to be published by Springer Verlag in the summer of 2019.
Speaker’s Bio:
Hermann Kopetz received a PhD degree in Physics sub auspiciis praesidentis from the University of Vienna in 1968 and has been professor emeritus at the Technical University of Vienna since 2011. He is the chief architect of the time-triggered technology for dependable embedded systems and a co-founder of the company TTTech. The time-triggered technology is deployed in leading aerospace, automotive and industrial applications. Kopetz is a Life Fellow of the IEEE and a full member of the Austrian Academy of Science. He received a Dr. honoris causa degree from the University Paul Sabatier in Toulouse in 2007. Kopetz served as the chairman of the IEEE Computer Society Technical Committee on Dependable Computing and Fault Tolerance and on the program committees of many scientific conferences. He is a founding member and a former chairman of IFIP WG 10.4. Kopetz has written a widely used textbook on real-time systems (which has been translated into Chinese) and published more than 200 papers and 30 patents.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
October 2019
25 October
4:00 pm - 5:00 pm
Scalable Bioinformatics Methods For Single Cell Data
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Dr. Joshua Ho
Associate Professor
School of Biomedical Sciences
University of Hong Kong
Abstract:
Single cell RNA-seq and other high throughput technologies have revolutionised our ability to interrogate cellular heterogeneity, with broad applications in biology and medicine. Standard bioinformatics pipelines are designed to process individual data sets containing thousands of single cells. Nonetheless, data sets are increasing in size, and some biological questions can only be addressed by performing large-scale data integration. There is a need to develop scalable bioinformatics tools that can handle large data sets (e.g., with >1 million cells). Our laboratory has been developing scalable bioinformatics tools that make use of modern cloud computing technology, fast heuristic algorithms, and virtual reality visualisation to support scalable data processing, analysis, and exploration of large single cell data. In this talk, we will describe some of these tools and their applications.
Speaker’s Bio:
Dr Joshua Ho is an Associate Professor in the School of Biomedical Sciences at the University of Hong Kong (HKU). Dr Ho completed his BSc (Hon 1, Medal) and PhD in Bioinformatics at the University of Sydney, and undertook postdoctoral research at Harvard Medical School. His research focuses on advanced bioinformatics technology, spanning scalable single-cell analytics, metagenomic data analysis, and digital healthcare technology (such as mobile health, wearable devices, and healthcare artificial intelligence). Dr Ho has over 80 publications, including first- or senior-author papers in leading journals such as Nature, Genome Biology, Nucleic Acids Research and Science Signaling. His research excellence has been recognized by the 2015 NSW Ministerial Award for Rising Star in Cardiovascular Research, the 2015 Australian Epigenetics Alliance’s Illumina Early Career Research Award, and the 2016 Young Tall Poppy Science Award.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
24 October
11:30 am - 12:30 pm
Temporal Logic Semantics for Teleo-Reactive Robotic Agent Programs
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Keith L. Clark
Emeritus Professor
Imperial College London
Abstract:
Teleo-Reactive (TR) robotic agent programs comprise sequences of guarded action rules clustered into named parameterised procedures. Their ancestry goes back to the first cognitive robot, Shakey. Like Shakey, a TR programmed robotic agent has a deductive Belief Store comprising constantly changing predicate logic percept facts, and fixed knowledge facts and rules for querying the percepts. In this talk we introduce TR programming using a simple example expressed in the teleo-reactive programming language TeleoR, which is a syntactic extension of QuLog, a typed logic programming language used for the agent’s Belief Store. The example program illustrates key properties that a TeleoR program should have. We give formal definitions of these key properties, and an informal operational semantics of the evaluation of a TeleoR procedure call. We then formally express the key properties in LTL. Finally we show how their LTL formalisation can be used to prove key properties of TeleoR procedures using the example TeleoR program.
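To make the rule format concrete, here is a minimal sketch in Python rather than TeleoR/QuLog syntax (the toy rules are illustrative): a TR procedure is an ordered list of guard-action rules, continuously re-evaluated against the Belief Store, and the first rule whose guard holds fires.

```python
def tr_step(beliefs, rules):
    """rules: ordered list of (guard, action); guards query the belief store."""
    for guard, action in rules:
        if guard(beliefs):
            return action(beliefs)
    raise RuntimeError("no guard fired: a TR procedure should be total")

# Toy procedure: drive a robot toward a target x-coordinate.
rules = [
    (lambda b: b["x"] == b["target"], lambda b: "stop"),
    (lambda b: b["x"] < b["target"],  lambda b: "move_right"),
    (lambda b: True,                  lambda b: "move_left"),
]
print(tr_step({"x": 3, "target": 7}, rules))   # -> "move_right"
```

Whether such a rule set has the desired properties (for instance, that some guard always fires) is the kind of question the talk's LTL formalisation is designed to settle.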
Speaker’s Bio:
Keith Clark has Bachelor degrees in both mathematics and philosophy and a PhD in Computational Logic. He is one of the founders of Logic Programming. His early research was primarily in the theory and practice of LP. His paper: “Negation as Failure” (1978), giving a semantics to Prolog’s negation operator, has over 3000 citations.
In 1981, inspired by Hoare’s CSP, with a PhD student Steve Gregory, he introduced the concepts of committed choice non-determinism and stream communicating and-parallel sub-proofs into logic programming. This restriction of the LP concept was then adopted by the Japanese Fifth Generation Project, which had the goal of building multi-processor knowledge processing computers. Unfortunately, the restrictions meant it was not a natural tool for building knowledge processing applications, and the project failed. Since 1990 his research emphasis has been on the design, implementation and application of multi-threaded rule-based programming languages, with a strong declarative component, for multi-agent and cognitive robotic applications.
He has had visiting positions at Stanford University, UC Santa Cruz, Syracuse University and Uppsala University, amongst others. He is currently an Emeritus Professor at Imperial, and an Honorary Professor at the University of Queensland and the University of New South Wales. He has consulted for the Japanese Fifth Generation Project, Hewlett Packard, IBM, Fujitsu and two start-ups. With colleague Frank McCabe, he founded the company Logic Programming Associates in 1980, which produced and marketed Prolog systems for micro-computers, offering training and consultancy on their use. The star product was MacProlog, with primitives for exploiting the Mac GUI for AI applications.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
15 October
11:00 am - 12:00 pm
LEC: Learning Driven Data-path Equivalence Checking
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Dr. Jiang Long
Apple silicon division
Abstract:
LEC is a learning-based framework for solving the data-path equivalence checking problem in a high-level synthesis design flow, which is gaining popularity in the modern SoC design process, where CPU cores are accompanied by dedicated accelerators for computation-intensive applications. In such a context, the data-path logic is no longer ‘pure’ data computation logic but rather an arbitrary sea of logic, where highly optimized computation-intensive arithmetic components are surrounded by a web of custom control logic. In this setting, the state-of-the-art SAT-sweeping framework at the Boolean level is no longer effective, as the specification and implementation under comparison may not share any internal structural similarities. LEC employs an open architecture, iterative compositional proof strategies, and a learning framework to locate, isolate and reverse-engineer the true bottlenecks in order to reason about their equivalence at a higher level. The effectiveness of the LEC procedures is demonstrated by benchmarking results on a set of realistic industrial problems.
Speaker’s Bio:
Jiang graduated from the Computer Science Department at Jilin University, Changchun, China in 1992. In 1996, Jiang entered the graduate program in Computer Science at Tsinghua University, Beijing, China. A year later, from 1997 to 1999, Jiang studied in the Computer Science Department at the University of Texas at Austin as a graduate student. It was during the years at UT-Austin that Jiang developed an interest in the field of formal verification of digital systems, a focus he has maintained ever since. Between 2000 and 2014, Jiang worked on EDA formal verification tool development at Synopsys Inc and later at Mentor Graphics Corporation. Since March 2014, Jiang has worked in Apple's silicon division on SoC design formal verification, currently focusing on verification methodology and tool development for Apple CPU design and verification. While working in industry, between 2008 and 2017, Jiang completed his PhD degree at the EECS Department of the University of California at Berkeley in the area of logic synthesis and verification. Jiang's dissertation is on reasoning about high-level constructs for hardware and software formal verification in the context of a high-level synthesis design flow.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
11 October
11:00 am - 12:00 pm
From 7,000X Model Compression to 100X Acceleration – Achieving Real-Time Execution of ALL DNNs on Mobile Devices
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Yanzhi Wang
Department of Electrical and Computer Engineering
Northeastern University
Abstract:
This presentation focuses on two recent contributions on model compression and acceleration of deep neural networks (DNNs). The first is a systematic, unified DNN model compression framework based on the powerful optimization tool ADMM (Alternating Direction Method of Multipliers), which applies to non-structured and various types of structured weight pruning as well as weight quantization techniques for DNNs. It achieves unprecedented model compression rates on representative DNNs, consistently outperforming competing methods. When weight pruning and quantization are combined, we achieve up to 6,635X weight storage reduction without accuracy loss, which is two orders of magnitude higher than prior methods. Our most recent results provide a comprehensive comparison between non-structured and structured weight pruning with quantization in place, and suggest that non-structured weight pruning is not desirable on any hardware platform.
However, using mobile devices as an example, we show that existing model compression techniques, even assisted by ADMM, are still difficult to translate into notable acceleration or real-time execution of DNNs. Therefore, we need to go beyond the existing model compression schemes and develop novel schemes that are desirable for both algorithm and hardware. Compilers act as the bridge between algorithm and hardware, maximizing parallelism and hardware performance. We develop a combination of pattern pruning and connectivity pruning, which is desirable at the theory, algorithm, compiler, and hardware levels. We achieve 18.9ms inference time for the large-scale DNN VGG-16 on a smartphone without accuracy loss, which is 55X faster than TensorFlow-Lite. We can potentially enable 100X faster, real-time execution of all DNNs using the proposed framework.
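As a pocket-sized illustration of the ADMM idea in the first contribution (a toy on a quadratic loss, far removed from the full DNN framework), weight pruning can be phrased as loss minimization subject to an "at most k nonzeros" constraint, with ADMM alternating a gradient step against a projection onto that constraint set.

```python
import numpy as np

def project_topk(z, k):
    """Euclidean projection onto {z : at most k nonzeros}: keep the
    k largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def admm_prune(w, grad_fn, k, rho=1e-2, lr=1e-2, iters=500):
    z = project_topk(w, k)            # auxiliary sparse copy of the weights
    u = np.zeros_like(w)              # scaled dual variable
    for _ in range(iters):
        # w-update: descend on the loss plus the augmented-Lagrangian term
        w = w - lr * (grad_fn(w) + rho * (w - z + u))
        z = project_topk(w + u, k)    # z-update: project onto the sparse set
        u = u + w - z                 # dual update: accumulate the violation
    return project_topk(w, k)         # final hard pruning

# toy usage: mean-squared-error loss ||Aw - b||^2 / n and its gradient
A, b = np.random.randn(50, 20), np.random.randn(50)
grad = lambda w: A.T @ (A @ w - b) / len(b)
w_sparse = admm_prune(np.zeros(20), grad, k=5)
```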
Speaker’s Bio:
Prof. Yanzhi Wang is currently an assistant professor in the Department of Electrical and Computer Engineering at Northeastern University. He received his Ph.D. degree in Computer Engineering from the University of Southern California (USC) in 2014, and his B.S. degree with Distinction in Electronic Engineering from Tsinghua University in 2009.
Prof. Wang’s current research interests mainly focus on DNN model compression and energy-efficient implementation (on various platforms). His research has maintained the highest model compression rates on representative DNNs since 09/2018. His work on AQFP superconducting-based DNN acceleration achieves by far the highest energy efficiency among all hardware devices. His work has been published broadly in top conference and journal venues (e.g., ASPLOS, ISCA, MICRO, HPCA, ISSCC, AAAI, ICML, CVPR, ICLR, IJCAI, ECCV, ICDM, ACM MM, DAC, ICCAD, FPGA, LCTES, CCS, VLDB, ICDCS, TComputer, TCAD, JSAC, TNNLS, Nature SP, etc.), and has been cited around 5,000 times. He has received four Best Paper Awards, another eight Best Paper Nominations and three Popular Paper Awards.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
September 2019
19 September
2:30 pm - 3:30 pm
Facilitating Programming for Data Science via DSLs and Machine Learning
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Artur Andrzejak
University of Heidelberg
Germany
Abstract:
Data processing and analysis is becoming relevant for a growing number of domains and applications, ranging from natural science to industrial applications. Given the variety of scenarios and the need for flexibility, each project typically requires custom programming. This task can pose a challenge for domain specialists (typically non-developers), and frequently becomes a major cost and time factor in crafting a solution. The problem is aggravated further when performance or scalability is important, due to the increased complexity of developing parallel/distributed software.
This talk focuses on selected solutions to these challenges. In particular, we will discuss NLDSL [1], a tool for the accelerated implementation of Domain Specific Languages (DSLs) for libraries following the “fluent interface” programming model. We showcase how this solution facilitates script development in the context of popular data science frameworks/libraries such as (Python) Pandas, scikit-learn, Apache Spark, and Matplotlib. The key elements are “no overhead” integration of DSL and Python code, DSL-level code recommendations, and support for adding ad-hoc DSL elements tailored to even small application domains.
We will also discuss solutions utilizing machine learning. One of them is code fragment recommenders. Here, frequently used code fragments (snippets) are extracted from Stack Overflow/GitHub, generified, and stored in a database. During development they are recommended to users based on textual queries, selection of relevant data, user interaction history, and other inputs.
Another work attempts to combine the approach to Python code completion via neural attention and pointer networks by Jian Li et al. [2] with probabilistic models for code [3]. Our study shows some promising improvements in accuracy.
If time permits, we will also take a quick look at alternative approaches for accelerated programming in context of data analysis: natural language interfaces for code development (e.g. bots), and the emerging technologies for program synthesis.
[1] Artur Andrzejak, Kevin Kiefer, Diego Costa, Oliver Wenz, Agile Construction of Data Science DSLs (Tool Demo), ACM SIGPLAN Int. Conf. on Generative Programming: Concepts & Experiences (GPCE), 21-22 October 2019, Athens, Greece.
[2] Jian Li, Yue Wang, Michael R. Lyu, and Irwin King, Code completion with neural attention and pointer networks. In Proc. 27th International Joint Conference on Artificial Intelligence (IJCAI’18), 2018, AAAI Press.
[3] Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: Probabilistic model for code. In Proc. 33rd International Conference on Machine Learning, 20–22 June 2016, New York, USA.
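For readers unfamiliar with the term, here is a minimal illustrative sketch (a toy, not NLDSL itself) of the “fluent interface” style the tool targets: each method returns the pipeline object, so chained calls read like a small DSL sentence while mapping directly onto ordinary library calls.

```python
class Pipeline:
    """A tiny fluent query pipeline over a list of dict rows."""
    def __init__(self, rows):
        self.rows = rows

    def where(self, pred):
        return Pipeline([r for r in self.rows if pred(r)])

    def select(self, *cols):
        return Pipeline([{c: r[c] for c in cols} for r in self.rows])

    def to_list(self):
        return self.rows

data = [{"name": "ada", "age": 36}, {"name": "bob", "age": 17}]
adults = Pipeline(data).where(lambda r: r["age"] >= 18).select("name").to_list()
print(adults)   # [{'name': 'ada'}]
```

Because every step in such a chain has a regular, compositional shape, it is a natural target both for DSL generation and for the DSL-level code recommendations described above.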
Speaker’s Bio:
Artur Andrzejak received a PhD degree in computer science from ETH Zurich in 2000 and a habilitation degree from FU Berlin in 2009. He was a postdoctoral researcher at HP Labs Palo Alto from 2001 to 2002 and a researcher at ZIB Berlin from 2003 to 2010. He led the CoreGRID Institute on System Architecture (2004 to 2006) and acted as Deputy Head of the Data Mining Department at I2R Singapore in 2010. Since 2010 he has been a W3 professor at the University of Heidelberg, where he leads the Parallel and Distributed Systems group. His research interests include scalable data analysis, reliability of complex software systems, and cloud computing. To find out more about his research group, visit http://pvs.ifi.uni-heidelberg.de/.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
13 September
4:00 pm - 5:00 pm
How To Do High Quality Research And Write Acceptable Papers?
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Michael R. Lyu
Professor and Chairman
Computer Science & Engineering Department
The Chinese University of Hong Kong
Abstract:
Publish or Perish. This is the pressure of most academic researchers. Even if your advisor(s) do not ask you to publish a certain number of papers as the graduation requirement, performing high quality research is still essential. In this talk I will share my experience in the question all graduate students will ask, “How to do high quality research and write acceptable papers?”
Speaker’s Bio:
Michael Rung-Tsong Lyu is a Professor and Chairman of the Computer Science and Engineering Department at The Chinese University of Hong Kong. He worked at the Jet Propulsion Laboratory, the University of Iowa, Bellcore, and Bell Laboratories. His research interests include software reliability engineering, distributed systems, fault-tolerant computing, service computing, multimedia information retrieval, and machine learning. He has published 500 refereed journal and conference papers in these areas, which have recorded 30,000 Google Scholar citations and an h-index of 85. He served as an Associate Editor of IEEE Transactions on Reliability, IEEE Transactions on Knowledge and Data Engineering (TKDE), Journal of Information Science and Engineering, and IEEE Transactions on Services Computing. He is currently on the editorial boards of ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Access, and the Software Testing, Verification and Reliability journal (STVR). He was elected an IEEE Fellow (2004), AAAS Fellow (2007), Croucher Senior Research Fellow (2008), IEEE Reliability Society Engineer of the Year (2010), and ACM Fellow (2015), and received the Overseas Outstanding Contribution Award from the China Computer Federation in 2018. Prof. Lyu received his B.Sc. from National Taiwan University, his M.Sc. from the University of California, Santa Barbara, and his Ph.D. in Computer Science from the University of California, Los Angeles.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
11 September
2:30 pm - 3:30 pm
Scrumptious Sandwich Problems: A Tasty Retrospective for After Lunch
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2019/2020
Speaker:
Prof. Martin Charles Golumbic
University of Haifa
Abstract:
Graph sandwich problems are a prototypical example of checking consistency when faced with only partial data. A sandwich problem for a graph with respect to a graph property Π takes a partially specified graph, i.e., only some of the edges and non-edges are given, and asks: can this graph be completed to a graph which has the property Π? The graph sandwich problem was investigated for a large number of families of graphs in a 1995 paper by Golumbic, Kaplan and Shamir, and over 200 subsequent papers by many researchers have been published since.
In some cases, the problem is NP-complete, such as for interval graphs, comparability graphs, chordal graphs and others. In other cases, the sandwich problem can be solved in polynomial time, such as for threshold graphs, cographs, and split graphs. There are also interesting special cases of the sandwich problem, most notably the probe graph problem, where the unspecified edges are confined to be within a subset of the vertices. Similar sandwich problems can also be defined for hypergraphs, matrices, posets and Boolean functions, namely, completing partially specified structures such that the result satisfies a desirable property. In this talk, we will present a survey of results that we and others have obtained in this area during the past decade.
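To make the problem statement concrete, here is a minimal brute-force sketch (an illustration of the definition only; the talk concerns the complexity landscape, not this exponential search): given the forced edges and the optional pairs, try every completion and test the property Π.

```python
from itertools import combinations

def sandwich(n, forced, optional, has_property):
    """Return a completion of `forced` (a set of edges) using some subset
    of the `optional` pairs that satisfies has_property, or None.
    Exponential in len(optional)."""
    for r in range(len(optional) + 1):
        for extra in combinations(optional, r):
            edges = forced | set(extra)
            if has_property(n, edges):
                return edges
    return None

# Toy property: the completed graph is a clique on all n vertices.
is_clique = lambda n, E: len(E) == n * (n - 1) // 2
print(sandwich(3, {(0, 1)}, [(0, 2), (1, 2)], is_clique))
```

The interesting question, of course, is for which properties Π this search can be replaced by a polynomial-time algorithm, and for which it provably cannot (unless P = NP).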
Speaker’s Bio:
Martin Charles Golumbic is Emeritus Professor of Computer Science and Founder of the Caesarea Edmond Benjamin de Rothschild Institute for Interdisciplinary Applications of Computer Science at the University of Haifa. He is the founding Editor-in-Chief of the journal “Annals of Mathematics and Artificial Intelligence” and is or has been a member of the editorial boards of several other journals including “Discrete Applied Mathematics”, “Constraints” and “AI Communications”. His current area of research is in combinatorial mathematics interacting with real world problems in computer science and artificial intelligence.
Professor Golumbic received his Ph.D. in mathematics from Columbia University in 1975 under the direction of Samuel Eilenberg. He has held positions at the Courant Institute of Mathematical Sciences of New York University, Bell Telephone Laboratories, the IBM Israel Scientific Center and Bar-Ilan University. He has also had visiting appointments at the Université de Paris, the Weizmann Institute of Science, Ecole Polytechnique Fédérale de Lausanne, Universidade Federal do Rio de Janeiro, Rutgers University, Columbia University, Hebrew University, IIT Kharagpur and Tsinghua University.
He is the author of the book “Algorithmic Graph Theory and Perfect Graphs” and coauthor of the book “Tolerance Graphs”. He has written many research articles in the areas of combinatorial mathematics, algorithmic analysis, expert systems, artificial intelligence, and programming languages, and has been a guest editor of special issues of several journals. He is the editor of the books “Advances in Artificial Intelligence, Natural Language and Knowledge-based Systems”, and “Graph Theory, Combinatorics and Algorithms: Interdisciplinary Applications”. His most recent book is “Fighting Terror Online: The Convergence of Security, Technology, and the Law”, published by Springer-Verlag.
Prof. Golumbic was elected as a Foundation Fellow of the Institute of Combinatorics and its Applications in 1995, and has been a Fellow of the European Artificial Intelligence society ECCAI since 2005. He is a member of the Academia Europaea, honoris causa (elected 2013). Martin Golumbic has been the chairman of over fifty national and international symposia. He is a member of the Phi Beta Kappa, Pi Mu Epsilon, Phi Kappa Phi, and Phi Eta Sigma honor societies. He is married, the father of four bilingual, married daughters, and has seven granddaughters and five grandsons.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
August 2019
22 August
11:00 am - 12:00 pm
Bitcoin, blockchains and DLT applications
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Prof. Stefano Bistarelli
Department of Mathematics and Informatics
University of Perugia
Italy
Abstract:
Nowadays there are more than one and a half thousand cryptocurrencies and (public) blockchains, with an overall capitalization of more than 300 billion USD. The most famous cryptocurrency (and blockchain) is Bitcoin, described in a white paper written under the pseudonym “Satoshi Nakamoto”. His invention is an open-source, peer-to-peer digital currency (purely electronic, with no physical manifestation). Money transactions do not require a third-party intermediary, such as credit card issuers. The Bitcoin network is completely decentralised, with all parts of transactions performed by the users of the system. A complete record of every transaction and every Bitcoin user’s encrypted identity is maintained on a public ledger. The seminar will introduce Bitcoin and blockchain with a detailed view of transactions and some insight into a specific application (e-voting).
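A minimal illustrative sketch (a toy, omitting proof-of-work, signatures, and the peer-to-peer protocol) of the ledger's core data structure: each block commits to the hash of its predecessor, so altering any past transaction invalidates every later block.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev": prev_hash, "txs": transactions}

genesis = make_block("0" * 64, ["coinbase -> alice: 50"])
b1 = make_block(block_hash(genesis), ["alice -> bob: 10"])
b2 = make_block(block_hash(b1), ["bob -> carol: 5"])

# Tampering with an old transaction breaks the chain of commitments:
genesis["txs"][0] = "coinbase -> mallory: 50"
assert b1["prev"] != block_hash(genesis)   # b1 no longer links to genesis
```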
Speaker’s Bio:
Stefano Bistarelli has been Associate Professor of Computer Science at the Department of Mathematics and Informatics at the University of Perugia (Italy) since November 2008. Previously he was Associate Professor at the Department of Sciences at the University “G. d’Annunzio” in Chieti-Pescara from September 2005, and assistant professor in the same department from September 2002. He has also been a research associate of the Institute of Informatics and Telematics (IIT) at the CNR (Italian National Research Council) in Pisa since 2002. He obtained his Ph.D. in Computer Science in 2001; his thesis was awarded as the best Theoretical Computer Science and Artificial Intelligence thesis (respectively by the Italian Chapter of the European Association for Theoretical Computer Science (EATCS) and by the Italian Association for Artificial Intelligence (AI*IA)). In the same year he was also nominated by the IIT-CNR for the Cor Baayen European award and selected as Italy's candidate for the award. He held postdoctoral positions at the University of Padua and at the IIT-CNR in Pisa, and was a visiting researcher at the Chinese University of Hong Kong and at UCC in Cork. Collaborations, invited talks and visits have also involved other research centres (INRIA, Paris; IC-Park, London; Department of Information Systems and Languages, Barcelona; ILLC, Amsterdam; Computer Science Institute LMU, Munich; EPFL, Lausanne; S.R.I., San Francisco). He has organized and served on the PC of several workshops in the constraints and security fields; he has also chaired the Constraint track at FLAIRS and currently chairs the same track at the ACM SAC symposium. His research interests are related to (soft) constraint programming and solving. He also works on computer security and, recently, on QoS. On these topics he has published more than 100 articles and a book, and has edited a special journal issue on soft constraints. He is also on the editorial board of the electronic version of the Open AI Journal (Bentham Open).
Enquiries: Ms. Shirley Lau at tel. 3943 8439
19 August
11:00 am - 12:00 pm
Integrating Reasoning on Combinatorial Optimisation Problems into Machine Learning
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Dr. Emir Demirovic
School of Computing and Information Systems
University of Melbourne
Australia
Abstract:
We study the predict+optimise problem, where machine learning and combinatorial optimisation must interact to achieve a common goal. These problems are important when optimisation needs to be performed on input parameters that are not fully observed but must instead be estimated using machine learning. Our aim is to develop machine learning algorithms that take into account the underlying combinatorial optimisation problem. While a plethora of sophisticated algorithms and approaches are available in machine learning and optimisation respectively, no established methodology yet exists for solving problems that require both. In this talk, we introduce the problem, discuss its difficulties, and present our progress based on our papers from CPAIOR’19 and IJCAI’19.
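The two-stage setting can be made concrete with a toy sketch (ours, not the speaker's algorithms): predict item values with a least-squares model, optimise a knapsack over the predictions, and measure the regret of the resulting decision against the true values. All data here is synthetic.

```python
import numpy as np

def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming; returns chosen item indices."""
    n = len(values)
    best = np.zeros((n + 1, capacity + 1))
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i, c] = best[i - 1, c]
            if weights[i - 1] <= c:
                take = best[i - 1, c - weights[i - 1]] + values[i - 1]
                best[i, c] = max(best[i, c], take)
    chosen, c = [], capacity          # backtrack to recover the chosen set
    for i in range(n, 0, -1):
        if best[i, c] != best[i - 1, c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return chosen

rng = np.random.default_rng(0)
n_items, capacity = 10, 15
weights = rng.integers(1, 6, n_items)
features = rng.normal(size=(n_items, 3))
true_values = features @ np.array([2.0, -1.0, 0.5]) + 5.0

# Stage 1 (predict): fit item values from features by least squares.
coef, *_ = np.linalg.lstsq(features, true_values, rcond=None)
predicted = features @ coef

# Stage 2 (optimise): solve the knapsack with *predicted* values, then
# score the decision against the *true* values (the regret).
decision = knapsack(predicted, weights, capacity)
oracle = knapsack(true_values, weights, capacity)
regret = true_values[oracle].sum() - true_values[decision].sum()
print(f"regret of predict-then-optimise decision: {regret:.3f}")
```

The research question in the talk is how to train the predictor so that this regret, rather than ordinary prediction error, is minimised.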
Speaker’s Bio:
Dr. Emir Demirovic is an associate lecturer and postdoctoral researcher (research fellow) at the University of Melbourne in Australia. He received his PhD from the Vienna University of Technology (TU Wien) and worked for seven months at MCP, a production planning and scheduling company. Dr. Demirovic’s primary research interest lies in solving complex real-world problems through combinatorial optimisation and combinatorial machine learning, which combines optimisation and machine learning. His work includes both developing general-purpose algorithms and applications. An example of such a problem is designing algorithms that generate high-quality timetables for high schools based on the curriculum, teacher availability, and pedagogical requirements. Another example is optimising a production plan while having only an estimate of costs rather than precise numbers.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
13 August
11:00 am - 12:00 pm
Machine learning with problematic datasets in diverse applications
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Prof. Chris Willcocks
Durham University
UK
Abstract:
Machine learning scientists often ask “What is the distribution from which the dataset was generated?” and, subsequently, “How do we learn to transform observations from what we are given to what the task requires?”. This seminar highlights successful research where our group took explicit steps to deal with problematic datasets in several different applications: building robust medical diagnosis systems from a very limited amount of poorly labeled data, hiding secret messages in plain sight in tweets without changing the underlying message, capturing plausible interpolations and successful dockings of proteins despite significant dataset bias, through to recent advances in meta learning to tackle the evolving task distribution in the ongoing anti-counterfeiting arms race.
Speaker’s Bio:
Chris G. Willcocks is a recently appointed Assistant Professor in the Innovative Computing Group at the Department of Computer Science at Durham University in the UK, where he currently teaches the year 3 Machine Learning and year 2 Cyber Security sub-modules. Before 2016, he worked on industrial machine learning projects for P&G, Dyson, Unilever, and the British Government in the areas of Computational Biology, Security, Anti-Counterfeiting and Medical Image Computing. In 2016, he founded the Durham University research spinout company Intogral Limited, where he successfully led research and development commercialisation through to Series A investment, deploying ML models used by large multinationals in diverse markets in Medicine, Pharmaceutics, and Security. Since returning to academia, he has recently published in top journals in Pattern Analysis, Medical Imaging, and Information Security, where his theoretical interests are in Variational Bayesian methods, Riemannian Geometry, Level-set methods, and Meta Learning.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
06 August
4:00 pm - 5:00 pm
Abusing Native App-like Features in Web Applications
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Prof. Sooel Son
Assistant Professor, KAIST School of Computing (SoC) and Graduate School of Information Security (GSIS)
Abstract:
Progressive Web Apps (PWAs) are a new generation of Web applications designed to provide native app-like browsing experiences even when a browser is offline. PWAs make full use of new HTML5 features, including push notifications, caching, and service workers, to provide low-latency and rich Web browsing experiences. We conduct the first systematic study of the security and privacy aspects unique to PWAs. We identify security flaws in mainstream browsers, as well as design flaws in popular third-party push services, that exacerbate the risk of phishing. We introduce a new side-channel attack that infers the victim’s history of visited PWAs; the proposed attack exploits the offline browsing feature of PWAs using the cache. We also demonstrate a cryptocurrency mining attack that abuses service workers.
Speaker’s Bio:
Sooel Son is an assistant professor at KAIST School of Computing (SoC) and Graduate School of Information Security (GSIS). He received his Computer Science PhD from The University of Texas at Austin. Before KAIST, he worked on building frameworks that identify invasive Android applications at Google. His research focuses on Web security and privacy problems. He is interested in analyzing Web applications, finding Web vulnerabilities, and implementing new systems to find such vulnerabilities.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
July 2019
24 July
2:30 pm - 3:30 pm
How Physical Synthesis Flows
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Dr. Patrick Groeneveld
Stanford University
Abstract:
In this talk we will analyze how form follows function in physical design. By analyzing recent mobile chips and chips for self-driving cars, we can reason about the structure of advanced billion-transistor systems. The strengths and weaknesses of the hierarchical abstractions will be matched with the sweet spots of the core physical synthesis algorithms. These algorithms are chained in a physical design flow that consists of hundreds of steps, each of which may have unexpected interactions. Trading off multiple conflicting objectives such as area, speed and power is sometimes more an art than a science. The talk will present the underlying principles that eventually lead to design closure.
Speaker’s Bio:
Before working at Cadence and Synopsys, Patrick Groeneveld was Chief Technologist at Magma Design Automation, where he was part of the team that developed a groundbreaking RTL-to-GDS2 synthesis product. Patrick was also a Full Professor of Electrical Engineering at Eindhoven University. He is currently teaching in the EE department at Stanford University and also serves as finance chair on the Executive Committee of the Design Automation Conference. Patrick received his MSc and PhD degrees from Delft University of Technology in the Netherlands. In his spare time, Patrick enjoys flying airplanes, running, electric vehicles, tinkering and reading useless information.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
22 July
11:00 am - 12:00 pm
From Automated Privacy Leak Analysis to Privacy Leak Prevention for Mobile Apps
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Dr. Sencun Zhu
Associate Professor
Pennsylvania State University
Abstract:
With the enormous popularity of smartphones, millions of mobile apps are developed to provide rich functionalities for users by accessing certain personal data, leading to great privacy concerns. To address this problem, many approaches have been proposed to detect privacy disclosures in mobile apps, but they largely fail to automatically determine whether the privacy disclosures are necessary for the functionality of apps. In this talk, we will introduce LeakDoctor, an analysis system that integrates dynamic response differential analysis with static response taint analysis to automatically diagnose privacy leaks by judging whether a privacy disclosure from an app is necessary for some functionality of the app. Furthermore, we will present the design, implementation, and evaluation of a context-aware real-time mediation system that bridges the semantic gap between GUI foreground interaction and background access, to protect mobile apps from leaking users’ private information.
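As a rough illustration of the differential idea behind LeakDoctor (the real system instruments Android apps and analyzes network traffic; everything below, including both toy "apps", is hypothetical): if replacing the private input with a mock value leaves the app's functional response unchanged, the disclosure was arguably unnecessary.

```python
import json

def response(app, private_value):
    """Stand-in for observing the app's network response (hypothetical)."""
    return app(private_value)

def disclosure_is_necessary(app, real, mock) -> bool:
    """Differential test: if substituting mock private data changes the
    functional response, the disclosure plausibly serves a function."""
    return response(app, real) != response(app, mock)

# Hypothetical apps: a weather app needs coarse location; a flashlight does not.
weather = lambda loc: json.dumps({"forecast": f"rain near {loc[:4]}"})
flashlight = lambda loc: json.dumps({"status": "on"})

print(disclosure_is_necessary(weather, "22.41,114.21", "0.00,0.00"))     # True
print(disclosure_is_necessary(flashlight, "22.41,114.21", "0.00,0.00"))  # False
```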
Speaker’s Bio:
Dr. Sencun Zhu is an associate professor in the Department of Computer Science and Engineering at The Pennsylvania State University (PSU). He received the B.S. degree in precision instruments from Tsinghua University, the M.S. degree in signal processing from the University of Science and Technology of China, Graduate School at Beijing, and the Ph.D. degree in information technology from George Mason University in 1996, 1999, and 2004, respectively. His research interests include wireless and mobile security, software and network security, fraud detection, and user online safety and privacy. His research has been funded by the National Science Foundation, the National Security Agency, and the Army Research Office/Lab. He received an NSF CAREER Award in 2007 and a Google Faculty Research Award in 2013. More details of his research can be found at http://www.cse.psu.edu/~sxz16/.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
10 July
2:00 pm - 3:00 pm
Building Error-Resilient Machine Learning Systems for Safety-Critical Applications
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Prof. Karthik Pattabiraman
Associate Professor
ECE Department and CS Department (affiliation)
University of British Columbia (UBC)
Abstract:
Machine learning (ML) has increasingly been adopted in safety-critical systems such as autonomous vehicles (AVs) and home robotics. In these domains, reliability and safety are important considerations, and hence it is critical to ensure the resilience of ML systems to faults and errors. At the same time, soft errors are increasing in commodity computer systems due to the effects of technology scaling and manufacturing variations in hardware design. Further, traditional solutions for hardware faults such as Triple-Modular Redundancy are prohibitively expensive in terms of energy consumption, and are hence not practical in this domain. Therefore, there is a compelling need to ensure the resilience of ML applications to soft errors on commodity hardware platforms. In this talk, I will describe two projects from my group at UBC on ensuring the error resilience of ML applications deployed in the AV domain. I will also talk about some of the challenges in this area and the work we’re doing to address them.
This is joint work with my students, Nvidia Research, and Los Alamos National Labs.
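A minimal flavour of the fault-injection experiments such work builds on (our sketch; real studies inject faults into accelerator datapaths and DNN frameworks): flip a single bit in a trained weight and observe how strongly the output is corrupted, which depends heavily on the bit position.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a 32-bit float, mimicking a soft error in memory."""
    (as_int,) = struct.unpack("I", struct.pack("f", x))
    (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
    return flipped

# A toy 'model' with a single weight: y = w * x.
w, x = 0.75, 2.0
print(f"fault-free output: {w * x}")
for bit in (0, 20, 30):                # low mantissa, high mantissa, exponent
    w_faulty = flip_bit(w, bit)
    print(f"bit {bit:2d} flipped: w={w_faulty!r} -> y={w_faulty * x}")
```

Low mantissa flips barely change the output, while an exponent flip can blow it up by many orders of magnitude; resilience techniques aim to catch exactly these high-impact corruptions cheaply.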
Speaker’s Bio:
Karthik Pattabiraman received his M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign (UIUC) in 2004 and 2009 respectively. After a post-doctoral stint at Microsoft Research (MSR), Karthik joined the University of British Columbia (UBC) in 2010, where he is now an associate professor of electrical and computer engineering. Karthik’s research interests are in building error-resilient software systems, and in software engineering and security. Karthik has won distinguished paper/runner-up awards at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2018, the IEEE International Conference on Software Testing (ICST) 2013, and the IEEE/ACM International Conference on Software Engineering (ICSE) 2014. He is a recipient of the distinguished alumni early career award from UIUC’s Computer Science department in 2018, the NSERC Discovery Accelerator Supplement (DAS) award in 2015, the 2018 Killam Faculty Research Prize, and the 2016 Killam Faculty Research Fellowship at UBC. He also won the William Carter award in 2008 for the best PhD thesis in the area of fault-tolerant computing. Karthik is a senior member of the IEEE, and the vice-chair of the IFIP Working Group on Dependable Computing and Fault-Tolerance (10.4). Find out more about him at: http://blogs.ubc.ca/karthik
Enquiries: Ms. Shirley Lau at tel. 3943 8439
08 July
2:30 pm - 3:30 pm
Declarative Programming in Software-defined Networks: Past, Present, and the Road Ahead
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2018/2019
Speaker:
Dr. Loo Boon Thau
Professor of Computer and Information Science Department
University of Pennsylvania
Abstract:
Declarative networking is a technology that has transformed the way software-defined networking programs are written and deployed. Instead of writing low-level code, network operators can write high-level specifications that can be verified and compiled into actual implementations. This talk describes 15 years of research in declarative networking, tracing its roots as a domain-specific language, to its role in the verification and debugging of networks, and its commercial use as a declarative network analytics engine. The talk concludes with a peek into the future of declarative network programming, in the areas of examples-guided network synthesis and infrastructure-aware declarative query processing.
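A flavour of the "high-level specifications" the abstract mentions: the classic two-rule reachability program from the declarative networking literature, evaluated here by a naive bottom-up fixpoint. This sketch is ours, for illustration only; real declarative networking engines compile such rules into distributed dataflows.

```python
# Datalog-style rules (informal):
#   reachable(S, D) :- link(S, D).
#   reachable(S, D) :- link(S, Z), reachable(Z, D).
def reachability(links):
    """Naive bottom-up fixpoint evaluation of the two rules above."""
    reachable = set(links)
    while True:
        derived = {(s, d) for (s, z) in links
                          for (z2, d) in reachable if z == z2}
        if derived <= reachable:       # nothing new: fixpoint reached
            return reachable
        reachable |= derived

links = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(reachability(links)))
```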
Speaker’s Bio:
Boon Thau Loo is a Professor in the Computer and Information Science (CIS) department at the University of Pennsylvania. He holds a secondary appointment in the Electrical and Systems Engineering (ESE) department. He is also the Associate Dean of the Master’s and Professional Programs, where he oversees all masters programs at the School of Engineering and Applied Science. He is also currently the interim director of the Distributed Systems Laboratory (DSL), an inter-disciplinary systems research lab bringing together researchers in networking, distributed systems, and security. He received his Ph.D. degree in Computer Science from the University of California at Berkeley in 2006. Prior to his Ph.D., he received his M.S. degree from Stanford University in 2000, and his B.S. degree with highest honors from University of California-Berkeley in 1999. His research focuses on distributed data management systems, Internet-scale query processing, and the application of data-centric techniques and formal methods to the design, analysis and implementation of networked systems. He was awarded the 2006 David J. Sakrison Memorial Prize for the most outstanding dissertation research in the Department of EECS at University of California-Berkeley, and the 2007 ACM SIGMOD Dissertation Award. He is a recipient of the NSF CAREER award (2009), the Air Force Office of Scientific Research (AFOSR) Young Investigator Award (2012) and Penn’s Emerging Inventor of the Year award (2018). He has published 100+ peer-reviewed publications and has supervised twelve Ph.D. dissertations. His graduated Ph.D. students include 3 tenure-track faculty members and winners of 4 dissertation awards.
In addition to his academic work, he actively participates in entrepreneurial activities involving technology transfer. He is the Chief Scientist at Termaxia, a software-defined storage startup based in Philadelphia that he co-founded in 2015. Termaxia offers low-power high-performance software-defined storage solutions targeting the exabyte-scale storage market, with customers in the US, China, and Southeast Asia. Prior to Termaxia, he co-founded Gencore Systems (Netsil) in 2014, a cloud performance analytics company that spun out of his research team at Penn, commercializing his research on the Scalanytics declarative analytics platform. The company was successfully acquired by Nutanix Inc in 2018. He has also published several papers with industry partners (e.g., AT&T, HP Labs, Intel, LogicBlox, Microsoft), applying research to real-world systems and resulting in production deployments and patents.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
EDA for Emerging Technologies
Location
Speaker:
Prof. Anupam Chattopadhyay
Associate Professor, NTU
Abstract:
The continued scaling of horizontal and vertical physical features of silicon-based complementary metal-oxide-semiconductor (CMOS) transistors, termed “More Moore”, has a limited runway and will eventually be replaced by “Beyond CMOS” technologies. There has been a tremendous effort to follow Moore’s law, but it is currently approaching the boundaries of atomistic and quantum-mechanical physics. This has led to active research in non-CMOS technologies such as memristive devices, carbon nanotube field-effect transistors, and quantum computing. Several of these technologies have been realized in practical devices with promising gains in yield, integration density, runtime performance, and energy efficiency. Their eventual adoption is largely reliant on continued research into Electronic Design Automation (EDA) tools catering to these specific technologies. Indeed, some of these technologies present new challenges to the EDA research community, which are being addressed through a series of innovative tools and techniques. In this tutorial, we will cover two phases of the EDA flow, logic synthesis and technology mapping, for two emerging technologies, namely in-memory computing and quantum computing.
Biography:
Anupam Chattopadhyay received his B.E. degree from Jadavpur University, India, his M.Sc. from ALaRI, Switzerland, and his Ph.D. from RWTH Aachen in 2000, 2002, and 2008 respectively. From 2008 to 2009, he worked as a Member of Consulting Staff at CoWare R&D, Noida, India. From 2010 to 2014, he led the MPSoC Architectures Research Group at RWTH Aachen, Germany as a Junior Professor. In September 2014, Anupam was appointed as an Assistant Professor in SCSE, NTU, where he was promoted to Associate Professor with tenure in August 2019. Anupam is an Associate Editor of IEEE Embedded Systems Letters and series editor of the Springer Book Series on Computer Architecture and Design Methodologies. Anupam received the Borchers Plaque from RWTH Aachen, Germany for his outstanding doctoral dissertation in 2008, a nomination for the best IP award at the ACM/IEEE DATE Conference 2016, and nominations for the best paper award at the International Conference on VLSI Design in 2018 and 2020. He is a fellow of the Intercontinental Academia and a senior member of IEEE and ACM.
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Building Optimal Decision Trees
Location
Speaker:
Professor Peter J. Stuckey
Professor, Department of Data Science and Artificial Intelligence
Monash University
Abstract:
Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not necessarily be the best representation of the data in terms of accuracy and size. In recent years, this motivated the development of optimal classification tree algorithms that globally optimise the decision tree in contrast to heuristic methods that perform a sequence of locally optimal decisions.
In this talk I will explore the history of building decision trees, from greedy heuristic methods to modern optimal approaches.
In particular I will discuss a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances, providing several orders of magnitude improvements and notably contributing towards the practical realisation of optimal decision trees.
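To make the dynamic-programming idea concrete, here is a deliberately tiny sketch (ours, not the speaker's algorithm) that computes the minimum misclassifications achievable by any tree within a depth budget over binary features. The real solver adds bounds, caching on instance subsets, and specialised data structures to reach the reported scalability.

```python
from functools import lru_cache

def optimal_tree_errors(X, y, depth):
    """Minimum misclassifications of any decision tree with at most `depth`
    levels of splits on binary features, by exhaustive DP with memoisation."""
    n_features = len(X[0]) if X else 0

    @lru_cache(maxsize=None)
    def solve(rows, d):
        labels = [y[i] for i in rows]
        leaf = len(labels) - max(labels.count(0), labels.count(1))
        if d == 0 or leaf == 0:        # depth exhausted, or node already pure
            return leaf
        best = leaf                    # option 1: stop and predict majority
        for f in range(n_features):    # option 2: split on some feature f
            left = tuple(i for i in rows if X[i][f] == 0)
            right = tuple(i for i in rows if X[i][f] == 1)
            if left and right:
                best = min(best, solve(left, d - 1) + solve(right, d - 1))
        return best

    return solve(tuple(range(len(X))), depth)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]                       # XOR: no single split separates it
print(optimal_tree_errors(X, y, 1))    # 2 (one mixed leaf on each side)
print(optimal_tree_errors(X, y, 2))    # 0 (split on both features)
```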
Biography:
Professor Peter J. Stuckey is a Professor in the Department of Data Science and Artificial Intelligence in the Faculty of Information Technology at Monash University. Peter Stuckey is a pioneer in constraint programming and logic programming. His research interests include discrete optimization; programming languages, in particular declarative programming languages; constraint solving algorithms; path finding; bioinformatics; and constraint-based graphics, all relying on his expertise in symbolic and constraint reasoning. He enjoys problem solving in any area, with publications in, for example, databases, election science, system security, and timetabling, and works with companies such as Oracle and Rio Tinto on problems that interest them.
Peter Stuckey received his B.Sc. and Ph.D., both in Computer Science, from Monash University in 1985 and 1988 respectively. Since then he has worked at IBM T.J. Watson Research Labs, the University of Melbourne and Monash University. In 2009 he was recognized as an ACM Distinguished Scientist. In 2010 he was awarded the Google Australia Eureka Prize for Innovation in Computer Science for his work on lazy clause generation, and the University of Melbourne Woodward Medal for the most outstanding publication in Science and Technology across the university. In 2019 he was elected an AAAI Fellow and awarded the Association for Constraint Programming Award for Research Excellence. He has over 125 journal and 325 conference publications and 17,000 citations, with an h-index of 62.
Enquiries: Mr. Jeff Liu at Tel. 3943 0624
Z3++: Improving the SMT solver Z3
Location
Speaker:
Prof. CAI Shaowei
Institute of Software
Chinese Academy of Sciences
Abstract:
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order logic formula with respect to certain background theories. SMT solvers have become important formal verification engines, with applications in various domains. In this talk, I will introduce the basics of SMT solving and present our work on improving the well-known SMT solver Z3, leading to Z3++, which won 2 of the 6 gold medals at the SMT Competition 2022.
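As background for the talk, a minimal example of SMT solving over linear integer arithmetic using Z3's public Python API (the z3-solver package); this illustrates plain Z3, not the Z3++ improvements.

```python
# pip install z3-solver
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")
s = Solver()
# Decide satisfiability of a formula mixing linear and nonlinear constraints.
s.add(x > 0, y > 0, x + 2 * y == 7, x < y * y)
if s.check() == sat:
    print(s.model())    # a satisfying assignment, e.g. [y = 3, x = 1]
```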
Biography:
Shaowei Cai is a professor at the Institute of Software, Chinese Academy of Sciences. He obtained his PhD from Peking University in 2012, receiving a Doctoral Dissertation Award. His research focuses on constraint solving (particularly SAT, SMT, and integer programming), combinatorial optimization, and formal verification, as well as their applications in industry. He has won more than 10 gold medals in SAT and SMT Competitions, and the Best Paper Award at the SAT 2021 conference.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99411951727
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Attacks and Defenses in Logic Encryption
Location
Speaker:
Prof. Hai Zhou
Associate Professor, Department of Electrical and Computer Engineering
Northwestern University
Abstract:
With the increasing cost and complexity of semiconductor hardware designs, circuit IP protection has become an important and challenging problem in hardware security. Logic encryption is a promising technique that modifies a sensitive circuit into a locked one controlled by a secret key, such that only authorized users can access the correct functionality. Over its history of more than 20 years, many different attacks and defenses have been designed and proposed. In this talk, after a brief introduction to logic encryption, I will present important attacking and defending techniques in the field. In particular, the focus will be on a few key attacks and defenses created in the NuLogiCS group at Northwestern.
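A toy illustration of the locking idea (ours; real logic encryption operates on gate-level netlists with much larger key spaces): insert XOR key gates so that exactly one key value restores the original function, while every wrong key corrupts the output.

```python
from itertools import product

def original(a, b):
    return a & b                       # the IP to protect: a 2-input AND

def locked(a, b, k1, k2):
    """Toy locked netlist: XOR key gates on an input and on the output.
    Only key (k1=0, k2=1) restores the original function, because the
    output key gate is paired with an inverter."""
    w = (a ^ k1) & b                   # key gate on input a
    return 1 - (w ^ k2)                # inverter + key gate on the output

for k1, k2 in product((0, 1), repeat=2):
    ok = all(locked(a, b, k1, k2) == original(a, b)
             for a, b in product((0, 1), repeat=2))
    print(f"key=({k1},{k2}) correct={ok}")
```

The attacks discussed in the talk aim to recover such keys (for example with SAT queries to an unlocked chip), and the defenses aim to make that recovery computationally expensive.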
Biography:
Hai Zhou is the director of the NuLogiCS Research Group in Electrical and Computer Engineering at Northwestern University and a member of the Center for Ultra Scale Computing and Information Security (CUCIS). His research interest is in Logical Methods for Computer Systems (LogiCS), where logic is used to construct reactive computer systems (in the form of hardware, software, or protocols) and to verify their properties (e.g. correctness, security, and efficiency). In other words, he is interested in algorithms, formal methods, optimization, and their applications to security, machine learning, and economics.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Recent Advances in Backdoor Learning
Location
Speaker:
Dr. Baoyuan WU
Associate Professor, School of Data Science
The Chinese University of Hong Kong, Shenzhen
Abstract:
In this talk, Dr. Wu will review the development of backdoor learning and his latest works on backdoor attack and defense. The first is a backdoor attack with sample-specific triggers, which can bypass most existing defense methods, as these are mainly developed for defending against sample-agnostic triggers. Then, he will introduce two effective backdoor defense methods that preclude backdoor injection during the training process by exploring intrinsic properties of poisoned samples. Finally, he will introduce BackdoorBench, a comprehensive benchmark containing mainstream backdoor attack and defense methods, 8,000 pairs of attack-defense evaluations, and several interesting findings and analyses, which was recently released to the public.
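For context, a minimal numpy sketch of the classic sample-agnostic (BadNets-style) poisoning that the talk's sample-specific attacks are contrasted against: stamp one fixed trigger on a fraction of the training images and relabel them to the attacker's target. All parameters here are invented.

```python
import numpy as np

def poison(images, labels, target, rate=0.1, seed=0):
    """Sample-agnostic poisoning: stamp a fixed white patch in the corner
    of a fraction of images and relabel them to `target`. (The talk's
    sample-specific attacks instead vary the trigger per image.)"""
    rng = np.random.default_rng(seed)
    x, y = images.copy(), labels.copy()
    idx = rng.choice(len(x), int(rate * len(x)), replace=False)
    x[idx, -3:, -3:] = 1.0             # 3x3 trigger, bottom-right corner
    y[idx] = target
    return x, y, idx

images = np.random.rand(100, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=100)
x_p, y_p, idx = poison(images, labels, target=7)
print(f"poisoned {len(idx)} of {len(images)} samples")
```

A model trained on (x_p, y_p) behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is exactly what the defenses in the talk try to detect and preclude.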
Biography:
Dr. Baoyuan Wu is an Associate Professor in the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), and the director of the Secure Computing Lab of Big Data, Shenzhen Research Institute of Big Data (SRIBD). His research interests are AI security and privacy, machine learning, computer vision and optimization. He has published 50+ top-tier conference and journal papers, including in TPAMI, IJCV, NeurIPS, CVPR, ICCV, ECCV, ICLR, and AAAI. He is currently serving as an Associate Editor of Neurocomputing, and as an Area Chair of NeurIPS 2022, ICLR 2022/2023, and AAAI 2022.
Join Zoom Meeting:
https://cuhk.zoom.us/j/91408751707
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Out-of-Distribution Generalization: Progress and Challenges
Location
Speaker:
Dr. Li Zhenguo
Director, AI Theory Lab
Huawei Noah’s Ark Lab, Hong Kong
Abstract:
Noah’s Ark Lab is the AI research center for Huawei, with the mission of making significant contributions to both the company and society through innovation in artificial intelligence (AI), data mining and related fields. Our AI theory team focuses on fundamental research in machine learning, including cutting-edge theories and algorithms such as out-of-distribution (OoD) generalization and controllable generative modeling, and disruptive applications such as self-driving. In this talk, we will present some of our progress in out-of-distribution generalization, including OoD-learnable theories and model selection, understanding and quantification of the OoD properties of various benchmark datasets, and related applications. We will also highlight some key challenges for future studies.
Biography:
Zhenguo Li is currently the director of the AI Theory Lab in Huawei Noah’s Ark Lab, Hong Kong. Before joining Huawei Noah’s Ark lab, he was an associate research scientist in the department of electrical engineering, Columbia University, working with Prof. Shih-Fu Chang. He received BS and MS degrees in mathematics at Peking University, and PhD degree in machine learning at The Chinese University of Hong Kong, advised by Prof. Xiaoou Tang. His current research interests include machine learning and its applications.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Innovative Robotic Systems and its Applications to Agile Locomotion and Surgery
Location
Speaker:
Prof. Au, Kwok Wai Samuel
Professor, Department of Mechanical and Automation Engineering, CUHK
Professor, Department of Surgery, CUHK
Co-Director, Chow Yuk Ho Technology Centre for Innovative Medicine, CUHK
Director, Multiscale Medical Robotic Center, InnoHK
Abstract:
Over the past decades, a wide range of bio-inspired legged robots have been developed that can run, jump, and climb over a variety of challenging surfaces. However, in terms of maneuverability they still lag far behind animals. Animals can effectively use their mechanical bodies and external appendages (such as tails) to achieve spectacular maneuverability, energy-efficient locomotion, and robust stabilization against large perturbations, which are not easily attained by existing legged robots. In this talk, we will present our efforts on the development of innovative legged robots with greater mobility, efficiency, and robustness, comparable to their biological counterparts. We will discuss the fundamental challenges in legged robots and demonstrate the feasibility of developing such agile systems. We believe our solutions could potentially lead to more efficient legged robot designs and give legged robots animal-like mobility and robustness. Furthermore, we will also present our robotic development in the surgery domain and show how these technologies can be integrated with legged robots to create novel teleoperated legged mobile manipulators for service and construction applications.
Biography:
Dr. Kwok Wai Samuel Au is currently a Professor of the Department of Mechanical and Automation Engineering and the Department of Surgery (by courtesy) at CUHK, and the Founding Director of the Multiscale Medical Robotics Center, InnoHK. In September 2019, Dr. Au founded Cornerstone Robotics and has since served as the company’s president, aiming to create affordable surgical robotic solutions. Dr. Au received the B.Eng. and M.Phil. degrees in Mechanical and Automation Engineering from CUHK in 1997 and 1999, respectively, and completed his Ph.D. degree in Mechanical Engineering at MIT in 2007. During his PhD study, Prof. Hugh Herr, Dr. Au, and other colleagues from the MIT Biomechatronics group co-invented the MIT Powered Ankle-foot Prosthesis.
Before joining CUHK (2016), he was the manager of Systems Analysis of the New Product Development Department at Intuitive Surgical, Inc. At Intuitive Surgical, he co-invented and led the software and control algorithm development for the FDA-cleared da Vinci Si Single-Site surgical platform (2012), the Single-Site Wristed Needle Driver (2014), and the da Vinci Xi Single-Site surgical platform (2016). He was also a founding team member for the early development of Intuitive Surgical’s FDA-cleared robot-assisted catheter system, the da Vinci ION system, from 2008 to 2012.
Dr. Au has co-authored over 60 peer-reviewed journal and conference papers and holds 17 granted US/EP patents and 3 pending US patents. He has won numerous awards, including first prize in the American Society of Mechanical Engineers (ASME) Student Mechanism Design Competition in 2007, the Intuitive Surgical Problem Solving Award in 2010, and the Intuitive Surgical Inventor Award in 2011.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Game-Theoretic Interactions: Unifying Attribution, Robustness, Generalization, Visual Concepts, and Aesthetics
Location
Speaker:
Dr. Quanshi Zhang
Abstract:
The interpretability of deep neural networks has received increasing attention in recent years, and diverse methods of explainable AI (XAI) have been developed. Currently, most XAI methods are designed in an experimental manner without solid theoretical foundations, or simply fit explanation results to people’s cognition instead of objectively reflecting the true knowledge in the DNN. The lack of theoretical support has hampered the development of XAI. Therefore, in this talk, Dr. Quanshi Zhang will review several studies on explainable AI theory from his research group in recent years, which use the system of game-theoretic interactions to explain attribution, adversarial robustness, model generalization, the visual concepts learned by a DNN, and the aesthetic level of images.
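The abstract does not define the interaction system itself; a common formulation in this line of work treats input variables as players in a cooperative game and builds on the Shapley value. A sketch of that background, in our own notation:

```latex
% Players N are the input variables; v(S) is the model output (a scalar
% score) when only the variables in S are present. Shapley value of i:
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}
  \Bigl[ v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr]
% One common pairwise interaction index: the change in i's attribution
% when j is always present versus always absent.
I(i,j) = \phi_i^{\,(j\ \text{present})}(v) - \phi_i^{\,(j\ \text{absent})}(v)
```

Quantities of this form let attribution, robustness, and learned concepts be analysed within a single game-theoretic framework, which is the unification the talk title refers to.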
Biography:
Dr. Quanshi Zhang is an associate professor at Shanghai Jiao Tong University, China. He received his Ph.D. degree from the University of Tokyo in 2014. From 2014 to 2018, he was a post-doctoral researcher at the University of California, Los Angeles. His research interests are mainly machine learning and computer vision. In particular, he has made influential contributions in explainable AI (XAI) and received the ACM China Rising Star Award. He was a co-chair of the workshops towards XAI at ICML 2021, AAAI 2019, and CVPR 2019, and the speaker of the tutorials on XAI at IJCAI 2020 and IJCAI 2021.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98782922295
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Towards efficient NLP models
Location
Speaker:
Dr. Zichao Yang
Abstract:
In recent years, advances in deep learning for NLP research have been propelled mainly by massive computation and large amounts of data. Despite the progress, giant models still rely on in-domain data to work well on downstream tasks, and such data is hard and costly to obtain in practice. In this talk, I will describe my research efforts towards overcoming the challenge of learning with limited supervision by designing efficient NLP models. My research spans three directions towards this goal: designing structured neural network models that match the structure of NLP data to take full advantage of labeled data; effective unsupervised models that alleviate the dependency on labeled corpora; and data augmentation strategies that create large amounts of labeled data at almost no cost.
Biography:
Zichao Yang is currently a research scientist at Bytedance. Before that he obtained his Ph.D. from CMU, working with Eric Xing, Alex Smola and Taylor Berg-Kirkpatrick. His research interests lie in machine learning and deep learning with applications in NLP. He has published dozens of papers in top AI/ML conferences. He obtained his MPhil degree from CUHK and his bachelor's degree from Shanghai Jiao Tong University. Before joining Bytedance, he worked at Citadel Securities as a quantitative researcher, specializing in ML research for financial data. He also interned at Google DeepMind, Google Brain and Microsoft Research during his Ph.D.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94185450343
Enquiries: Ms. Karen Chan at Tel. 3943 8439
How will Deep Learning Change Internet Video Delivery?
Location
Speaker:
Prof. HAN Dongsu
Abstract:
Internet video has experienced tremendous growth over the last few decades and is still growing at a rapid pace. Internet video now accounts for 73% of Internet traffic and is expected to quadruple in the next five years. Augmented reality and virtual reality streaming, projected to increase twentyfold in five years, will also accelerate this trend.
In this talk, I will argue that advances in deep neural networks present new opportunities that can fundamentally change Internet video delivery. In particular, deep neural networks allow the content delivery network to easily capture the content of the video and thus enable content-aware video delivery. To demonstrate this, I will present NAS, a new Internet video delivery framework that integrates deep neural network based quality enhancements with adaptive streaming.
NAS incorporates a super-resolution deep neural network (DNN) and a deep reinforcement learning network to optimize the user quality of experience (QoE). It outperforms the current state of the art, dramatically improving visual quality. It improves the average QoE by 43.08% using the same bandwidth budget, or saves 17.13% of bandwidth while providing the same user QoE.
Finally, I will talk about our recent research progress in supporting live video and mobile devices in AI-assisted video delivery that demonstrate the possibility of new designs that tightly integrate deep learning into Internet video streaming.
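A back-of-the-envelope sketch (ours; every constant is invented) of how a super-resolution-aware bitrate decision could trade quality gain against rebuffering in a QoE objective of the usual quality-minus-penalty form:

```python
def qoe(quality, rebuffer_s, alpha=4.3):
    """QoE in the common quality-minus-rebuffering-penalty form."""
    return quality - alpha * rebuffer_s

def choose_rung(bandwidth_mbps, ladder, sr_gain, chunk_s=4, buffer_s=4):
    """Pick the bitrate-ladder rung maximising QoE when a client-side
    super-resolution DNN adds sr_gain[rung] quality after download."""
    best, best_q = 0, float("-inf")
    for rung, (mbps, quality) in enumerate(ladder):
        download_s = mbps * chunk_s / bandwidth_mbps
        rebuffer_s = max(0.0, download_s - buffer_s)
        q = qoe(quality + sr_gain[rung], rebuffer_s)
        if q > best_q:
            best, best_q = rung, q
    return best

ladder = [(1.0, 40), (2.5, 55), (5.0, 70)]   # (bitrate in Mbps, base quality)
sr_gain = [20, 12, 5]                        # SR helps low bitrates the most
print(choose_rung(bandwidth_mbps=3.0, ladder=ladder, sr_gain=sr_gain))  # 1
```

Because super-resolution recovers the most quality at low bitrates, the controller can afford to download cheaper chunks, which is the intuition behind the bandwidth savings the abstract reports.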
Biography:
Dongsu Han (Member, IEEE) is currently an Associate Professor with the School of Electrical Engineering at KAIST. He received the B.S. degree in computer science from KAIST in 2003 and the Ph.D. degree in computer science from Carnegie Mellon University in 2012. His research interests include networking, distributed systems, and network/system security. He has received Best Paper Award and Community Award from USENIX NSDI. More details about his research can be found at http://ina.kaist.ac.kr.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93072774638
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Towards Predictable and Efficient Datacenter Storage
Location
Speaker:
Dr. Huaicheng Li
Abstract:
The increasing complexity of storage software and hardware brings new challenges to achieving predictable performance and efficiency. On the one hand, emerging hardware breaks long-held system design principles and is held back by aged and inflexible system interfaces and usage models, requiring radical rethinking of the software stack to leverage new hardware capabilities for optimal performance. On the other hand, the computing landscape is becoming increasingly heterogeneous and complex, demanding explicit systems-level support to manage hardware-associated complexity and idiosyncrasy, which is unfortunately still largely missing.
In this talk, I will discuss my efforts to build low-latency and cost-efficient datacenter storage systems. By revisiting existing storage interface/abstraction designs and software/hardware responsibility divisions, I will present holistic storage stack designs for cloud datacenters that deliver orders-of-magnitude latency improvements and significantly improved cost-efficiency.
Biography:
Huaicheng is a postdoc at CMU in the Parallel Data Lab (PDL). He received his Ph.D. from the University of Chicago. His interests are mainly in operating systems and storage systems, with a focus on building high-performance and cost-efficient storage infrastructure for datacenters. His research has been recognized with two best paper nominations at FAST (2017 and 2018) and has also made real impact, with production deployments in datacenters, code integrated into Linux, and a storage research platform widely used by the research community.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95132173578
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Local vs Global Structures in Machine Learning Generalization
Location
Speaker:
Dr. Yaoqing Yang
Abstract:
Machine learning (ML) models are increasingly being deployed in safety-critical applications, making their generalization and reliability a problem of urgent societal importance. To date, our understanding of ML is still limited because (i) the narrow problem settings considered in studies and the (often) cherry-picked results lead to incomplete/conflicting conclusions on the failures of ML; (ii) focusing on low-dimensional intuitions results in a limited understanding of the global structure of ML problems. In this talk, I will present several recent results on “generalization metrics” to measure ML models. I will show that (i) generalization metrics such as the connectivity between local minima can quantify global structures of optimization loss landscapes, which can lead to more accurate predictions on test performance than existing metrics; (ii) carefully measuring and characterizing the different phases of loss landscape structures in ML can provide a more complete picture of generalization. Specifically, I show that different phases of learning require different ways to address failures in generalization. Furthermore, most conventional generalization metrics focus on the so-called generalization gap, which is indirect and of limited practical value. I will discuss novel metrics referred to as “shape metrics” that allow us to predict test accuracy directly instead of the generalization gap. I also show that one can use shape metrics to achieve improved compression and out-of-distribution robustness of ML models. I will discuss theoretical results and present large-scale empirical analyses for different quantity/quality of data, different model architectures, and different optimization hyperparameter settings to provide a comprehensive picture of generalization. I will also discuss practical applications of utilizing these generalization metrics to improve ML models’ training, efficiency, and robustness.
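One of the simplest "connectivity" style measurements mentioned above can be sketched on a toy loss with two minima: evaluate the loss along the straight path between them and report the barrier height. This sketch is ours, for illustration only; the talk's metrics are considerably richer and operate on real network loss landscapes.

```python
import numpy as np

def loss(theta):
    """Toy two-minimum loss surface standing in for a network's loss."""
    x, y = theta
    return (x**2 - 1)**2 + 0.5 * y**2

def linear_barrier(theta_a, theta_b, steps=101):
    """Height of the loss barrier on the straight path between two minima;
    low barriers indicate well-connected minima."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = [loss((1 - a) * theta_a + a * theta_b) for a in alphas]
    ends = max(loss(theta_a), loss(theta_b))
    return max(path) - ends

a = np.array([-1.0, 0.0])   # one minimum
b = np.array([+1.0, 0.0])   # another minimum
print(f"barrier between minima: {linear_barrier(a, b):.3f}")  # 1.0 here
```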
Biography:
Dr. Yaoqing Yang is a postdoctoral researcher at the RISE Lab at UC Berkeley. He received his PhD from Carnegie Mellon University and his B.S. from Tsinghua University, China. He is currently focusing on machine learning, and his main contributions to machine learning are towards improving reliability and generalization in the face of uncertainty, both in the data and in the compute platform. His PhD thesis laid the foundation for an exciting field of research, coded computing, where information-theoretic techniques are developed to address unreliability in computing platforms. His work has been a best paper finalist at ICDCS and has been published multiple times at NeurIPS and CVPR and in IEEE Transactions on Information Theory. He has worked as a research intern at Microsoft, MERL and Bell Labs, and two of his joint CVPR papers with MERL have each received more than 300 citations. He is also the recipient of the 2015 John and Claire Bertucci Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99128234597
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Scalable and Multiagent Deep Learning
Location
Speaker:
Mr. Guodong Zhang
Abstract:
Deep learning has achieved huge successes over the last few years, largely due to three important ideas: deep models with residual connections, parallelism, and gradient-based learning. However, it was shown that (1) deep ResNets behave like ensembles of shallow networks; (2) naively increasing the scale of data parallelism leads to diminishing returns; (3) gradient-based learning can converge to spurious fixed points in the multiagent setting.
In this talk, I will present some of my work on understanding and addressing these issues. First, I will give a general recipe for training very deep neural networks without shortcuts. Second, I will present a noisy quadratic model for neural network optimization, which qualitatively predicts scaling properties of a variety of optimizers and in particular suggests that second-order algorithms would benefit more from data parallelism. Third, I will describe a novel algorithm that finds desired equilibria and saves us from converging to spurious fixed points in multi-agent games. I will conclude with future directions towards building intelligent machines that can learn from experience efficiently and reason about their own decisions.
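For background on the second point, one simplified rendering of a noisy quadratic model (our notation; assuming gradient-noise variance shrinks with batch size $B$) makes the data-parallelism argument visible:

```latex
% Per-dimension quadratic loss with noisy gradient observations:
L(\theta) = \tfrac{1}{2}\sum_i h_i \theta_i^2,
\qquad
\hat g_i = h_i \theta_i + \epsilon_i,
\quad \epsilon_i \sim \mathcal{N}\!\left(0,\ \sigma_i^2 / B\right)
% SGD with learning rate \alpha decouples across coordinates:
\mathbb{E}\left[\theta_{i,t+1}^2\right]
 = (1-\alpha h_i)^2\,\mathbb{E}\left[\theta_{i,t}^2\right]
 + \alpha^2 \sigma_i^2 / B
% The geometric decay term governs convergence speed, while the additive
% term sets a noise floor that shrinks with batch size B; once the decay
% term dominates, growing B further yields diminishing returns.
```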
Biography:
Guodong Zhang is a PhD candidate in the machine learning group at the University of Toronto, advised by Roger Grosse. His research lies at the intersection of machine learning, optimization, and Bayesian statistics. In particular, his research focuses on understanding and improving algorithms for optimization, Bayesian inference, and multi-agent games in the context of deep learning. He has been recognized through the Apple PhD fellowship, the Borealis AI fellowship, and many other scholarships. In the past, he has also spent time at the Institute for Advanced Study in Princeton and industry research labs (including DeepMind, Google Brain, and Microsoft Research).
Join Zoom Meeting:
https://cuhk.zoom.us/j/95830950658
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Active Learning for Software Rejuvenation
Location
Speaker:
Ms. Jiasi Shen
Abstract:
Software now plays a central role in numerous aspects of human society. Current software development practices involve significant developer effort in all phases of the software life cycle, including the development of new software, detection and elimination of defects and security vulnerabilities in existing software, maintenance of legacy software, and integration of existing software into more contexts, with the quality of the resulting software still leaving much to be desired. The goal of my research is to improve software quality and reduce costs by automating tasks that currently require substantial manual engineering effort.
I present a novel approach for automatic software rejuvenation, which takes an existing program, learns its core functionality as a black box, builds a model that captures this functionality, and uses the model to generate a new program. The new program delivers the same core functionality but is potentially augmented or transformed to operate successfully in different environments. This research enables the rejuvenation and retargeting of existing software and provides a powerful way for developers to express program functionality that adapts flexibly to a variety of contexts. In this talk, I will show how we applied these techniques to two classes of software systems, specifically database-backed programs and stream-processing computations, and discuss the broader implications of these approaches.
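To convey the learn-as-a-black-box loop in miniature (ours, not the actual system, which targets database-backed programs and stream-processing computations): query the program on chosen inputs, fit a model from a deliberately tiny hypothesis class, validate the model with further queries, then emit source for a fresh program with the same behavior.

```python
def learn_affine(program, trials=8):
    """Active-learning sketch: treat `program` as a black box assumed to
    compute an affine function of one integer input, infer its parameters
    from chosen queries, and emit source code for a fresh program."""
    b = program(0)                    # query 1 pins the offset
    a = program(1) - b                # query 2 pins the slope
    for x in range(2, trials):        # extra queries validate the model
        assert program(x) == a * x + b, "model class too weak for this program"
    return f"def rejuvenated(x):\n    return {a} * x + {b}\n"

legacy = lambda x: 3 * x + 7          # stand-in for an opaque legacy program
source = learn_affine(legacy)
print(source)
exec(source)                          # the regenerated, retargetable program
assert rejuvenated(10) == legacy(10)
```

The regenerated source can then be augmented or retargeted to a new environment, which is the rejuvenation step the abstract describes.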
Biography:
Jiasi Shen is a Ph.D. candidate at MIT EECS advised by Professor Martin Rinard. She received her bachelor’s degree from Peking University. Her main research interests are in programming languages and software engineering. She was named an EECS Rising Star in 2020.
Join Zoom Meeting:
https://cuhk.zoom.us/j/91743099396
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Rethinking Efficiency and Security Challenges in Accelerated Machine Learning Services
Location
Speaker:
Prof. Wen Wujie
Abstract:
Thanks to recent model innovation and hardware advancement, machine learning (ML) has achieved extraordinary success in many fields, ranging from daily image classification and object detection to security-sensitive biometric authentication and autonomous vehicles. To facilitate fast and secure end-to-end machine learning services, extensive studies have been conducted on ML hardware acceleration and on data- or model-incurred adversarial attacks. Different from these existing efforts, in this talk we will present a new understanding of the efficiency and security challenges in accelerated ML services. The talk starts with the development of the first “machine vision” (not “human vision”) guided image compression framework tailored for fast cloud-based machine learning services with guaranteed accuracy, inspired by an insightful understanding of the difference between machine learning (or “machine vision”) and human vision in image perception. Then we will discuss “StegoNet”, a new breed of stegomalware that takes advantage of a machine learning service as a stealthy channel to conceal malicious intent (malware). Unlike existing attacks focusing only on misleading ML outcomes, “StegoNet” for the first time achieves far more diversified adversarial goals without compromising ML service quality. Our research prospects will also be given at the end of this talk, offering the audience an alternative way of thinking about developing efficient and secure machine learning services.
Biography:
Wujie Wen is an assistant professor in the Department of Electrical and Computer Engineering at Lehigh University. He received his Ph.D. from the University of Pittsburgh in 2015. He earned his B.S. and M.S. degrees in electronic engineering from Beijing Jiaotong University and Tsinghua University, Beijing, China, in 2006 and 2010, respectively. He was an assistant professor in the ECE department of Florida International University, Miami, FL, during 2015-2019. Before joining academia, he also worked at AMD and Broadcom in various engineering and intern positions. His research interests include reliable and secure deep learning, energy-efficient computing, electronic design automation and emerging memory systems design. His work has been published widely across venues in design automation, security, and machine learning/AI, including HPCA, DAC, ICCAD, DATE, ICPP, HOST, ACSAC, CVPR, ECCV and AAAI. He received best paper nominations from ASP-DAC 2018, ICCAD 2018, DATE 2016 and DAC 2014. Dr. Wen served as the General Chair of ISVLSI 2019 (Miami) and the Technical Program Chair of ISVLSI 2018 (Hong Kong), as well as on the program committees of many conferences such as DAC, ICCAD and DATE. He is an associate editor of Neurocomputing and IEEE Circuits and Systems (CAS) Magazine. His research projects are currently sponsored by the US National Science Foundation, the Air Force Research Laboratory and the Florida Center for Cybersecurity, among others.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98308617940
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Artificial Intelligence in Health: from Methodology Development to Biomedical Applications
Location
Speaker:
Prof. LI Yu
Abstract:
In this talk, I will give an overview of the research in our group. Essentially, we are developing new machine learning methods to solve problems in computational biology and health informatics, from sequence analysis, biomolecular structure prediction, and functional annotation to disease modeling, drug discovery, drug effect prediction, and combating antimicrobial resistance. We will show how to formulate problems in biology and health as machine learning problems, how to solve them using cutting-edge machine learning techniques, and how the results can benefit biology and healthcare in return.
Biography:
Yu Li is an Assistant Professor in the Department of Computer Science and Engineering at CUHK. His main research interest is to develop novel machine learning methods, mainly deep learning methods, for solving computational problems in healthcare and biology, understanding the principles behind the bio-world, and eventually improving people’s health and wellness. He obtained his PhD in computer science from KAUST in Saudi Arabia in October 2020, and his MS degree in computer science from KAUST in 2016. Before that, he received his bachelor’s degree in biosciences from the University of Science and Technology of China (USTC).
Join Zoom Meeting:
https://cuhk.zoom.us/j/98928672713
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Deploying AI at Scale in Hong Kong Hospital Authority (HA)
Location
Speaker:
Mr. Dennis Lee
Abstract:
With ever-increasing demand and an aging population, it is envisioned that the adoption of AI technology will help the Hospital Authority tackle various strategic service challenges and deliver improvements. HA set up its AI Strategy Framework two years ago and began establishing the processes and infrastructure to support AI development and delivery. The AI Lab and the AI delivery centre aim to foster AI innovation through internal and external collaboration on proof-of-concept development, and to build data and integration pipelines that validate AI solutions and integrate them into HA services at scale.
By leveraging three platforms to (1) improve awareness among HA staff, (2) match AI supply and demand, and (3) provide a data pipeline for timely prediction, HA can gradually scale AI innovations and solutions. Over the past year, many clinical and non-clinical proofs of concept have been developed and validated. The AI chest X-ray pilot project has been implemented in General Outpatient Clinics and the Emergency Department with the aim of reducing report turnaround time and providing decision support for abnormal chest X-ray imaging.
Biography:
Mr. Dennis Lee currently serves as the Senior System Manager for Artificial Intelligence Systems of the Hong Kong Hospital Authority. His current work involves developing the Artificial Intelligence and Big Data Platform to streamline data acquisition for HA data analysis via business intelligence, developing Hospital Command Center dashboards, and deploying Artificial Intelligence solutions. Mr. Lee has also led the corporate project management office and served as programme manager for several large-scale system implementations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95162965909
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Strengthening and Enriching Machine Learning for Cybersecurity
Location
Speaker:
Mr. Wenbo Guo
Abstract:
Nowadays, security researchers are increasingly using AI to automate and facilitate security analysis. Although it has made some meaningful progress, AI has not yet maximized its capability in security, due to two challenges. First, existing ML techniques have not met security professionals’ requirements for critical properties such as interpretability and adversary resistance. Second, security data imposes many new technical challenges that break the assumptions of existing ML models and thus jeopardize their efficacy.
In this talk, I will describe my research efforts to address the above challenges, with a primary focus on strengthening the interpretability of blackbox deep learning models and deep reinforcement learning policies. Regarding deep neural networks, I will describe an explanation method for deep learning-based security applications and demonstrate how security analysts can benefit from this method to establish trust in blackbox models and conduct efficient fine-tuning. As for DRL policies, I will introduce a novel approach to identifying the critical states/actions of a DRL agent and show how to use these explanations to scrutinize policy weaknesses, remediate policy errors, and even defend against adversarial attacks. Finally, I will conclude by highlighting my future plans for strengthening the trustworthiness of advanced ML techniques and maximizing their capability in cyber defenses.
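The abstract does not name the specific explanation technique, but a minimal occlusion-style attribution conveys the flavor of explaining a blackbox security model: rank input features by how much masking each one changes the model’s score.

    # Minimal occlusion-style attribution for a blackbox scorer (illustrative,
    # not the speaker's method). `model_score` maps a 1-D feature vector to a
    # scalar, e.g. a malware classifier's probability for the predicted class.
    import numpy as np

    def occlusion_attribution(model_score, x, baseline=0.0):
        ref = model_score(x)
        importance = np.zeros(len(x))
        for i in range(len(x)):
            masked = x.copy()
            masked[i] = baseline                 # occlude one feature
            importance[i] = ref - model_score(masked)
        return importance                        # large positive = influential

    # Usage: imp = occlusion_attribution(lambda v: clf.predict_proba([v])[0, 1], x)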
Biography:
Wenbo Guo is a Ph.D. candidate at Penn State, advised by Professor Xinyu Xing. His research interests are machine learning and cybersecurity, including strengthening the fundamental properties of machine learning models and designing customized machine learning models to handle security-unique challenges. He is a recipient of the IBM Ph.D. Fellowship (2020-2022), a Facebook/Baidu Ph.D. Fellowship finalist (2020), and an ACM CCS Outstanding Paper Award (2018). His research has been featured by multiple mainstream media outlets and has appeared in a diverse set of top-tier venues in security, machine learning, and data mining. Going beyond academic research, he also actively participates in many world-class cybersecurity competitions and won the 2018 DEFCON/GeekPwn AI Challenge finalist award.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95859338221
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Meta-programming: Optimising Designs for Multiple Hardware Platforms
Location
Speaker:
Prof. Wayne Luk
Abstract:
This talk describes recent research on meta-programming techniques for mapping high-level descriptions to multiple hardware platforms. The purpose is to enhance design productivity and maintainability. Our approach is based on decoupling functional concerns from optimisation concerns, allowing separate descriptions to be independently maintained by two types of experts: application experts focus on algorithmic behaviour, while platform experts focus on the mapping process. Our approach supports customisable optimisations, to rapidly capture a wide range of mapping strategies targeting multiple hardware platforms, and reusable strategies, allowing optimisations to be described once and applied to multiple applications. Examples will be provided to illustrate how the proposed approach can map a single high-level program onto multi-core processors and reconfigurable hardware platforms.
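A toy sketch of the decoupling (the names and representation are ours, not the speaker’s): the functional description is written once, and optimisation strategies are separate, reusable objects applied per platform.

    # Functional concern: what to compute (maintained by the application expert).
    def dot_spec(n):
        return {"op": "dot", "n": n, "unroll": 1, "target": "generic"}

    # Optimisation concerns: how to map it (maintained by platform experts).
    def unroll(factor):
        def strategy(desc):
            return {**desc, "unroll": factor}
        return strategy

    def retarget(platform):
        def strategy(desc):
            return {**desc, "target": platform}
        return strategy

    def apply_strategies(desc, strategies):
        for s in strategies:          # strategies compose, and can be reused
            desc = s(desc)            # across applications and platforms
        return desc

    cpu  = apply_strategies(dot_spec(1024), [unroll(4),  retarget("multicore")])
    fpga = apply_strategies(dot_spec(1024), [unroll(16), retarget("fpga")])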
Biography:
Wayne Luk is Professor of Computer Engineering with Imperial College London and the Director of the EPSRC Centre for doctoral training in High Performance Embedded and Distributed Systems. His research focuses on theory and practice of customizing hardware and software for specific application domains, such as computational finance, climate modelling, and genomic data analysis. He is a fellow of the Royal Academy of Engineering, IEEE, and BCS.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Network Stack in the Cloud
Location
Speaker:
Prof. XU Hong
Abstract:
As cloud computing becomes ubiquitous, the network stack in this virtualized environment is becoming a focal point of research with unique challenges and opportunities. In this talk, I will introduce our efforts in this space.
First, from an architectural perspective, the network stack remains part of the guest OS inside a VM in the cloud. I will argue that this legacy architecture is becoming a barrier to innovation and evolution. The tight coupling between the network stack and the guest OS causes deployment difficulties for tenants and management and efficiency problems for the cloud provider. I will present our vision of providing the network stack as a service as a way to address these issues. The idea is to decouple the network stack from the guest OS and offer it as an independent entity implemented by the cloud provider. I will discuss the design and evaluation of a concrete framework called NetKernel that enables this vision. In the second part, I will focus on container communication, a common scenario in the cloud. I will present a new system called PipeDevice that adopts a hardware-software co-design approach to enable low-overhead intra-host container communication using commodity FPGAs.
Biography:
Hong Xu is an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. His research area is computer networking and systems, particularly big data systems and data center networks. From 2013 to 2020 he was with City University of Hong Kong. He received his B.Eng. from The Chinese University of Hong Kong in 2007, and his M.A.Sc. and Ph.D. from the University of Toronto in 2009 and 2013, respectively. He was the recipient of an Early Career Scheme grant from the Hong Kong Research Grants Council in 2014, and has received three best paper awards, including the IEEE ICNP 2015 best paper award. He is a senior member of both IEEE and ACM.
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Domain-Specific Network Optimization for Distributed Deep Learning
Location
Speaker:
Prof. Kai Chen
Associate Professor
Department of Computer Science & Engineering, HKUST
Abstract:
Communication overhead poses a significant challenge to distributed DNN training. In this talk, I will overview existing efforts toward this challenge, study their advantages and shortcomings, and present a novel solution that exploits the domain-specific characteristics of deep learning to optimize the communication overhead of distributed DNN training in a fine-grained manner. Our solution consists of several key innovations beyond prior work, including bounded-loss tolerant transmission, gradient-aware flow scheduling, and order-free per-packet load balancing, delivering up to 84.3% training acceleration over the best existing solutions. Our proposal by no means provides an ultimate answer to this research problem; instead, we hope it can inspire more critical thinking on the intersection of networking and AI.
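To make one of the named ideas concrete, here is a hedged sketch of bounded-loss tolerant transmission: since SGD tolerates small gradient perturbations, a sender may simply drop the smallest-magnitude entries up to a loss budget instead of reliably delivering every byte. The budget and packet layout are our own illustration, not the talk’s protocol.

    import numpy as np

    def sparsify_for_transmission(grad, loss_budget=0.01):
        """Zero out the smallest-magnitude entries whose combined squared mass
        stays within `loss_budget` of the total, and ship only the rest."""
        order = np.argsort(np.abs(grad))                # smallest magnitudes first
        cum = np.cumsum(grad[order] ** 2)
        k = np.searchsorted(cum, loss_budget * cum[-1]) # entries we may drop
        keep = np.ones(len(grad), dtype=bool)
        keep[order[:k]] = False
        idx = np.nonzero(keep)[0]
        return idx, grad[idx]                           # sparse (index, value) packet

    g = np.random.randn(10_000)
    idx, vals = sparsify_for_transmission(g)
    print(f"transmitting {len(idx)} of {len(g)} entries")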
Biography:
Kai Chen is an Associate Professor at HKUST, the Director of the Intelligent Networking Systems Lab (iSING Lab) and the HKUST-WeChat joint Lab on Artificial Intelligence Technology (WHAT Lab), as well as the PC for an RGC Theme-based Project. He received his BS and MS from the University of Science and Technology of China in 2004 and 2007, respectively, and his PhD from Northwestern University in 2012. His research interests include data center networking, cloud computing, machine learning systems, and privacy-preserving computing. His work has been published in top venues such as SIGCOMM, NSDI, and TON, including a SIGCOMM best paper candidate. He is the Steering Committee Chair of APNet, serves on the program committees of SIGCOMM, NSDI, INFOCOM, etc., and on the editorial boards of IEEE/ACM Transactions on Networking, Big Data, and Cloud Computing.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98448863119?pwd=QUJVdzgvU1dnakJkM29ON21Eem9ZZz09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Integration of First-order Logic and Deep Learning
Location
Speaker:
Prof. Sinno Jialin Pan
Provost’s Chair Associate Professor
School of Computer Science and Engineering
Nanyang Technological University
Abstract:
How to develop a loop that integrates existing knowledge to facilitate deep learning inference and then refines that knowledge from the learning process is a crucial research problem. As first-order logic has proven to be a powerful tool for knowledge representation and reasoning, interest in integrating first-order logic into deep learning models has grown rapidly in recent years. In this talk, I will introduce our attempts to develop a unified integration framework for first-order logic and deep learning, with applications to various joint inference tasks in NLP.
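One common way to realize such an integration (a simplified sketch, not the talk’s full framework) is to relax a logic rule into a differentiable penalty on the network’s predicted probabilities, so the rule shapes training alongside the task loss:

    import torch

    # Rule: forall x, Smokes(x) -> Cancer(x). Under a product fuzzy-logic
    # relaxation, the implication's degree of violation is p(A) * (1 - p(B)).
    def implication_loss(p_smokes, p_cancer):
        return (p_smokes * (1.0 - p_cancer)).mean()

    # total_loss = task_loss + lambda_logic * implication_loss(p_s, p_c)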
Biography:
Dr. Sinno Jialin Pan is a Provost’s Chair Associate Professor with the School of Computer Science and Engineering at Nanyang Technological University (NTU) in Singapore. He received his Ph.D. degree in computer science from the Hong Kong University of Science and Technology (HKUST) in 2011. Prior to joining NTU, he was a scientist and Lab Head with the Data Analytics Department at Institute for Infocomm Research in Singapore. He joined NTU as a Nanyang Assistant Professor in 2014. He was named to the list of “AI 10 to Watch” by the IEEE Intelligent Systems magazine in 2018. He serves as an Associate Editor for IEEE TPAMI, AIJ, and ACM TIST. His research interests include transfer learning and its real-world applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97292230556?pwd=MDVrREkrWnFEMlF6aFRDQzJxQVlFUT09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
Smart Sensing and Perception in the AI Era
Location
Speaker:
Dr. Jinwei Gu
R&D Executive Director
SenseBrain (aka SenseTime USA)
Abstract:
Smart sensing and perception refer to intelligent and efficient ways of measuring, modeling, and understanding the physical world, which act as the eyes and ears of any AI-based system. Smart sensing and perception sit at the intersection of three related areas – computational imaging, representation learning, and scene understanding. Computational imaging refers to sensing the real world with optimally designed, task-specific, multi-modality sensors and optics that actively probe key visual information. Representation learning refers to learning the transformation from sensors’ raw output to some manifold embedding or feature space for further processing. Scene understanding includes both the low-level capture of a 3D scene’s physical properties and high-level semantic perception and understanding of the scene. Advances in this area will not only benefit computer vision tasks but also result in better hardware, such as AI image sensors, AI ISP (Image Signal Processing) chips, and AI camera systems. In this talk, I will present several recent research results, including high-quality image restoration and accurate depth estimation from time-of-flight sensors or monocular videos, as well as some of the latest computational photography products in smartphones, including under-display cameras, AI image sensors, and AI ISP chips. I will also lay out several open challenges and future research directions in this area.
Biography:
Jinwei Gu is the R&D Executive Director of SenseBrain (aka SenseTime USA). His current research focuses on low-level computer vision, computational photography, computational imaging, smart visual sensing and perception, and appearance modeling. He obtained his Ph.D. degree in 2010 from Columbia University, and his B.S. and M.S. from Tsinghua University in 2002 and 2005, respectively. Before joining SenseTime, he was a senior research scientist at NVIDIA Research from 2015 to 2018. Prior to that, he was an assistant professor at Rochester Institute of Technology from 2010 to 2013, and a senior researcher in the media lab of Futurewei Technologies from 2013 to 2015. He serves as an associate editor for IEEE Transactions on Computational Imaging (TCI) and IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), an area chair for ICCV 2019, ECCV 2020, and CVPR 2021, and industry chair for ICCP 2020. He has been an IEEE senior member since 2018. His research work has been successfully transferred to many products, such as the NVIDIA CoPilot SDK and DriveIX SDK, as well as super resolution, super night, portrait restoration, and RGBW solutions widely used in many flagship mobile phones.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97322964334?pwd=cGRJdUx1bkxFaENJKzVwcHdQQm5sZz09
Enquiries: Ms. Karen Chan at Tel. 3943 8439
The Role of AI for Next-generation Robotic Surgery
Location
Speaker:
Prof. DOU Qi
Abstract:
With advancements in information technologies and medicine, the operating room has undergone tremendous transformations, evolving into a highly complicated environment. These achievements further innovate the surgical procedure and hold great promise for enhancing patient safety. Within the new generation of operating theatre, computer-assisted systems play an important role in providing surgeons with reliable contextual support. In this talk, I will present a series of deep learning methods for interdisciplinary research on artificial intelligence for surgical robotic perception, covering automated surgical workflow analysis, instrument presence detection, surgical tool segmentation, and surgical scene perception. The proposed methods cover a wide range of deep learning topics, including semi-supervised learning, relational graph learning, learning-based stereo depth estimation, and reinforcement learning. The challenges, up-to-date progress, and promising future directions of AI-powered context-aware operating theatres will also be discussed.
Biography:
Prof. DOU Qi is an Assistant Professor with the Department of Computer Science & Engineering, CUHK. Her research interests lie in innovating collaborative intelligent systems that support delivery of high-quality medical diagnosis, intervention and education for next-generation healthcare. Her team pioneers synergistic advancements across artificial intelligence, medical image analysis, surgical data science, and medical robotics, with an impact to support demanding clinical workflows such as robotic minimally invasive surgery.
Enquiries: Miss Karen Chan at Tel. 3943 8439
The Coming of Age of Microfluidic Biochips: Connecting Biochemistry to Electronic Design Automation
Location
Speaker:
Prof. HO Tsung Yi
Abstract:
Advances in microfluidic technologies have led to the emergence of biochip devices for automating laboratory procedures in biochemistry and molecular biology. The corresponding systems are revolutionizing a diverse range of applications, e.g., point-of-care clinical diagnostics, drug discovery, and DNA sequencing, with an increasing market. However, continued growth (and the larger revenues resulting from technology adoption by pharmaceutical and healthcare companies) depends on advances in chip integration and design-automation tools. Thus, there is a need to deliver the same level of design automation support to the biochip designer that the semiconductor industry now takes for granted; in particular, efficient design automation algorithms are needed for implementing biochemistry protocols, to ensure that biochips are as versatile as the macro-labs they are intended to replace. This talk will first describe technology platforms for accomplishing “biochemistry on a chip”, introducing the audience to both droplet-based “digital” microfluidics based on electrowetting actuation and flow-based “continuous” microfluidics based on microvalve technology. Next, the presenter will describe system-level synthesis, which includes operation scheduling and resource binding algorithms, and physical-level synthesis, which includes placement and routing optimizations. Moreover, control synthesis and sensor feedback-based cyber-physical adaptation will be presented. In this way, the audience will see how a “biochip compiler” can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor’s clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Finally, the present status and future challenges of the open-source microfluidic ecosystem will be covered.
Biography:
Tsung-Yi Ho received his Ph.D. in Electrical Engineering from National Taiwan University in 2005. His research interests include several areas of computing and emerging technologies, especially design automation of microfluidic biochips. He has been the recipient of the Invitational Fellowship of the Japan Society for the Promotion of Science (JSPS), the Humboldt Research Fellowship of the Alexander von Humboldt Foundation, the Hans Fischer Fellowship of the Institute for Advanced Study of the Technische Universität München, and the International Visiting Research Scholarship of the Peter Wall Institute for Advanced Studies of the University of British Columbia. He received Best Paper Awards at the VLSI Test Symposium (VTS) in 2013 and in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2015. He served as a Distinguished Visitor of the IEEE Computer Society for 2013-2015, a Distinguished Lecturer of the IEEE Circuits and Systems Society for 2016-2017, the Chair of the IEEE Computer Society Tainan Chapter for 2013-2015, and the Chair of the ACM SIGDA Taiwan Chapter for 2014-2015. Currently, he serves as the Program Director of both the EDA and AI Research Programs of the Ministry of Science and Technology in Taiwan, VP of Technical Activities of IEEE CEDA, an ACM Distinguished Speaker, and Associate Editor of the ACM Journal on Emerging Technologies in Computing Systems, ACM Transactions on Design Automation of Electronic Systems, ACM Transactions on Embedded Computing Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and IEEE Transactions on Very Large Scale Integration Systems, as well as Guest Editor of IEEE Design & Test of Computers, and on the technical program committees of major conferences, including DAC, ICCAD, DATE, ASP-DAC, ISPD, ICCD, etc. He is a Distinguished Member of ACM.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Towards Understanding Generalization in Generative Adversarial Networks
Location
Speaker:
Prof. FARNIA Farzan
Abstract:
Generative Adversarial Networks (GANs) represent a game between two machine players designed to learn the distribution of observed data.
Since their introduction in 2014, GANs have achieved state-of-the-art performance on a wide array of machine learning tasks. However, their success has been observed to heavily depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed of the underlying optimization algorithm. In this seminar, we focus on the generalization properties of GANs and present theoretical and numerical evidence that the minimax optimization algorithm also plays a key role in the successful generalization of the learned GAN model from training samples to unseen data. To this end, we analyze the generalization behavior of standard gradient-based minimax optimization algorithms through the lens of algorithmic stability. We leverage the algorithmic stability framework to compare the generalization performance of standard simultaneous-update and non-simultaneous-update gradient-based algorithms. Our theoretical analysis suggests the superiority of simultaneous-update algorithms in achieving a smaller generalization error for the trained GAN model.
Finally, we present numerical results demonstrating the role of simultaneous-update minimax optimization algorithms in the proper generalization of trained GAN models.
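For readers unfamiliar with the two update schemes, the toy bilinear game min_x max_y f(x, y) = xy shows the mechanical difference (this is our illustration of the update rules only, not the talk’s stability analysis):

    def grad_x(x, y): return y            # df/dx for f(x, y) = x * y
    def grad_y(x, y): return x            # df/dy

    def simultaneous_step(x, y, lr=0.1):  # both players move from the same iterate
        return x - lr * grad_x(x, y), y + lr * grad_y(x, y)

    def alternating_step(x, y, lr=0.1):   # the second player reacts to the first
        x_new = x - lr * grad_x(x, y)
        return x_new, y + lr * grad_y(x_new, y)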
Biography:
Farzan Farnia is an Assistant Professor of Computer Science and Engineering at The Chinese University of Hong Kong. Prior to joining CUHK, he was a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, from 2019 to 2021. He received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests span statistical learning theory, information theory, and convex optimization. He has been the recipient of the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Complexity of Testing and Learning of Markov Chains
Location
Speaker:
Prof. CHAN Siu On
Assistant Professor
Department of Computer Science and Engineering, CUHK
Abstract:
This talk will summarize my work in two unrelated areas of complexity theory: distributional learning and extended formulations.
(1) Distributional learning: Much of the work on distributional learning assumes the input samples are drawn identically and independently. A few recent works relax this assumption and instead assume the samples are drawn as a trajectory from a Markov chain. Previous works by Wolfer and Kontorovich suggested that learning and identity testing problems on ergodic chains can be reduced to the corresponding problems with i.i.d. samples. We show how to further reduce essentially every learning and identity testing problem on the (arguably most general) class of irreducible chains, by introducing the concept of k-cover time, a natural generalization of the usual notion of cover time.
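In standard notation, for a chain on state space \mathcal{S} with \tau_s^{(k)} denoting the time of the k-th visit to state s, the cover time and (one natural reading of) the k-cover time are:

    \tau_{\mathrm{cov}} \;=\; \max_{x \in \mathcal{S}} \ \mathbb{E}_x\big[\max_{s \in \mathcal{S}} \tau_s^{(1)}\big],
    \qquad
    \tau_{\mathrm{cov}}(k) \;=\; \max_{x \in \mathcal{S}} \ \mathbb{E}_x\big[\max_{s \in \mathcal{S}} \tau_s^{(k)}\big],

so that \tau_{\mathrm{cov}}(1) recovers the usual cover time.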
The tight analysis of the sample complexity for reversible chains relies on previous work by Ding, Lee, and Peres. Their analysis uses the so-called generalized second Ray-Knight isomorphism theorem, which connects the local time of a continuous-time reversible Markov chain to the Gaussian free field. It is natural to ask whether a similar analysis can be generalized to general chains. We will discuss our ongoing work towards this goal.
(2) Extended formulations: Extended formulation lower bounds aim to show that linear programs (or other convex programs) must be large to solve certain problems, such as constraint satisfaction. A natural open problem is whether refuting unsatisfiable 3-SAT instances requires linear programs of exponential size, and whether such a lower bound holds for every “downstream” NP-hard problem. I will discuss our ongoing work towards such extended formulation lower bounds using techniques from resolution lower bounds.
Biography:
Siu On CHAN graduated from the Chinese University of Hong Kong. He got his MSc at the University of Toronto and PhD at UC Berkeley. He was a postdoc at Microsoft Research New England. He is now an Assistant Professor at the Chinese University of Hong Kong. He is interested in the complexity of constraint satisfaction and learning algorithms. He won a Best Paper Award and a Best Student Paper Award at STOC 2013.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Efficient Computing of Deep Neural Networks
Location
Speaker:
Prof. YU Bei
Abstract:
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications with state-of-the-art accuracy, but they come at the cost of high computational complexity. Therefore, techniques that enable efficient computing of deep neural networks, improving key metrics such as energy efficiency, throughput, and latency without sacrificing accuracy, are critical. This talk provides a structured treatment of the key principles and techniques for enabling efficient computing of DNNs, spanning implementation-level, model-level, and compilation-level techniques.
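As a flavor of the model-level techniques, here is a minimal post-training quantization sketch (symmetric, per-tensor int8 are illustrative choices, not the talk’s specific recipe), trading a small accuracy loss for 4x smaller weights and cheaper integer arithmetic:

    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0          # symmetric per-tensor scale
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, s = quantize_int8(w)
    print("mean abs error:", np.abs(w - dequantize(q, s)).mean())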
Biography:
Bei Yu is currently an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his PhD in Electrical and Computer Engineering from the University of Texas at Austin in 2014. His current research interests include machine learning with applications in VLSI CAD and computer vision. He has served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), on the program committees of DAC, ICCAD, DATE, ASPDAC, and ISPD, and on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES) and Integration, the VLSI Journal. He is Editor of the IEEE TCCPS Newsletter.
Prof. Yu has received seven Best Paper Awards, from ASPDAC 2021 & 2012, ICTAI 2019, Integration, the VLSI Journal in 2018, ISPD 2017, the SPIE Advanced Lithography Conference 2016, and ICCAD 2013; six other Best Paper Award nominations (DATE 2021, ICCAD 2020, ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011); and six ICCAD/ISPD contest awards.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Some Recent Results in Database Theory by Yufei Tao and His Team
Location
Speaker:
Prof. TAO Yufei
Abstract:
This talk will present some results obtained by Yufei Tao and his students in recent years. These results span several active fields in database research nowadays – machine learning, crowdsourcing, massively parallel computation, and graph processing – and provide definitive answers to a number of important problems by establishing matching upper and lower bounds. The talk will be theoretical in nature but will assume only undergraduate-level knowledge of computer science, and is therefore suitable for a general audience.
Biography:
Yufei Tao is a Professor at the Department of Computer Science and Engineering, the Chinese University of Hong Kong. He received two SIGMOD Best Paper Awards (2013 and 2015) and a PODS Best Paper Award (2018). He served as a PC co-chair of ICDE 2014 and the PC chair of PODS 2020, and gave an invited keynote speech at ICDT 2016. He was elected an ACM Fellow in 2020 for his contributions to algorithms on large-scale data. Yufei’s research aims to develop “small-and-sweet” algorithms: (i) small: easy to implement for deployment in practice, and (ii) sweet: having non-trivial theoretical guarantees. He particularly enjoys working on problems that arise at the intersection of databases, machine learning, and theoretical computer science.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Generation, Reasoning and Rewriting in Natural Dialogue System
Location
Speaker:
Prof. WANG Liwei
Abstract:
Natural dialogue systems, including the recent eye-catching multimodal (vision + language) dialogue systems, need a better understanding of utterances to generate reliable and meaningful language. In this talk, I will introduce several research works that my LaVi Lab (multimodal Language and Vision Lab) has done together with our collaborators in this area. In particular, I will discuss the essential components of natural dialogue systems, including controllable language generation, language reasoning, and utterance rewriting, published at recent top NLP and AI conferences.
Biography:
Prof. WANG Liwei received his Ph.D. from the Computer Science Department at the University of Illinois at Urbana-Champaign (UIUC) in 2018. After that, he joined the NLP group of Tencent AI Lab in Bellevue, US, as a senior researcher, leading multiple projects in multimodal (language and vision) learning and NLP. In December 2020, Dr. Wang joined the Computer Science and Engineering Department at CUHK as an assistant professor. He also serves on the editorial board of IJCV and on the program committees of top NLP conferences. Recently, his team won the 2020 BAAI-JD Multimodal Dialogue Challenge and the ReferIt3D CVPR 2021 challenge. The research goal of Prof. Wang’s LaVi Lab is to build multimodal interactive AI systems that can not only understand and recreate the visual world but also communicate like human beings using natural language.
Enquiries: Miss Karen Chan at Tel. 3943 8439
Towards SmartNICs in Data Center Systems
Location
Speaker:
Dr. Bojie Li
Senior Engineer
Huawei 2012 Labs
Abstract:
In modern data centers, the performance of general-purpose processors lags behind that of network, storage, and customized computing hardware. Yet network and storage infrastructure relies mainly on software processing on general-purpose processors, which becomes a bottleneck. We leverage SmartNICs to accelerate network functions, data structures, and communication primitives in cloud data centers, thus achieving full-stack acceleration of network and storage. In this talk, we will also propose a new SmartNIC architecture that is tightly integrated with the host CPU, enabling a large, disaggregated memory with the SmartNICs serving as a programmable data plane.
Biography:
Dr. Bojie Li is a Senior Engineer with Huawei 2012 Labs. In 2019, he obtained Ph.D. in Computer Science from University of Science and Technology of China (USTC) and Microsoft Research Asia (MSRA). His research interest is data center network and systems. He has published papers in SIGCOMM, SOSP, NSDI, ATC, and PLDI. He has received the ACM China Doctoral Dissertation Award and Microsoft Research Asia PhD Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92830629838
Enquiries: Miss Karen Chan at Tel. 3943 8439
Structurally Stable Assemblies: Theory, Algorithms, and Applications
Location
Speaker:
Dr. SONG Peng
Assistant Professor
Pillar of Information Systems Technology and Design
Singapore University of Technology and Design
Abstract:
An assembly of rigid parts is structurally stable if it can preserve its form under external forces without collapse. Structural stability is a necessary condition for using assemblies in practice, such as in furniture and architecture. However, designing structurally stable assemblies remains a challenging task for both novice and expert users, since a slight variation in the geometry of an individual part may affect the whole assembly’s structural stability. In this talk, I will introduce our attempts over the past years to advance the theory and algorithms for computational design and fabrication of structurally stable assemblies. The key technique is to analyze structural stability in the kinematic space by utilizing static-kinematic duality, and to ensure structural stability via geometry optimization using a two-stage approach (i.e., kinematic design and geometry realization). Our technique can handle assemblies that are structurally stable to different degrees, namely stable under a single external force, a set of external forces, or arbitrary external forces. The usefulness of these structurally stable assemblies has been demonstrated in applications such as personalized puzzles, interlocking furniture, and free-form discrete architecture.
Biography:
Peng Song is an Assistant Professor at the Pillar of Information Systems Technology and Design, Singapore University of Technology and Design (SUTD), where he directs the Computer Graphics Laboratory (CGL). Prior to joining SUTD in 2019, he was a research scientist at EPFL, Switzerland. He received his PhD from Nanyang Technological University, Singapore in 2013, his master and bachelor degrees both from Harbin Institute of Technology, China in 2010 and 2007 respectively. His research is in the area of computer graphics, with a focus on computational fabrication and geometry processing. He serves as a co-organizer of a weekly web series on Computational Fabrication, and a program committee member of several leading conferences in computer graphics including SIGGRAPH Asia and Pacific Graphics.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98242753532
Enquiries: Miss Karen Chan at Tel. 3943 8439
Towards Trustworthy Full-Stack AI
Location
Speaker:
Dr. Fang Chengfang
Abstract:
Due to the lack of security considerations in the early development of AI algorithms, most AI systems are not robust against adversarial manipulation. In critical applications such as healthcare, autonomous driving, and malware detection, the security risks can be devastating, and have thus attracted numerous research efforts. In this seminar, I will introduce some AI security and privacy research topics from an industry point of view, including risk analysis throughout the AI lifecycle and the pipeline of defenses, in the hope of giving the audience a more complete picture on top of academic research.
Biography:
Chengfang Fang obtained his Ph.D. degree from the National University of Singapore before joining Huawei in 2013. He has been working on security and privacy protection for more than 10 years, in several areas including machine learning, the Internet of Things, mobile devices, and biometrics. He has published over 20 research papers and obtained 15 patents in this domain. He is currently a principal researcher at the Trustworthiness Technology Lab in Huawei Singapore Research Center.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92800336791
Enquiries: Miss Karen Chan at Tel. 3943 8439
High Performance Fluid Simulation and its Applications
Location
Speaker:
Dr. Xiaopei Liu
Assistant Professor
School of Information Science and Technology
ShanghaiTech University
Abstract:
Efficient and accurate high-resolution fluid simulation in complex environments is desirable in many practical applications, e.g., the aerodynamic shape design of airplanes and cars, as well as the production of special effects in movies and games. However, this has long been a challenging problem and is not yet well solved. In this talk, I will introduce our attempts over the past years to advance computational techniques for high-performance fluid simulation by developing statistical kinetic models with variational principles, in a single-phase flow scenario with strong turbulence and complex geometric objects. I will also introduce how the general idea can be extended to multiphase flow simulations, allowing both large density ratios and high Reynolds numbers. To improve computational efficiency, I will further introduce our GPU optimization and machine learning techniques, designed as both low-level and high-level accelerations. Rendering and visualization of fluid flow data will also be briefly covered. Finally, validations in real scenarios and demonstrations in different applications will be shown, including aerodynamic simulations over aircraft, cars, and architecture for shape design, blood flow simulations inside coronary arteries for clinical diagnosis, and simulations of visual flow phenomena for movies and games, together with a new application that learns the control policy of a fish-like underwater robot with our fast simulator.
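The abstract does not spell out the kinetic model, but as a hint of the family of methods, a minimal lattice-Boltzmann (BGK) step on a periodic 2-D grid works as follows: collide toward a local equilibrium, then stream along nine lattice directions (real solvers add boundary handling, turbulence treatment, and much more):

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        usq = 1.5 * (ux**2 + uy**2)
        return rho * w[:, None, None] * (1 + cu + 0.5 * cu**2 - usq)

    def lbm_step(f, tau=0.6):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau       # BGK collision
        for i in range(9):                              # streaming (periodic)
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        return f

    f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
    for _ in range(10):
        f = lbm_step(f)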
Biography:
Dr. Xiaopei Liu is an assistant professor at the School of Information Science and Technology, ShanghaiTech University, affiliated with the Visual and Data Intelligence (VDI) center. He obtained his PhD in computer science and engineering from The Chinese University of Hong Kong (CUHK), and worked as a postdoctoral research fellow at Nanyang Technological University (NTU) in Singapore, where he started multi-disciplinary research on fluid simulation and visualization, covering both classical and quantum fluids. Most of his publications are in top journals and conferences across multiple disciplines, such as ACM TOG, ACM SIGGRAPH/SIGGRAPH Asia, IEEE TVCG, APS PRD, and AIP POF. Dr. Liu is now working on high-performance fluid simulation in complex environments, with applications to visual effects, computational design & fabrication, medical diagnosis, robot learning, as well as fundamental science. He is also conducting research on simulation-based UAV design optimization and autonomous navigation.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93649176456
Enquiries: Miss Karen Chan at Tel. 3943 8439
Dynamic Voltage Scaling: from Low Power to Security
Location
Speaker:
Dr. Qu Gang
Abstract:
Dynamic voltage scaling (DVS) is one of the most effective and widely used techniques for low-power design. It adjusts the system’s operating voltage and clock frequency, based on the real-time application’s computation and deadline information, to reduce power and energy consumption. In this talk, I will share our research results on DVS and the lessons I have learned in three different periods of my research career. First, in the late 1990s, as a graduate student, we formulated the problem of DVS for energy minimization and derived a series of optimal solutions under different system settings to guide the practice of DVS-enabled system design. Then, in 2000, I became an assistant professor, and we studied how to apply DVS to scenarios where the traditional execution-time-for-energy tradeoff does not exist. Finally, in the past five years, we developed DVS-based attacks that break the trusted execution environment in modern computing platforms. I will also show our work on enhancing system security with DVS, through examples of device authentication and countermeasures to machine learning model inversion attacks. It is my hope that this talk can shed light on how to find a research topic and make your own contributions.
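The energy argument behind DVS follows from the standard first-order CMOS model (assuming dynamic power dominates, with activity factor \alpha and switched capacitance C fixed):

    P_{\mathrm{dyn}} \;\approx\; \alpha\, C\, V_{dd}^{2}\, f,
    \qquad
    E \;=\; P_{\mathrm{dyn}}\, t \;\propto\; C\, V_{dd}^{2}\, N_{\mathrm{cycles}},

so lowering V_{dd} (and, with it, the sustainable clock frequency f) cuts energy per cycle quadratically at the cost of a longer execution time; this is precisely the computation-versus-deadline tradeoff that DVS scheduling exploits.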
Biography:
Gang Qu received his B.S. in mathematics from the University of Science and Technology of China (USTC) and Ph.D. in computer science from the University of California, Los Angeles (UCLA). He is currently a professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park, where he leads the Maryland Embedded Systems and Hardware Security Lab (MeshSec) and the Wireless Sensor Laboratory. His research activities are on trusted integrated circuit design, hardware security, energy efficient system design and wireless sensor networks. He has focused recently on applications in the Internet of Things, cyber-physical systems, and machine learning. He has published more than 250 conference papers and journal articles on these topics with several best paper awards. Dr. Qu is an enthusiastic teacher. He has taught and co-taught various security courses, including a popular MOOC on Hardware Security through Coursera. Dr. Qu has served 17 times as the general or program chair/co-chair for international conferences and workshops. He is currently on the editorial board of IEEE TCAD, TETC, ACM TODAES, JCST, Integration, and HSS. Dr. Qu is a fellow of IEEE.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96878058667
Enquiries: Miss Karen Chan at Tel. 3943 8439
Prioritizing Computation and Analyst Resources in Large-scale Data Analytics
Location
Speaker:
Ms. Kexin RONG
PhD student, Department of Computer Science
Stanford University
Abstract:
Data volumes are growing exponentially, fueled by an increased number of automated processes such as sensors and devices. Meanwhile, the computational power available for processing this data – as well as analysts’ ability to interpret it – remain limited. As a result, database systems must evolve to address these new bottlenecks in analytics. In my work, I ask: how can we adapt classic ideas from database query processing to modern compute- and analyst-limited data analytics?
In this talk, I will discuss the potential for this kind of systems development through the lens of several practical systems I have developed. By drawing insights from database query optimization, such as pushing workload- and domain-specific filtering, aggregation, and sampling into core analytics workflows, we can dramatically improve the efficiency of analytics at scale. I will illustrate these ideas by focusing on two systems — one designed to optimize visualizations for streaming infrastructure and application telemetry and one designed for high-volume seismic waveform analysis — both of which have been field-tested at scale. I will also discuss lessons from production deployments at companies including Datadog, Microsoft, Google and Facebook.
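A toy version of the push-down idea (the predicate, the 10% rate, and the helper names are made up for illustration): filter and sample records before the expensive per-record analysis, then scale the surviving estimates by the inverse sampling rate.

    import random

    def expensive_analysis(record):             # stand-in for heavy per-record work
        return sum(ord(ch) for ch in record["payload"])

    def pushed_down_pipeline(records, rate=0.1):
        out = []
        for r in records:
            if r["service"] != "db":            # workload-specific filter first
                continue
            if random.random() >= rate:         # then sample before heavy work
                continue
            out.append(expensive_analysis(r))
        return out    # ~10x less computation; counts/sums must be scaled by 1/rate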
Biography:
Kexin Rong is a Ph.D. student in Computer Science at Stanford University, co-advised by Professor Peter Bailis and Professor Philip Levis. She designs and builds systems to enable data analytics at scale, supporting applications including scientific analysis, infrastructure monitoring, and analytical queries on big data clusters. Prior to Stanford, she received her bachelor’s degree in Computer Science from California Institute of Technology.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97794511231?pwd=Qjg2RlArcUNrbHBwUmxNSW4yTVIxZz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Toward a Deeper Understanding of Generative Adversarial Networks
Location
Speaker:
Dr. Farzan FARNIA
Postdoctoral Research Associate
Laboratory for Information and Decision Systems, MIT
Abstract:
While modern adversarial learning frameworks achieve state-of-the-art performance on benchmark image, sound, and text datasets, we still lack a solid understanding of their robustness, generalization, and convergence behavior. In this talk, we aim to bridge this gap between theory and practice using a principled analysis of these frameworks through the lens of optimal transport and information theory. We specifically focus on the Generative Adversarial Network (GAN) framework which represents a game between two machine players for learning the distribution of data. In the first half of the talk, we study equilibrium in GAN games for which we show the classical Nash equilibrium may not exist. We then introduce a new equilibrium notion for GAN problems, called proximal equilibrium, through which we develop a GAN training algorithm with improved stability. We provide several numerical results on large-scale datasets supporting our proposed training method for GANs. In the second half of the talk, we attempt to understand why GANs often fail in learning multi-modal distributions. We focus our study on the benchmark Gaussian mixture models and demonstrate the failures of standard GAN architectures under this simple class of multi-modal distributions. Leveraging optimal transport theory, we design a novel architecture for the GAN players which is tailored to mixtures of Gaussians. We theoretically and numerically show the significant gain achieved by our designed GAN architecture in learning multi-modal distributions. We conclude the talk by discussing some open research challenges in adversarial learning.
Biography:
Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Professor Asu Ozdaglar and Professor Ali Jadbabaie. Prior to joining MIT, Farzan received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests include statistical learning theory, optimal transport theory, information theory, and convex optimization. He has been the recipient of the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99476583146?pwd=QVdsaTJLYU1ab2c0ODV0WmN6SzN2Zz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Sensitive Data Analytics with Local Differential Privacy
Location
Speaker:
Mr. Tianhao WANG
PhD student, Department of Computer Science
Purdue University
Abstract:
When collecting sensitive information, local differential privacy (LDP) can relieve users’ privacy concerns, as it allows users to add noise to their private information before sending data to the server. LDP has been adopted by big companies such as Google and Apple for data collection and analytics. My research focuses on improving the ecosystem of LDP. In this talk, I will first share my research on the fundamental tools in LDP, namely the frequency oracles (FOs), which estimate the frequency of each private value held by users. We proposed a framework that unifies different FOs and optimizes them. Our optimized FOs improve the estimation accuracy of Google’s and Apple’s implementations by 50% and 90%, respectively, and serve as the state-of-the-art tools for handling more advanced tasks. In the second part of my talk, I will present our work on extending the functionality of LDP, namely, how to make a database system that satisfies LDP while still supporting a variety of analytical queries.
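The simplest frequency oracle, generalized randomized response (k-RR), illustrates the interface that the optimized FOs in the talk share: perturb locally on each user’s device, then debias the aggregated counts on the server.

    import math, random
    from collections import Counter

    def krr_perturb(value, domain, epsilon):
        k = len(domain)
        p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
        if random.random() < p:
            return value                              # report truthfully
        return random.choice([v for v in domain if v != value])

    def krr_estimate(reports, domain, epsilon):
        n, k = len(reports), len(domain)
        p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
        q = (1 - p) / (k - 1)
        counts = Counter(reports)
        # invert E[count_v] = n_v * p + (n - n_v) * q to debias
        return {v: (counts[v] - n * q) / (p - q) for v in domain}

    domain = list(range(10))
    truth = [random.choice(domain[:3]) for _ in range(100_000)]
    reports = [krr_perturb(v, domain, 1.0) for v in truth]
    print(krr_estimate(reports, domain, 1.0))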
Biography:
Tianhao Wang is a Ph.D. candidate in the department of computer science, Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from software school, Fudan University in 2015. His research area is security and privacy, with a focus on differential privacy and applied cryptography. He is a member of DPSyn, which won several international differential privacy competitions. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94878534262?pwd=Z2pjcDUvQVlETzNoVWpQZHBQQktWUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Toward Reliable NLP Systems via Software Testing
Location
Speaker:
Dr. Pinjia HE
Postdoctoral researcher, Computer Science Department
ETH Zurich
Abstract:
NLP systems such as machine translation are increasingly used in our daily lives, so their reliability becomes critical; mistranslations by Google Translate, for example, can lead to misunderstanding, financial loss, and threats to personal safety and health. On the other hand, due to their complexity, such systems are difficult to get right, and because of their nature (i.e., being based on large, complex neural networks), traditional reliability techniques are challenging to apply. In this talk, I will present my recent work that has spearheaded the testing of machine translation systems, toward building reliable NLP systems. In particular, I will describe three complementary approaches which collectively found 1,000+ diverse translation errors in the widely-used Google Translate and Bing Microsoft Translator. I will also describe my work on LogPAI, an end-to-end log management framework powered by AI algorithms for traditional software reliability, and conclude the talk with my vision for making both traditional and intelligent software such as NLP systems more reliable.
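A simplified sketch of the metamorphic intuition behind this line of work (the actual approaches are more sophisticated): two source sentences differing in one word should usually receive similarly structured translations, so a large divergence flags a suspicious output. `translate` stands in for any MT API.

    import difflib

    def suspicious_pair(src_a, src_b, translate, threshold=0.5):
        out_a, out_b = translate(src_a), translate(src_b)
        sim = difflib.SequenceMatcher(None, out_a, out_b).ratio()
        return sim < threshold, (out_a, out_b)   # low similarity => flag for review

    # Usage, with pairs generated by one-word substitution:
    #   suspicious_pair("He bought a red car.", "He bought a blue car.", translate)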
Biography:
Pinjia HE has been a postdoctoral researcher in the Computer Science Department at ETH Zurich after receiving his PhD degree from The Chinese University of Hong Kong (CUHK) in 2018. He has research expertise in software engineering and artificial intelligence, and is particularly passionate about making both traditional and intelligent software reliable. His research on automated log analysis and machine translation testing appeared in top computer science venues, such as ICSE, ESEC/FSE, ASE, and TDSC. The LogPAI project led by him has been starred 2,000+ times on GitHub and downloaded 30,000+ times by 380+ organizations, and won a Most Influential Paper (MIP) award at ISSRE. He also won a 2016 Excellent Teaching Assistantship at CUHK. He has served on program committees of MET’21, DSML’21, ECOOP’20 Artifact, and ASE’19 Demo, and reviewed for top journals and conferences (e.g., TSE, TOSEM, ICSE, KDD, and IJCAI). According to Google Scholar, he has an h-index of 14 and 1,200+ citations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98498351623?pwd=UHFFUU1QbExYTXAxTWxCMk9BbW9mUT09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Edge AI – A New Battlefield for Hardware Security Research
Location
Speaker:
Prof. CHANG Chip Hong
Associate Professor
Nanyang Technological University (NTU) of Singapore
Abstract:
The flourishing of the Internet of Things (IoT) has rekindled on-premise computing, allowing data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers, and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at their core. This “model once, run optimized anywhere” paradigm shift in deep learning computations introduces new attack surfaces and threat models that are methodologically different from existing software-based attack and defense mechanisms. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are neither robust nor stealthy on multi-view targets. To generate a good adversarial example for misclassifying a real-world target under varying viewing angle, lighting, and distance, a decent number of pristine samples of the target object are required, and the feasible perturbations are substantial and visually perceptible. Edge AI also poses a difficult catch-up for existing adversarial example detectors, which are designed based on sophisticated offline analyses under the assumption that the deep learning model is implemented on a general-purpose 32-bit floating-point CPU or GPU cluster. This talk will first present a new glitch injection attack on edge DNN accelerators capable of misclassifying a target under varying viewpoints. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. The second part of the talk will present a new hardware-oriented approach for in-situ detection of adversarial inputs feeding through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge. With negligibly small hardware overhead, the glitch injection circuit and the trained shallow binary tree detector can be easily implemented alongside a deep learning model on an edge AI accelerator.
Biography:
Prof. Chip Hong Chang is an Associate Professor at the Nanyang Technological University (NTU) of Singapore. He held concurrent appointments at NTU as Assistant Chair of Alumni of the School of EEE from 2008 to 2014, Deputy Director of the Center for High Performance Embedded Systems from 2000 to 2011, and Program Director of the Center for Integrated Circuits and Systems from 2003 to 2009. He has co-edited five books, published 13 book chapters, more than 100 international journal papers (more than 70 in IEEE) and more than 180 refereed international conference papers (mostly in IEEE), and delivered over 40 colloquia and invited seminars. His current research interests include hardware security and trustable computing, low-power and fault-tolerant computing, residue number systems, and application-specific digital signal processing algorithms and architectures. Dr. Chang currently serves as Senior Area Editor of IEEE Transactions on Information Forensics and Security (TIFS), and Associate Editor of IEEE Transactions on Circuits and Systems-I (TCAS-I) and IEEE Transactions on Very Large Scale Integration (TVLSI) Systems. He was Associate Editor of IEEE TIFS and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) from 2016 to 2019, IEEE Access from 2013 to 2019, IEEE TCAS-I from 2010 to 2013, Integration, the VLSI Journal from 2013 to 2015, the Springer Journal of Hardware and Systems Security from 2016 to 2020, and Microelectronics Journal from 2014 to 2020. He has also guest-edited eight journal special issues, including for IEEE TCAS-I, IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE TCAD, and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS). He has held key appointments on the organizing and technical program committees of more than 60 international conferences (mostly IEEE), including as General Co-Chair of the 2018 IEEE Asia-Pacific Conference on Circuits and Systems and as the inaugural Workshop Chair and Steering Committee member of the ACM CCS satellite workshop on Attacks and Solutions in Hardware Security. He is the 2018-2019 IEEE CASS Distinguished Lecturer, a Fellow of the IEEE, and a Fellow of the IET.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93797957554?pwd=N2J0VjBmUFh6N0ZENVY0U1RvS0Zhdz09
Meeting ID: 937 9795 7554
Password: 607354
Enquiries: Miss Caroline TAI at Tel. 3943 8440
Design Exploration of DNN Accelerators using FPGA and Emerging Memory
Location
Speaker:
Dr. Guangyu SUN
Associate Professor
Center for Energy-efficient Computing and Applications (CECA)
Peking University
Abstract:
Deep neural networks (DNNs) have been successfully used in fields such as computer vision and natural language processing. To improve processing efficiency, various hardware accelerators have been proposed for DNN applications. In this talk, I will first review our work on design space exploration and design automation for DNN accelerators on FPGA platforms. Then, I will briefly introduce the potential and challenges of using emerging memory for energy-efficient DNN inference. After that, I will try to offer some advice for graduate study.
Biography:
Dr. Guangyu Sun is an associate professor at the Center for Energy-efficient Computing and Applications (CECA) at Peking University. He received his B.S. and M.S. degrees from Tsinghua University, Beijing, in 2003 and 2006, respectively, and his Ph.D. degree in Computer Science from the Pennsylvania State University in 2011. His research interests include computer architecture, acceleration systems, and design automation for modern applications. He has published 100+ journal and refereed conference papers in these areas. He is an associate editor of ACM TECS and ACM JETC.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95836460304?pwd=UkRwSldjNWdUWlNvNnN2TTlRZ1ZUdz09
Meeting ID: 958 3646 0304
Password: 964279
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
In-Memory Computing – An Algorithm-Architecture Co-Design Approach towards the POS/W Era
Location
Speaker:
Prof. LI Jiang
Associate Professor
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Abstract:
The rapidly rising computing power of the past decade has supported the advance of Artificial Intelligence. Still, in the post-Moore era, AI chips built with traditional CMOS processes and von Neumann architectures face huge bottlenecks at the memory wall and the energy-efficiency wall. In-memory computing architectures based on emerging memristor technology have become a very competitive computing paradigm, delivering two orders of magnitude higher energy efficiency. The memristor process has apparent advantages in power consumption, multi-bit storage, and cost. However, it faces the challenges of low manufacturing scalability and process variation, which lead to instability of computation and a limited capability to accommodate large and complex neural networks. This talk will introduce an algorithm-architecture co-optimization approach to solving these challenges.
Biography:
Li Jiang is an associate professor in the Dept. of CSE, Shanghai Jiao Tong University. He received the B.S. degree from the Dept. of CS&E, Shanghai Jiao Tong University in 2007, and the MPhil and Ph.D. degrees from the Dept. of CS&E, The Chinese University of Hong Kong in 2010 and 2013 respectively. He has published more than 50 peer-reviewed papers in top-tier computer architecture and EDA conferences and journals, including a best paper nomination at ICCAD. According to the IEEE Digital Library, five of his papers ranked in the top 5% of citations among all papers at their respective conferences. His achievements have been highly recognized and cited by academic and industry experts, including Academician Zheng Nanning, Academician William Dally, Prof. Chenming Hu, and many ACM/IEEE Fellows. Some of the achievements have been incorporated into the IEEE P1838 standard, and a number of technologies have been put into commercial use in cooperation with TSMC, Huawei and Alibaba. He received the Best Ph.D. Dissertation Award at ATS 2014 and was a finalist for TTTC's E. J. McCluskey Doctoral Thesis Award. He received the ACM Shanghai Rising Star Award and the CCF VLSI Early Career Award, and was named a 2020 CCF Distinguished Speaker. He serves as co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC and CTC. He is an associate editor of IET Computers & Digital Techniques and Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95897084094?pwd=blZlanFOczF4aWFvM2RuTDVKWFlZZz09
Meeting ID: 958 9708 4094
Password: 081783
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Speed up DNN Model Training: An Industrial Perspective
Location
Speaker:
Mr. Mike Hong
CTO of BirenTech
Abstract:
Training large DNN models is compute-intensive, often taking days, weeks or even months to complete. Therefore, how to speed it up has attracted lots of attention from both academia and industry. In this talk, we shall cover a number of accelerated DNN training techniques from an industrial perspective, including various optimizers, large batch training, distributed computation and all-reduce network topology.
Biography:
Mike Hong has been working on GPU architecture design for 26 years and is currently serving as the CTO of BirenTech, an intelligent chip design company that has attracted more than US$200 million in Series A financing since it was founded in 2019. Before joining Biren, Mike was the Chief Architect at S3, Principal Architect for the Tesla architecture at NVIDIA, and GPU team leader and Chief Architect at HiSilicon. Mike holds more than 50 US patents, including the texture compression patent that became the industry standard for all PCs, Macs and game consoles.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92074008389?pwd=OE1EbjBzWk9oejh5eUlZQ1FEc0lOUT09
Meeting ID: 920 7400 8389
Password: 782536
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Artificial Intelligence for Radiotherapy in the Era of Precision Medicine
Location
Speaker:
Prof. CAI Jing
Professor, Department of Health Technology and Informatics
The Hong Kong Polytechnic University (PolyU)
Abstract:
Artificial Intelligence (AI) is evolving rapidly and promises to transform the world in an unprecedented way. The tremendous possibilities that AI can bring to radiation oncology have triggered a flood of activities in the field. In particular, with the support of big data and accelerated computation, deep learning is taking off with tremendous algorithmic innovations and powerful neural network models. AI technology holds great promise for improving radiation therapy from treatment planning to treatment assessment. It can aid radiation oncologists in reaching unbiased consensus treatment plans, help train junior radiation oncologists, update practitioners, reduce professional costs, and improve quality assurance in clinical trials and patient care. It can significantly reduce the time and effort physicians need to contour, plan, and review. Given the promising learning tools and massive computational resources that are becoming readily available, AI will soon dramatically change the landscape of radiation oncology research and practice. This presentation will give an overview of the recent advances in AI for radiation oncology, followed by a set of examples of AI applications in various aspects of radiation therapy, including, but not limited to, organ segmentation, target volume delineation, treatment planning, quality assurance, response assessment, and outcome prediction. For example, I will present a new approach to derive lung functional images for function-guided radiation therapy, using a deep convolutional neural network to learn and exploit the underlying functional information in the CT image and generate functional perfusion images. I will demonstrate a novel method for pseudo-CT generation from multi-parametric MR images using a multi-channel multi-path generative adversarial network (MCMP-GAN) for MRI-based radiotherapy. I will also show the promising capability of MRI-based radiomics features for pre-treatment identification of adaptive radiation therapy eligibility in nasopharyngeal carcinoma (NPC) patients.
Biography:
Prof. CAI Jing earned his PhD in Engineering Physics in 2006 and completed his clinical residency in Medical Physics in 2009, both at the University of Virginia, USA. He entered academia as Assistant Professor at Duke University in 2009 and was promoted to Associate Professor in 2014. He joined The Hong Kong Polytechnic University in 2017, where he is currently a full Professor and the founding Programme Leader of the Medical Physics MSc Programme in the Department of Health Technology and Informatics. He has been board certified in Therapeutic Radiological Physics by the American Board of Radiology (ABR) since 2010. He is the PI/Co-PI of more than 20 external research grants, including 5 NIH, 3 GRF, 3 HMRF and 1 ITSP grants, with total funding of more than HK$40M. He has published over 100 journal papers and 200 conference papers/abstracts, and has supervised over 60 trainees. He serves on the editorial boards of several prestigious journals in the fields of medical physics and radiation oncology. He was elected a Fellow of the American Association of Physicists in Medicine (AAPM) in 2018.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92068646609?pwd=R0ZRR1VXSmVQOUkyQnZrd0t4dW0wUT09
Meeting ID: 920-6864-6609
Password: 076760
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Closing the Loop of Human and Robot
Location
Speaker:
Prof. LU Cewu
Research Professor at Shanghai Jiao Tong University (SJTU)
Abstract:
This talk is about closing the loop between human and robot. We present our recent research on human activity understanding and robot learning. On the human side, we present our recent work on the Human Activity Knowledge Engine (HAKE), which largely improves human activity understanding; improvements to AlphaPose, a well-known pose estimator, are also introduced. On the robot side, we discuss our understanding of robot tasks and a new insight, the "primitive model". GraspNet, the first dynamic grasping benchmark dataset, is proposed, and a novel end-to-end deep learning approach to grasping is also introduced. A 3D point-level semantic embedding method for object manipulation will be discussed. Finally, we will discuss how to further close the loop between human and robot.
Biography:
Cewu Lu is a Research Professor at Shanghai Jiao Tong University (SJTU). Before he joined SJTU, he was a research fellow at Stanford University working under Prof. Fei-Fei Li and Prof. Leonidas J. Guibas. He received his PhD degree from The Chinese University of Hong Kong, supervised by Prof. Jiaya Jia. He was selected for the Young 1000 Talent Plan, as an MIT TR35 honoree ("MIT Technology Review, 35 Innovators Under 35", China), and as a Qiushi Outstanding Young Scholar (求是杰出青年学者), the only AI awardee in the past three years. Prof. Lu serves as an Area Chair for CVPR 2020 and a reviewer for Nature. He has published about 100 papers in top AI journals and conferences, including 9 ESI highly cited papers. His research interests fall mainly in Computer Vision and Robotics Learning.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96062514495?pwd=aEp4aEl5UVhjOW1XemdpWVNZTVZOZz09
Meeting ID: 960-6251-4495
Password: 797809
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Detecting Vulnerabilities using Patch-Enhanced Vulnerability Signatures
Location
Speaker:
Prof. HUO Wei
Professor, Institute of Information Engineering (IIE)
Chinese Academy of Sciences (CAS)
Abstract:
Recurring vulnerabilities widely exist and remain undetected in real-world systems; they often result from reused code bases or shared code logic. However, the potentially small differences between vulnerable functions and their patched versions, as well as the possibly large differences between vulnerable functions and the target functions to be checked, pose challenges to current solutions. I shall introduce a novel approach to detecting recurring vulnerabilities with low false positives and low false negatives. An evaluation on ten open-source systems has shown that the proposed approach significantly outperforms state-of-the-art clone-based and function-matching-based recurring vulnerability detection approaches, with 23 CVE identifiers assigned.
Biography:
Wei HUO is a full professor at the Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS). He focuses on software security, vulnerability detection, and program analysis, and leads the VARAS (Vulnerability Analysis and Risk Assessment System) group. He has published many papers at top venues in computer security and software engineering, including ASE, ICSE, and USENIX Security. His group has also uncovered hundreds of 0-day vulnerabilities in popular software and firmware, with 100+ CVEs assigned.
Join Zoom Meeting:
https://cuhk.zoom.us/j/97738806643?pwd=dTIzcWhUR2pRWjBWaG9tZkdkRS9vUT09
Meeting ID: 977-3880-6643
Password: 131738
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Computational Fabrication and Assembly: from Optimization and Search to Learning
Location
Speaker:
Prof. FU Chi Wing Philip
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Computational fabrication is an emerging research topic in computer graphics, beginning roughly a decade ago with the need to develop computational solutions for efficient 3D printing and later for 3D fabrication and object assembly at large. In this talk, I will introduce a series of research works in this area, with a particular focus on the following two recent ones:
(i) Computational LEGO Technic assembly, in which we model the component bricks, their connection mechanisms, and the input user sketch for computation, and then further develop an optimization model with necessary constraints and our layout modification operator to efficiently search for an optimum LEGO Technic assembly. Our results not only match the input sketch with coherently-connected LEGO Technic bricks but also respect the intended symmetry and structural integrity of the designs.
(ii) TilinGNN, the first neural optimization approach to solve a classical instance of the tiling problem, in which we formulate and train a neural network model to maximize the tiling coverage on target shapes, while avoiding overlaps and holes between the tiles in a self-supervised manner. In short, we model the tiling problem as a discrete problem, in which the network is trained to predict the goodness of each candidate tile placement, allowing us to iteratively select tile placements and assemble a tiling on the target shape (see the sketch below).
In the end, I will also present some results from my other research in the areas of point cloud processing, 3D vision, and augmented reality.
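To make the iterative selection loop in (ii) concrete, here is a minimal greedy sketch; all names are hypothetical stand-ins, and the real TilinGNN scores candidate placements with a trained graph neural network rather than the toy scoring function used here.

```python
# Minimal sketch of the iterative tile-selection loop described in (ii).
# TilinGNN scores candidate placements with a trained graph neural
# network; `score` below is a toy stand-in (hypothetical).

def greedy_tiling(candidates, score, overlaps):
    """Iteratively pick the best-scoring tile placement that does not
    overlap anything already selected."""
    selected = []
    remaining = set(candidates)
    while remaining:
        best = max(remaining, key=score)
        selected.append(best)
        # Drop the chosen tile and every candidate overlapping it.
        remaining = {c for c in remaining
                     if c != best and not overlaps(c, best)}
    return selected

# Toy usage: candidates are (x, y, coverage) triples on a grid.
candidates = [(0, 0, 4), (1, 0, 3), (2, 0, 4), (0, 1, 2)]
overlaps = lambda a, b: abs(a[0] - b[0]) < 2 and abs(a[1] - b[1]) < 2
print(greedy_tiling(candidates, score=lambda c: c[2], overlaps=overlaps))
```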
Biography:
Chi-Wing Fu is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). His research interests are in computer graphics, vision, and human-computer interaction, or more specifically in computational fabrication, 3D computer vision, and user interaction. Chi-Wing obtained his B.Sc. and M.Phil. from CUHK and his Ph.D. from Indiana University, Bloomington. Before re-joining CUHK in early 2016, he was an associate professor with tenure in the School of Computer Science and Engineering at Nanyang Technological University, Singapore.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99943410200
Meeting ID: 999 4341 0200
Password: 492333
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Bioinformatics: Turning experimental data into biomedical hypotheses, knowledge and applications
Location
Speaker:
Prof. YIP Yuk Lap Kevin
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Contemporary biomedical research relies heavily on high-throughput technologies that examine many objects, their individual activities or their mutual interactions in a single experiment. The data produced are usually high-dimensional, noisy and biased. An important aim of bioinformatics is to extract useful information from such data for developing both conceptual understandings of the biomedical phenomena and downstream applications. This requires the integration of knowledge from multiple disciplines, such as data properties from the biotechnology, molecular and cellular mechanisms from biology, evolutionary principles from genetics, and patient-, disease- and drug-related information from medicine. Only with these inputs can the data analysis goals be meaningfully formulated as computational problems and properly solved. Computational findings also need to be subsequently validated and functionally tested by additional experiments, possibly iterating back-and-forth between data production and data analysis many times before a conclusion can be drawn. In this seminar, I will use my own research to explain how bioinformatics can help create new biomedical hypotheses, knowledge and applications, with a focus on recent works that use machine learning methods to study basic molecular mechanisms and specific human diseases.
Biography:
Kevin Yip is an associate professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). He obtained his bachelor degree in computer engineering and master degree in computer science from The University of Hong Kong, and his PhD degree in computer science from Yale University. Before joining CUHK, he worked as a researcher at the HKU-Pasteur Institute, the Yale Center for Medical Informatics, and the Department of Molecular Biophysics and Biochemistry at Yale University. Since his master's studies, Dr. Yip has been conducting research in bioinformatics, with special interests in modeling gene regulatory mechanisms and studying how their perturbations are related to human diseases. Dr. Yip has participated in several international research consortia, including the Encyclopedia of DNA Elements (ENCODE), model organism ENCODE (modENCODE), and the International Human Epigenomics Consortium (IHEC). Locally, Dr. Yip has been collaborating with scientists and clinicians in the quest to understand the mechanisms underlying different human diseases, such as hepatocellular carcinoma, nasopharyngeal carcinoma, type II diabetes, and Hirschsprung's disease. Dr. Yip received the title of Outstanding Fellow from the Faculty of Engineering and the Young Researcher Award from CUHK in 2019.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98458448644
Meeting ID: 984 5844 8644
Password: 945709
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Dependable Storage Systems
Location
Speaker:
Prof. LEE Pak Ching Patrick
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Making large-scale storage systems dependable against failures is critical yet non-trivial in the face of the ever-increasing amount of data. In this talk, I will present my work on dependable storage systems, with the primary goal of improving the fault tolerance, recovery, security, and performance of different types of storage architectures. To make a case, I will present new theoretical and applied findings on erasure coding, a low-cost redundancy technique for fault-tolerant storage. I will present general techniques and code constructions for accelerating the repair of storage failures, and further propose a unified framework for readily deploying a variety of erasure coding solutions in state-of-the-art distributed storage systems. I will also introduce my other work on the dependability of applied distributed systems, in the areas of encrypted deduplication, key-value stores, network measurement, and stream processing. Finally, I will highlight the industrial impact of our work beyond publications.
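As background for the redundancy technique at the centre of the talk, here is a minimal sketch of erasure coding in its simplest form, a single XOR parity block; production systems such as those in the talk use Reed-Solomon and repair-optimised codes instead, and all data here is illustrative.

```python
# Minimal sketch of fault tolerance via erasure coding, using the
# simplest possible code: k data blocks plus one XOR parity block.
# Real systems use Reed-Solomon or repair-optimized codes instead.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]     # k = 3 data blocks
parity = xor_blocks(data)              # 1 parity block (33% overhead)

# Repair: any single lost block is the XOR of the surviving ones.
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
repaired = xor_blocks(survivors + [parity])
assert repaired == data[lost_index]
print("repaired block:", repaired)
```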
Biography:
Patrick P. C. Lee is now an Associate Professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong. His research interests are in various applied/systems topics on improving the dependability of large-scale software systems, including storage systems, distributed systems and networks, and cloud computing. He now serves as an Associate Editor of IEEE/ACM Transactions on Networking and ACM Transactions on Storage. He served as a TPC co-chair of APSys 2020, and as a TPC member of several major systems and networking conferences. He received the best paper awards at CoNEXT 2008, TrustCom 2011, and SRDS 2020. For details, please refer to his personal homepage: http://www.cse.cuhk.edu.hk/~pclee.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96195753407
Meeting ID: 961 9575 3407
Password: 892391
Enquiries: Miss Caroline Tai at Tel. 3943 8440
From Combating Errors to Embracing Errors in Computing Systems
Location
Speaker:
Prof. Xu Qiang
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Faults are inevitable in any computing system, and they may occur due to environmental disturbance, circuit aging, or malicious attacks. On the one hand, designers try all means to prevent, contain, and control faults to achieve error-free computation, especially for safety-critical applications. On the other hand, many applications in the big data era (e.g., search engines and recommender systems) that require lots of computing power are often error-tolerant. In this talk, we present techniques developed by our group over the past several years, including error-tolerant solutions that combat all sorts of hardware faults and approximate computing techniques that embrace errors in computing systems for energy savings.
Biography:
Qiang Xu is an associate professor of Computer Science & Engineering at The Chinese University of Hong Kong. He leads the CUhk REliable laboratory (CURE Lab.), and his research interests include electronic design automation, fault-tolerant computing and trusted computing. Dr. Xu has published 150+ papers in refereed journals and conference proceedings, and received two Best Paper Awards and five Best Paper Award Nominations. He is currently serving as an associate editor of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and Integration, the VLSI Journal.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96930968459
Meeting ID: 969 3096 8459
Password: 043377
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Memory/Storage Optimization for Small/Big Systems
Location
Speaker:
Prof. Zili SHAO
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Memory/storage optimization is one of the most critical issues in computer systems. In this talk, I will first summarize our work in optimizing memory/storage systems for embedded and big data applications. Then, I will present an approach by deeply integrating device and application to optimize flash-based key-value caching – one of the most important building blocks in modern web infrastructures and high-performance data-intensive applications. I will also introduce our recent work in optimizing unique address checking for IoT blockchains.
Biography:
Zili Shao is an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree from The University of Texas at Dallas in 2005. Before joining CUHK in 2018, he was with the Department of Computing, The Hong Kong Polytechnic University, where he started in 2005. His current research interests include embedded software and systems, storage systems and related industrial applications.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95131164721
Meeting ID: 951 3116 4721
Password: 793297
Enquiries: Miss Caroline Tai at Tel. 3943 8440
VLSI Mask Optimization: From Shallow To Deep Learning
Location
Speaker:
Prof. YU Bei
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
The continued scaling of integrated circuit technologies, along with the increased design complexity, has exacerbated the challenges associated with manufacturability and yield. In today’s semiconductor manufacturing, lithography plays a fundamental role in printing design patterns on silicon. However, the growing complexity and variation of the manufacturing process have tremendously increased the lithography modeling and simulation cost. Both the role and the cost of mask optimization – now indispensable in the design process – have increased. Parallel to these developments are the recent advancements in machine learning which have provided a far-reaching data-driven perspective for problem solving. In this talk, we shed light on the recent deep learning based approaches that have provided a new lens to examine traditional mask optimization challenges. We present hotspot detection techniques, leveraging advanced learning paradigms, which have demonstrated unprecedented efficiency. Moreover, we demonstrate the role deep learning can play in optical proximity correction (OPC) by presenting its successful application in our full-stack mask optimization framework.
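As a toy illustration of the learning-based hotspot detection mentioned above, the sketch below builds a small binary classifier over layout clips; it assumes PyTorch is available, runs on random data, and is far simpler than the models discussed in the talk.

```python
# Minimal sketch of a CNN hotspot detector: classify small layout clips
# as hotspot / non-hotspot. Toy architecture and random data only; the
# techniques in the talk use far richer models and real lithography data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),  # two classes: hotspot vs. non-hotspot
)

clips = torch.rand(4, 1, 32, 32)   # a batch of layout clips (random here)
logits = model(clips)
print("predicted classes:", logits.argmax(dim=1).tolist())
```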
Biography:
Bei Yu is currently an Assistant Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received the Ph.D. degree from Electrical and Computer Engineering, University of Texas at Austin, USA in 2014, and the M.S. degree in Computer Science from Tsinghua University, China in 2010. His current research interests include machine learning and combinatorial algorithms with applications in VLSI computer-aided design (CAD). He has served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), on the program committees of DAC, ICCAD, DATE, ASPDAC and ISPD, and on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES), Integration, the VLSI Journal, and IET Cyber-Physical Systems: Theory & Applications. He is Editor of the IEEE TCCPS Newsletter.
Dr. Yu received six Best Paper Awards from International Conference on Tools with Artificial Intelligence (ICTAI) 2019, Integration, the VLSI Journal in 2018, International Symposium on Physical Design (ISPD) 2017, SPIE Advanced Lithography Conference 2016, International Conference on Computer-Aided Design (ICCAD) 2013, Asia and South Pacific Design Automation Conference (ASPDAC) 2012, four other Best Paper Award Nominations (ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011), six ICCAD/ISPD contest awards, IBM Ph.D. Scholarship in 2012, SPIE Education Scholarship in 2013, and EDAA Outstanding Dissertation Award in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96114730370
Meeting ID: 961 1473 0370
Password: 984602
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Local Versus Global Security in Computation
Location
Speaker:
Prof. Andrej BOGDANOV
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Secret sharing schemes are at the heart of cryptographic protocol design. In this talk I will present my recent discoveries about the informational and computational complexity of secret sharing and their relevance to secure multiparty computation:
- The share size in the seminal threshold secret sharing scheme of Shamir and Blakley from the 1970s is essentially optimal (see the sketch below).
- Secret reconstruction can sometimes be carried out in the computational model of bounded-depth circuits, without resorting to modular linear algebra.
- Private circuits that are secure against local information leakage are also secure against limited but natural forms of global leakage.
I will also touch upon some loosely related results in cryptography, pseudorandomness, and coding theory.
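For readers unfamiliar with threshold secret sharing, here is a minimal sketch of Shamir's scheme referenced in the first point above; it uses a toy prime field and Python's random module, so it is illustrative only and not cryptographically secure.

```python
# Minimal sketch of Shamir's t-out-of-n threshold scheme over a small
# prime field (illustration only; production code needs a large prime
# and cryptographic randomness).
import random

P = 2**13 - 1  # small Mersenne prime field for illustration

def share(secret, t, n):
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    eval_at = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=1234, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 1234
```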
Biography:
Andrej Bogdanov is associate professor of Computer Science and Engineering and director of the Institute of Theoretical Computer Science and Communications at the Chinese University of Hong Kong. His research interests are in cryptography, pseudorandomness, and sublinear-time algorithms.
Andrej obtained his B.S. and M. Eng. degrees from MIT in 2001 and his Ph.D. from UC Berkeley in 2005. Before joining CUHK in 2008 he was a postdoctoral associate at the Institute for Advanced Study in Princeton, at DIMACS (Rutgers University), and at ITCS (Tsinghua University). He was a visiting professor at the Tokyo Institute of Technology in 2013 and a long-term program participant at the UC Berkeley Simons Institute for the Theory of Computing in 2017.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94008322629
Meeting ID: 940 0832 2629
Password: 524278
Enquiries: Miss Caroline Tai at Tel. 3943 8440
A Compiler Infrastructure for Embedded Multicore SoCs
Location
Speaker:
Dr. Sheng Weihua
Chief Expert
Software Tools and Engineering at Huawei
Abstract:
Compilers play a pivotal role in the software development process for microprocessors by automatically translating high-level programming languages into machine-specific executable code. For a long time, while processors were scalar, compilers were regarded as a black box by the software community, owing to their successful encapsulation of machine-specific details. Over a decade ago, major computing processor manufacturers began to integrate multiple (simple) cores into a single chip, namely multicores, to retain scaling according to Moore's law. The embedded computing industry followed suit, introducing multicores years later, amid aggressive marketing campaigns that highlighted the number of processors for product differentiation in consumer electronics. While the transition from scalar (uni)processors to multicores is an evolutionary step in terms of hardware, it has given rise to fundamental changes in software development. The performance "free lunch", having ridden on the growth of faster processors, is over. Compiler technology has not developed and scaled for multicore architectures, which contributes considerably to the software crisis of the multicore age. This talk addresses the challenges of developing compilers for multicore SoC (System-on-Chip) software development, focusing on embedded systems such as wireless terminals and modems. It also traces a trajectory from research towards commercial prototyping, shedding light on some lessons on how to do research effectively.
Biography:
Mr. Sheng has early career roots in the electronic design automation industry (CoWare and Synopsys). He spearheaded the development of multicore programming tools at RWTH Aachen University from 2007 to 2013, which later became the foundation of Silexica. He has a proven record of successful consultation and collaboration with top-tier technology companies on multicore design tools. Mr. Sheng is a co-founder of Silexica Software Solutions GmbH in Germany, where he served as CTO during 2014-2016. Since 2017, as VP and GM of APAC, he was responsible for all aspects of Silexica's sales and operations across the APAC region. In 2019 he joined Huawei Technologies. Mr. Sheng received his BEng from Tsinghua University and his MSc/PhD from RWTH Aachen University in Germany.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93855822245
Meeting ID: 938-5582-2245
Password: 429533
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
Robust Deep Neural Network Design under Fault Injection Attack
Location
Speaker:
Prof. Xu Qiang
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Deep neural networks (DNNs) have gained mainstream adoption in the past several years, and many artificial intelligence (AI) applications employ DNNs for safety- and security-critical tasks, e.g., biometric authentication and autonomous driving. In this talk, we first briefly discuss the security issues in deep learning. Then, we focus on fault injection attacks and introduce some of our recent works in this domain.
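As a toy illustration of why fault injection threatens DNNs, the sketch below flips a single exponent bit of one float32 weight and shows the resulting output corruption; the one-weight "model" is hypothetical, not from the speaker's work.

```python
# Minimal sketch of a weight-level fault-injection attack on a DNN:
# flip one bit in a (toy) weight and observe the output change.
import struct

def flip_bit(value, bit):
    """Flip one bit of a float32 weight, as a hardware fault might."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out

w = 0.5   # toy weight; real attacks target weights of a trained model
x = 1.0
print("clean output: ", w * x)
print("faulty output:", flip_bit(w, 30) * x)  # one exponent-bit flip
```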
Biography:
Qiang Xu leads the CUhk REliable laboratory (CURE Lab.) and his research interests include fault-tolerant computing and trusted computing. He has published 150+ papers in these fields and received a number of best paper awards/nominations.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93862944206
Meeting ID: 938-6294-4206
Enquiries: Miss Rachel Cheuk at Tel. 3943 8439
The Coming of Age of Microfluidic Biochips: Connecting Biochemistry to Electronic Design Automation
Location
Speaker:
Prof. Tsung-yi HO
Professor
Department of Computer Science
National Tsing Hua University
Abstract:
Advances in microfluidic technologies have led to the emergence of biochip devices for automating laboratory procedures in biochemistry and molecular biology. Corresponding systems are revolutionizing a diverse range of applications, e.g., point-of-care clinical diagnostics, drug discovery, and DNA sequencing, with an increasing market. However, continued growth (and the larger revenues resulting from technology adoption by pharmaceutical and healthcare companies) depends on advances in chip integration and design-automation tools. There is thus a need to deliver the same level of design automation support to the biochip designer that the semiconductor industry now takes for granted; in particular, efficient design automation algorithms are needed for implementing biochemistry protocols, to ensure that biochips are as versatile as the macro-labs they are intended to replace. This talk will first describe technology platforms for accomplishing "biochemistry on a chip", introducing the audience to both droplet-based "digital" microfluidics based on electrowetting actuation and flow-based "continuous" microfluidics based on microvalve technology. Next, system-level synthesis (including operation scheduling and resource binding algorithms), physical-level synthesis (including placement and routing optimization), and control synthesis with sensor feedback-based cyberphysical adaptation will be presented. In this way, the audience will see how a "biochip compiler" can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor's clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Finally, recent advances in the open-source microfluidic ecosystem will be covered.
Biography:
Tsung-Yi Ho received his Ph.D. in Electrical Engineering from National Taiwan University in 2005. He is a Professor with the Department of Computer Science of National Tsing Hua University, Hsinchu, Taiwan. His research interests include several areas of computing and emerging technologies, especially in design automation of microfluidic biochips. He has been the recipient of the Invitational Fellowship of the Japan Society for the Promotion of Science (JSPS), the Humboldt Research Fellowship by the Alexander von Humboldt Foundation, the Hans Fischer Fellowship by the Institute of Advanced Study of the Technische Universität München, and the International Visiting Research Scholarship by the Peter Wall Institute of Advanced Study of the University of British Columbia. He was a recipient of the Best Paper Awards at the VLSI Test Symposium (VTS) in 2013 and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2015. He served as a Distinguished Visitor of the IEEE Computer Society for 2013-2015, a Distinguished Lecturer of the IEEE Circuits and Systems Society for 2016-2017, the Chair of the IEEE Computer Society Tainan Chapter for 2013-2015, and the Chair of the ACM SIGDA Taiwan Chapter for 2014-2015. Currently, he serves as the Program Director of both EDA and AI Research Programs of Ministry of Science and Technology in Taiwan, the VP Technical Activities of IEEE CEDA, an ACM Distinguished Speaker, and an Associate Editor of the ACM Journal on Emerging Technologies in Computing Systems, ACM Transactions on Design Automation of Electronic Systems, ACM Transactions on Embedded Computing Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and IEEE Transactions on Very Large Scale Integration Systems, a Guest Editor of IEEE Design & Test of Computers, and the Technical Program Committees of major conferences, including DAC, ICCAD, DATE, ASP-DAC, ISPD, ICCD, etc. He is a Distinguished Member of ACM.
Join Zoom Meeting:
https://cuhk.zoom.us/j/94385618900
https://cuhk.zoom.com.cn/j/94385618900 (Mainland China)
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Towards Understanding Biomolecular Structure and Function with Deep Learning
Location
Speaker:
Mr. Yu LI
PhD student
King Abdullah University of Science & Technology (KAUST)
Abstract:
Biomolecules, existing in high-order structural forms, are indispensable for the normal functioning of our bodies. To demystify the critical biological processes they drive, we need to investigate biomolecular structures and functions. In this talk, we showcase our efforts in that research direction using deep learning. First, we proposed a deep-learning-guided Bayesian inference framework for reconstructing super-resolved structure images from super-resolution fluorescence microscopy data. This framework enables us to observe overall biomolecular structures in living cells with super-resolution in almost real time. Then, we zoom in on a particular biomolecule, RNA, and predict its secondary structure. For this problem, one of the oldest in bioinformatics, we proposed an unrolled deep learning method that brings a 20% performance improvement in F1 score. Finally, by leveraging physiochemical features and deep learning, we proposed a first-of-its-kind framework to investigate the interaction between RNA and RNA-binding proteins (RBPs). This framework provides both interaction details and high-throughput binding prediction results. Extensive in vitro and in vivo biological experiments demonstrate the effectiveness of the proposed method.
Biography:
Yu Li is a PhD student at KAUST in Saudi Arabia, majoring in Computer Science under the supervision of Prof. Xin Gao. He is a member of the Computational Bioscience Research Center (CBRC) at KAUST. His main research interest is developing novel machine learning methods, mainly deep learning methods, for solving computational problems in biology and understanding the principles behind the bio-world. He obtained his MS degree in CS from KAUST in 2016. Before that, he received his Bachelor's degree in Biosciences from the University of Science and Technology of China (USTC).
Join Zoom Meeting:
https://cuhk.zoom.us/j/91295938758
https://cuhk.zoom.com.cn/j/91295938758 (Mainland China)
Enquiries: Miss Caroline Tai at Tel. 3943 8440
High-Performance Data Analytics Frameworks
Location
Speaker:
Prof. James CHENG
Assistant Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Distributed data analytics frameworks lie at the heart of modern computing infrastructures in many organizations. In this talk, I’ll introduce my work on large-scale data analytics frameworks, including systems designed for specialized workloads (e.g. graph analytics, machine learning, high dimensional similarity search) and those for general workloads. I will also show some applications of these systems and their impact.
Biography:
James Cheng obtained his B.Eng. and Ph.D. degrees from the Hong Kong University of Science and Technology. His research focuses on distributed computing frameworks, large-scale graph analytics, and distributed machine learning.
Enquiries: Ms. Crystal Tam at tel. 3943 8439
How To Preserve Privacy In Learning?
Location
Speaker:
Mr. Di WANG
PhD student
State University of New York at Buffalo
Abstract:
Recent research has shown that most existing learning models are vulnerable to various privacy attacks. Thus, a major challenge facing the machine learning community is how to learn effectively from sensitive data. An effective approach to this problem is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, Differential Privacy (DP) has become a standard for private data analysis. Despite its rapid theoretical development, DP's adoption by the machine learning community remains slow due to various challenges arising from the data, the privacy models and the learning tasks. In this talk, I will use the Empirical Risk Minimization (ERM) problem as an example and show how to overcome these challenges. In particular, I will first talk about how to overcome the high-dimensionality challenge in the data for Sparse Linear Regression in the local DP (LDP) model. Then, I will discuss the challenge posed by the non-interactive LDP model and show a series of results that reduce the exponential sample complexity of ERM. Next, I will present techniques for achieving DP for ERM with non-convex loss functions. Finally, I will discuss some future research along these directions.
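For context, the sketch below shows the Laplace mechanism, the basic primitive behind differential privacy: perturb a query answer with noise calibrated to the query's sensitivity over the privacy budget epsilon. It assumes NumPy and is far simpler than the LDP and ERM algorithms covered in the talk; the data is invented.

```python
# Minimal sketch of the Laplace mechanism, the basic primitive behind
# differential privacy: add noise with scale = sensitivity / epsilon.
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    return true_answer + np.random.laplace(0.0, sensitivity / epsilon)

incomes = [30_000, 45_000, 52_000, 61_000]   # toy sensitive data
count = sum(1 for v in incomes if v > 50_000)
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
print(laplace_mechanism(count, sensitivity=1.0, epsilon=0.5))
```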
Biography:
Di Wang is currently a PhD student in the Department of Computer Science and Engineering at the State University of New York (SUNY) at Buffalo. Before that, he obtained his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. During his PhD studies, he has been invited as a visiting student to the University of California, Berkeley, Harvard University, and Boston University. His research areas include differentially private machine learning, adversarial machine learning, interpretable machine learning, robust estimation and optimization. He has received the SEAS Dean’s Graduate Achievement Award and the Best CSE Graduate Research Award from SUNY Buffalo.
Join Zoom Meeting:
https://cuhk.zoom.us/j/98545048742
https://cuhk.zoom.com.cn/j/98545048742 (Mainland China)
Meeting ID: 985 4504 8742
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Transfer Learning for Language Understanding and Generation
Location
Speaker:
Mr. Di JIN
PhD student
MIT
Abstract:
Deep learning models have become increasingly prevalent in various Natural Language Processing (NLP) tasks, and have even surpassed human-level performance in some of them. However, the performance of these models degrades significantly on low-resource data, in some cases falling below that of conventional shallow models. In this work, we combat the curse of data inefficiency with the help of transfer learning, for both language understanding and generation tasks. First, I will introduce MMM, a Multi-stage Multi-task learning framework for the Multi-choice Question Answering (MCQA) task, which brings around 10% performance improvement on 5 low-resource MCQA datasets. Second, an iterative back-translation (IBT) scheme is proposed to boost the performance of machine translation models on zero-shot domains (with no labeled data) by adapting from a source domain with large-scale labeled data.
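The iterative back-translation loop can be sketched as follows; `train` and `translate` are hypothetical placeholders (with toy dictionary stand-ins so the sketch runs end-to-end), not the speaker's implementation.

```python
# Minimal sketch of the iterative back-translation (IBT) loop described
# above. `train`, `translate`, and the data are illustrative only.

def iterative_back_translation(parallel_src_tgt, mono_tgt, rounds,
                               train, translate):
    fwd = train(parallel_src_tgt)                        # source -> target
    bwd = train([(t, s) for s, t in parallel_src_tgt])   # target -> source
    for _ in range(rounds):
        # Back-translate in-domain monolingual target text to build
        # synthetic parallel pairs for the zero-shot domain...
        synthetic = [(translate(bwd, t), t) for t in mono_tgt]
        # ...then retrain both models on real + synthetic data.
        fwd = train(parallel_src_tgt + synthetic)
        bwd = train([(t, s) for s, t in parallel_src_tgt + synthetic])
    return fwd

# Toy stand-ins so the sketch runs: a "model" is just a phrase table.
toy_train = lambda pairs: dict(pairs)
toy_translate = lambda model, sent: model.get(sent, sent)
model = iterative_back_translation([("hallo", "hello")], ["hello", "world"],
                                   rounds=1, train=toy_train,
                                   translate=toy_translate)
print(model)
```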
Biography:
Di Jin is a fifth-year PhD student at MIT working with Prof. Peter Szolovits. He works on Natural Language Processing (NLP) and its applications in the biomedical and clinical domains. His previous work focused on sequential sentence classification, transfer learning for low-resource data, adversarial attack and defense, and text editing/rewriting.
Join Zoom Meeting:
https://cuhk.zoom.us/j/834299320
https://cuhk.zoom.com.cn/j/834299320 (Mainland China)
Meeting ID: 834 299 320
Find your local number: https://cuhk.zoom.us/u/abeVNXWmN
Enquiries: Miss Caroline Tai at Tel. 3943 8440
Coupling Decentralized Key-Value Stores with Erasure Coding
Location
Speaker:
Prof. Patrick Lee Pak Ching
Associate Professor
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Abstract:
Modern decentralized key-value stores often replicate and distribute data via consistent hashing for availability and scalability. Compared to replication, erasure coding is a promising redundancy approach that provides availability guarantees at much lower cost. However, when combined with consistent hashing, erasure coding incurs a lot of parity updates during scaling (i.e., adding or removing nodes) and cannot efficiently handle degraded reads caused by scaling. We propose a novel erasure coding model called FragEC, which incurs no parity updates during scaling. We further extend consistent hashing with multiple hash rings to enable erasure coding to seamlessly address degraded reads during scaling. We realize our design as an in-memory key-value store called ECHash, and conduct testbed experiments on different scaling workloads in both local and cloud environments. We show that ECHash achieves better scaling performance (in terms of scaling throughput and degraded read latency during scaling) over the baseline erasure coding implementation, while maintaining high basic I/O and node repair performance.
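For readers unfamiliar with consistent hashing, here is a minimal sketch of the hash-ring lookup that decentralized key-value stores use; the node and key names are illustrative, and ECHash itself goes further by coordinating multiple rings with erasure coding.

```python
# Minimal sketch of consistent hashing: nodes and keys hash onto a
# ring, and each key is owned by the first node clockwise from its
# hash. (Illustration only; real stores add virtual nodes, replication
# and, in ECHash, erasure coding across multiple rings.)
import bisect
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect.bisect(hashes, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
for key in ["alpha", "beta", "gamma"]:
    print(key, "->", ring.lookup(key))
```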
Speaker’s Bio:
Patrick Lee is now an Associate Professor at CUHK CSE. Please refer to http://www.cse.cuhk.edu.hk/~pclee.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Complexity Management in the Design of Cyber-Physical Systems
Speaker:
Prof. Hermann KOPETZ
Professor Emeritus
Technical University of Vienna
Abstract:
The human effort required to understand, design, and maintain a software system depends on the complexity of the artifact. After a short introduction to the different facets of complexity, this talk deals with the characteristics of multi-level models and the appearance of emergent phenomena. The focus of the core section of the talk is a discussion of simplification principles in the design of Cyber-Physical Systems. The most widely used simplification principle, divide and conquer, partitions a large system horizontally, temporally, or vertically into nearly independent parts that are small enough for their behavior to be understood given the limited capacity of the human cognitive apparatus. The most effective, and most difficult, simplification principle is the new conceptualization of the emergent properties of interacting parts.
A more detailed discussion of the topic is contained in the upcoming book: Simplicity is Complex, Foundations of Cyber-Physical System Design that will be published by Springer Verlag in the summer of 2019.
Speaker’s Bio:
Hermann Kopetz received a PhD degree in Physics sub auspiciis praesidentis from the University of Vienna in 1968 and has been professor emeritus at the Technical University of Vienna since 2011. He is the chief architect of the time-triggered technology for dependable embedded systems and a co-founder of the company TTTech. The time-triggered technology is deployed in leading aerospace, automotive and industrial applications. Kopetz is a Life Fellow of the IEEE and a full member of the Austrian Academy of Science. He received a Dr. honoris causa degree from the University Paul Sabatier in Toulouse in 2007. Kopetz served as the chairman of the IEEE Computer Society Technical Committee on Dependable Computing and Fault Tolerance and on the program committees of many scientific conferences. He is a founding member and a former chairman of IFIP WG 10.4. Kopetz has written a widely used textbook on Real-Time Systems (which has been translated into Chinese) and published more than 200 papers and 30 patents.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Scalable Bioinformatics Methods For Single Cell Data
Location
Speaker:
Dr. Joshua Ho
Associate Professor
School of Biomedical Sciences
University of Hong Kong
Abstract:
Single cell RNA-seq and other high throughput technologies have revolutionised our ability to interrogate cellular heterogeneity, with broad applications in biology and medicine. Standard bioinformatics pipelines are designed to process individual data sets containing thousands of single cells. Nonetheless, data sets are increasing in size, and some biological questions can only be addressed by performing large-scale data integration. There is a need to develop scalable bioinformatics tools that can handle large data sets (e.g., with >1 million cells). Our laboratory has been developing scalable bioinformatics tools that make use of modern cloud computing technology, fast heuristic algorithms, and virtual reality visualisation to support scalable data processing, analysis, and exploration of large single cell data. In this talk, we will describe some of these tools and their applications.
Speaker’s Bio:
Dr Joshua Ho is an Associate Professor in the School of Biomedical Sciences at the University of Hong Kong (HKU). Dr Ho completed his BSc (Hon 1, Medal) and PhD in Bioinformatics from the University of Sydney, and undertook postdoctoral research at the Harvard Medical School. His research focuses on advanced bioinformatics technology, ranging from scalable single cell analytics, metagenomic data analysis, and digital healthcare technology (such as mobile health, wearable devices, and healthcare artificial intelligence). Dr Ho has over 80 publications, including first or senior-author papers in leading journals such as Nature, Genome Biology, Nucleic Acids Research and Science Signaling. His research excellence has been recognized by the 2015 NSW Ministerial Award for Rising Star in Cardiovascular Research, the 2015 Australian Epigenetics Alliance’s Illumina Early Career Research Award, and the 2016 Young Tall Poppy Science Award.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Temporal Logic Semantics for Teleo-Reactive Robotic Agent Programs
Location
Speaker:
Prof. Keith L. Clark
Emeritus Professor
Imperial College London
Abstract:
Teleo-Reactive (TR) robotic agent programs comprise sequences of guarded action rules clustered into named parameterised procedures. Their ancestry goes back to the first cognitive robot, Shakey. Like Shakey, a TR programmed robotic agent has a deductive Belief Store comprising constantly changing predicate logic percept facts, and fixed knowledge facts and rules for querying the percepts. In this paper we introduce TR programming using a simple example expressed in the teleo-reactive programming language TeleoR, which is a syntactic extension of QuLog, a typed logic programming language used for the agent’s Belief Store. The example program illustrates key properties that a TeleoR program should have. We give formal definitions of these key properties, and an informal operational semantics of the evaluation of a TeleoR procedure call. We then formally express the key properties in LTL. Finally we show how their LTL formalisation can be used to prove key properties of TeleoR procedures using the example TeleoR program.
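To give a flavour of the guarded action rules described above, here is a minimal Python rendering of one teleo-reactive evaluation step; TeleoR itself is a typed logic language with a richer operational semantics, and the rules and belief names below are invented for illustration.

```python
# Minimal sketch of teleo-reactive evaluation: rules are (guard, action)
# pairs checked in order against the current Belief Store, and the first
# rule whose guard holds fires.

def tr_step(rules, beliefs):
    for guard, action in rules:
        if guard(beliefs):
            return action
    return "idle"

# Toy "get to the charger" procedure: guards are ordered from the goal
# condition down to the most basic fallback, as in a TR procedure.
rules = [
    (lambda b: b["at_charger"],      "dock"),
    (lambda b: b["charger_visible"], "move_towards_charger"),
    (lambda b: True,                 "wander"),
]
print(tr_step(rules, {"at_charger": False, "charger_visible": True}))
```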
Speaker’s Bio:
Keith Clark has Bachelor degrees in both mathematics and philosophy and a PhD in Computational Logic. He is one of the founders of Logic Programming. His early research was primarily in the theory and practice of LP. His paper: “Negation as Failure” (1978), giving a semantics to Prolog’s negation operator, has over 3000 citations.
In 1981, inspired by Hoare's CSP, with his PhD student Steve Gregory, he introduced the concepts of committed-choice non-determinism and stream-communicating and-parallel sub-proofs into logic programming. This restriction of the LP concept was then adopted by the Japanese Fifth Generation Project (FGP), which had the goal of building multi-processor knowledge-processing computers. Unfortunately, the restrictions meant it was not a natural tool for building knowledge-processing applications, and the FGP project failed. Since 1990 his research emphasis has been on the design, implementation and application of multi-threaded rule-based programming languages, with a strong declarative component, for multi-agent and cognitive robotic applications.
He has held visiting positions at Stanford University, UC Santa Cruz, Syracuse University and Uppsala University, amongst others. He is currently an Emeritus Professor at Imperial, and an Honorary Professor at the University of Queensland and the University of New South Wales. He has consulted for the Japanese Fifth Generation Project, Hewlett Packard, IBM, Fujitsu and two start-ups. With colleague Frank McCabe, he founded the company Logic Programming Associates in 1980, which produced and marketed Prolog systems for micro-computers, offering training and consultancy on their use. The star product was MacProlog, with primitives for exploiting the Mac GUI for AI applications.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
LEC: Learning Driven Data-path Equivalence Checking
Location
Speaker:
Dr. Jiang Long
Apple silicon division
Abstract:
LEC is a learning-based framework for solving the data-path equivalence checking problem in a high-level synthesis design flow, which is gaining popularity in the modern SoC design process, where CPU cores are accompanied by dedicated accelerators for computation-intensive applications. In such a context, the data-path logic is no longer "pure" data computation logic but rather an arbitrary sea of logic, where highly optimized computation-intensive arithmetic components are surrounded by a web of custom control logic. In this setting, the state-of-the-art SAT-sweeping framework at the Boolean level is no longer effective, as the specification and implementation under comparison may not have any internal structural similarities. LEC employs an open architecture, iterative compositional proof strategies, and a learning framework to locate, isolate and reverse-engineer the true bottlenecks in order to reason about their equivalence relation at a higher level. The effectiveness of the LEC procedures is demonstrated by benchmarking results on a set of realistic industrial problems.
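As a minimal illustration of what data-path equivalence checking must decide, the sketch below compares a specification against a shift-add implementation by exhaustive simulation of a miter over a small bit width; this brute-force approach is illustrative only, since (as the abstract notes) exhaustive or bit-level SAT approaches do not scale to wide data paths, which is exactly where LEC's higher-level reasoning comes in.

```python
# Minimal sketch of data-path equivalence checking by exhaustively
# simulating a miter (spec vs. implementation) over a small bit width.

WIDTH = 8
MASK = (1 << WIDTH) - 1

spec = lambda x: (x * 3) & MASK            # high-level specification
impl = lambda x: ((x << 1) + x) & MASK     # shift-add implementation

mismatches = [x for x in range(1 << WIDTH) if spec(x) != impl(x)]
print("equivalent" if not mismatches else f"counterexample: {mismatches[0]}")
```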
Speaker’s Bio:
Jiang graduated from the Computer Science Department at Jilin University, Changchun, China in 1992. In 1996, he entered the graduate program in Computer Science at Tsinghua University, Beijing, China, and from 1997 to 1999 he studied in the Computer Science Department at the University of Texas at Austin as a graduate student. It was during the years at UT-Austin that Jiang developed a focus on the formal verification of digital systems, which he has kept ever since. Between 2000 and 2014, Jiang worked on EDA formal verification tool development at Synopsys Inc. and later at Mentor Graphics Corporation. Since March 2014, Jiang has worked in the Apple silicon division on SoC design formal verification, currently focusing on verification methodology and tool development for Apple CPU design and verification. While working in industry, between 2008 and 2017, Jiang completed his PhD degree in the EECS Department at the University of California at Berkeley in the area of logic synthesis and verification. Jiang's dissertation is on reasoning about high-level constructs for hardware and software formal verification in the context of the high-level synthesis design flow.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
From 7,000X Model Compression to 100X Acceleration – Achieving Real-Time Execution of ALL DNNs on Mobile Devices
Location
Speaker:
Prof. Yanzhi Wang
Department of Electrical and Computer Engineering
Northeastern University
Abstract:
This presentation focuses on two recent contributions on model compression and acceleration of deep neural networks (DNNs). The first is a systematic, unified DNN model compression framework based on the powerful optimization tool ADMM (Alternating Direction Method of Multipliers), which applies to non-structured and various types of structured weight pruning as well as the weight quantization technique for DNNs. It achieves unprecedented model compression rates on representative DNNs, consistently outperforming competing methods. When weight pruning and quantization are combined, we achieve up to 6,635X weight storage reduction without accuracy loss, which is two orders of magnitude higher than prior methods. Our most recent work conducted a comprehensive comparison between non-structured and structured weight pruning with quantization in place, and suggests that non-structured weight pruning is not desirable on any hardware platform.
However, using mobile devices as an example, we show that existing model compression techniques, even assisted by ADMM, are still difficult to translate into notable acceleration or real-time execution of DNNs. Therefore, we need to go beyond the existing model compression schemes and develop novel schemes that are desirable for both algorithm and hardware. Compilers act as the bridge between algorithm and hardware, maximizing parallelism and hardware performance. We develop a combination of pattern pruning and connectivity pruning, which is desirable at the theory, algorithm, compiler, and hardware levels alike. We achieve 18.9ms inference time for the large-scale DNN VGG-16 on a smartphone without accuracy loss, which is 55X faster than TensorFlow-Lite. We can potentially enable 100X faster, real-time execution of all DNNs using the proposed framework.
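To illustrate the kind of sparsity these methods produce, here is a minimal sketch of plain magnitude-based weight pruning with NumPy; the ADMM framework in the talk instead alternates loss minimization with projection onto a sparsity constraint set, and pattern/connectivity pruning imposes structured masks rather than the unstructured mask shown here.

```python
# Minimal sketch of magnitude-based weight pruning: zero out the
# smallest-magnitude fraction of a layer's weights. (The ADMM-based
# and pattern/connectivity pruning in the talk are more sophisticated.)
import numpy as np

def prune_by_magnitude(weights, sparsity):
    k = int(weights.size * sparsity)          # number of weights to drop
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.randn(4, 4).astype(np.float32)  # toy layer
pruned = prune_by_magnitude(w, sparsity=0.75)
print("nonzeros kept:", int(np.count_nonzero(pruned)), "of", w.size)
```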
Speaker’s Bio:
Prof. Yanzhi Wang is currently an assistant professor in the Department of Electrical and Computer Engineering at Northeastern University. He has received his Ph.D. Degree in Computer Engineering from University of Southern California (USC) in 2014, and his B.S. Degree with Distinction in Electronic Engineering from Tsinghua University in 2009.
Prof. Wang's current research interests mainly focus on DNN model compression and energy-efficient implementation (on various platforms). His research has maintained the highest model compression rates on representative DNNs since 09/2018, and his work on AQFP superconducting-based DNN acceleration achieves by far the highest energy efficiency among all hardware devices. His work has been published broadly in top conference and journal venues (e.g., ASPLOS, ISCA, MICRO, HPCA, ISSCC, AAAI, ICML, CVPR, ICLR, IJCAI, ECCV, ICDM, ACM MM, DAC, ICCAD, FPGA, LCTES, CCS, VLDB, ICDCS, TComputer, TCAD, JSAC, TNNLS, Nature SP, etc.), and has been cited around 5,000 times. He has received four Best Paper Awards, another eight Best Paper Nominations and three Popular Paper Awards.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Facilitating Programming for Data Science via DSLs and Machine Learning
Location
Speaker:
Prof. Artur Andrzejak
University of Heidelberg
Germany
Abstract:
Data processing and analysis is becoming relevant for a growing number of domains and applications, ranging from natural science to industrial applications. Given the variety of scenarios and the need for flexibility, each project typically requires custom programming. This task can pose a challenge for domain specialists (typically non-developers), and frequently becomes a major cost and time factor in crafting a solution. The problem is aggravated when performance or scalability is important, due to the increased complexity of developing parallel/distributed software.
This talk focuses on selected solutions to these challenges. In particular, we will discuss NLDSL [1], a tool for the accelerated implementation of Domain Specific Languages (DSLs) for libraries following the “fluent interface” programming model. We showcase how this solution facilitates script development in the context of popular data science frameworks/libraries such as (Python) Pandas, scikit-learn, Apache Spark, and Matplotlib. The key elements are “no overhead” integration of DSL and Python code, DSL-level code recommendations, and support for adding ad-hoc DSL elements tailored to even small application domains.
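As a toy illustration of the fluent-interface idea (our own sketch; NLDSL's actual syntax and API may differ), a one-line DSL statement can be rewritten into a chain of Pandas calls:

import pandas as pd

# Hypothetical mini-DSL: "select <col> > <value> sort by <col>",
# rewritten into a fluent chain of Pandas calls.
def translate(df, dsl):
    tokens = dsl.split()
    assert tokens[0] == "select" and tokens[2] == ">" and tokens[4:6] == ["sort", "by"]
    col, value, sort_col = tokens[1], float(tokens[3]), tokens[6]
    return df[df[col] > value].sort_values(sort_col)

df = pd.DataFrame({"x": [3.0, -1.0, 2.0], "y": [1, 2, 3]})
print(translate(df, "select x > 0 sort by y"))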
We will also discuss solutions utilizing machine learning. One of them is code fragment recommenders. Here, frequently used code fragments (snippets) are extracted from Stack Overflow/GitHub, generified, and stored in a database. During development, they are recommended to users based on textual queries, the selection of relevant data, user interaction history, and other inputs.
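A minimal sketch of the retrieval step (our own simplification with a hypothetical snippet database; the actual recommender uses richer signals than text alone): index snippet descriptions with TF-IDF and return the nearest snippet for a textual query.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippet database: description -> generified code fragment.
snippets = {
    "read csv file into dataframe": "df = pd.read_csv(PATH)",
    "drop rows with missing values": "df = df.dropna()",
    "group by column and aggregate": "df.groupby(COL).agg(FUNC)",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(snippets))

def recommend(query):
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return list(snippets.values())[scores.argmax()]

print(recommend("load a csv"))  # -> "df = pd.read_csv(PATH)"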
Another line of work attempts to combine the approach to Python code completion via neural attention and pointer networks by Jian Li et al. [2] with probabilistic models of code [3]. Our study shows a promising improvement in accuracy.
If time permits, we will also take a quick look at alternative approaches to accelerated programming in the context of data analysis: natural language interfaces for code development (e.g., bots), and the emerging technologies for program synthesis.
[1] Artur Andrzejak, Kevin Kiefer, Diego Costa, Oliver Wenz, Agile Construction of Data Science DSLs (Tool Demo), ACM SIGPLAN Int. Conf. on Generative Programming: Concepts & Experiences (GPCE), 21-22 October 2019, Athens, Greece.
[2] Jian Li, Yue Wang, Michael R. Lyu, and Irwin King, Code completion with neural attention and pointer networks. In Proc. 27th International Joint Conference on Artificial Intelligence (IJCAI’18), 2018, AAAI Press.
[3] Pavol Bielik, Veselin Raychev, and Martin Vechev, PHOG: Probabilistic Model for Code. In Proc. 33rd International Conference on Machine Learning, 20–22 June 2016, New York, USA.
Speaker’s Bio:
Artur Andrzejak received a PhD degree in computer science from ETH Zurich in 2000 and a habilitation degree from FU Berlin in 2009. He was a postdoctoral researcher at HP Labs Palo Alto from 2001 to 2002 and a researcher at ZIB Berlin from 2003 to 2010. He led the CoreGRID Institute on System Architecture (2004 to 2006) and acted as Deputy Head of the Data Mining Department at I2R Singapore in 2010. Since 2010 he has been a W3 professor at the University of Heidelberg, where he leads the Parallel and Distributed Systems group. His research interests include scalable data analysis, reliability of complex software systems, and cloud computing. To find out more about his research group, visit http://pvs.ifi.uni-heidelberg.de/.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
How To Do High Quality Research And Write Acceptable Papers?
Location
Speaker:
Prof. Michael R. Lyu
Professor and Chairman
Computer Science & Engineering Department
The Chinese University of Hong Kong
Abstract:
Publish or perish: this is the pressure on most academic researchers. Even if your advisor(s) do not set a certain number of published papers as a graduation requirement, performing high-quality research is still essential. In this talk I will share my experience with the question all graduate students ask: “How to do high quality research and write acceptable papers?”
Speaker’s Bio:
Michael Rung-Tsong Lyu is a Professor and the Chairman of the Computer Science and Engineering Department at The Chinese University of Hong Kong. He has worked at the Jet Propulsion Laboratory, the University of Iowa, Bellcore, and Bell Laboratories. His research interests include software reliability engineering, distributed systems, fault-tolerant computing, service computing, multimedia information retrieval, and machine learning. He has published 500 refereed journal and conference papers in these areas, which have attracted 30,000 Google Scholar citations, with an h-index of 85. He has served as an Associate Editor of IEEE Transactions on Reliability, IEEE Transactions on Knowledge and Data Engineering (TKDE), the Journal of Information Science and Engineering, and IEEE Transactions on Services Computing. He is currently on the editorial boards of ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Access, and the Software Testing, Verification and Reliability Journal (STVR). He was elected an IEEE Fellow (2004), AAAS Fellow (2007), Croucher Senior Research Fellow (2008), IEEE Reliability Society Engineer of the Year (2010), and ACM Fellow (2015), and received the Overseas Outstanding Contribution Award from the China Computer Federation in 2018. Prof. Lyu received his B.Sc. from National Taiwan University, his M.Sc. from the University of California, Santa Barbara, and his Ph.D. in Computer Science from the University of California, Los Angeles.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Scrumptious Sandwich Problems: A Tasty Retrospective for After Lunch
Location
Speaker:
Prof. Martin Charles Golumbic
University of Haifa
Abstract:
Graph sandwich problems are a prototypical example of checking consistency when faced with only partial data. A sandwich problem for a graph with respect to a graph property $\Pi$ starts from a partially specified graph, i.e., only some of the edges and non-edges are given, and the question to be answered is: can this graph be completed to a graph that has property $\Pi$? The graph sandwich problem was investigated for a large number of graph families in a 1995 paper by Golumbic, Kaplan and Shamir, and over 200 subsequent papers by many researchers have been published since.
In some cases the problem is NP-complete, as for interval graphs, comparability graphs, chordal graphs, and others. In other cases the sandwich problem can be solved in polynomial time, as for threshold graphs, cographs, and split graphs. There are also interesting special cases of the sandwich problem, most notably the probe graph problem, where the unspecified edges are confined to be within a subset of the vertices. Similar sandwich problems can also be defined for hypergraphs, matrices, posets and Boolean functions, namely, completing partially specified structures such that the result satisfies a desirable property. In this talk, we will present a survey of results that we and others have obtained in this area during the past decade.
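As a toy illustration of what a sandwich instance asks (our own exponential brute force, feasible only for very small graphs, since the chordal case is NP-complete; the surveyed results concern real algorithms and hardness proofs): try every completion of the optional edges and test the property.

from itertools import combinations
import networkx as nx

def chordal_sandwich(vertices, forced_edges, optional_edges):
    """Return a chordal completion (forced edges plus some optional ones), or None."""
    for k in range(len(optional_edges) + 1):
        for extra in combinations(optional_edges, k):
            g = nx.Graph()
            g.add_nodes_from(vertices)
            g.add_edges_from(forced_edges)
            g.add_edges_from(extra)
            if nx.is_chordal(g):
                return g
    return None

# A 4-cycle is not chordal; one optional diagonal completes it to a chordal graph.
g = chordal_sandwich([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], [(1, 3)])
print(sorted(g.edges()))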
Speaker’s Bio:
Martin Charles Golumbic is Emeritus Professor of Computer Science and Founder of the Caesarea Edmond Benjamin de Rothschild Institute for Interdisciplinary Applications of Computer Science at the University of Haifa. He is the founding Editor-in-Chief of the journal “Annals of Mathematics and Artificial Intelligence” and is or has been a member of the editorial boards of several other journals including “Discrete Applied Mathematics”, “Constraints” and “AI Communications”. His current area of research is in combinatorial mathematics interacting with real world problems in computer science and artificial intelligence.
Professor Golumbic received his Ph.D. in mathematics from Columbia University in 1975 under the direction of Samuel Eilenberg. He has held positions at the Courant Institute of Mathematical Sciences of New York University, Bell Telephone Laboratories, the IBM Israel Scientific Center and Bar-Ilan University. He has also had visiting appointments at the Université de Paris, the Weizmann Institute of Science, Ecole Polytechnique Fédérale de Lausanne, Universidade Federal do Rio de Janeiro, Rutgers University, Columbia University, Hebrew University, IIT Kharagpur and Tsinghua University.
He is the author of the book “Algorithmic Graph Theory and Perfect Graphs” and coauthor of the book “Tolerance Graphs”. He has written many research articles in the areas of combinatorial mathematics, algorithmic analysis, expert systems, artificial intelligence, and programming languages, and has been a guest editor of special issues of several journals. He is the editor of the books “Advances in Artificial Intelligence, Natural Language and Knowledge-based Systems”, and “Graph Theory, Combinatorics and Algorithms: Interdisciplinary Applications”. His most recent book is “Fighting Terror Online: The Convergence of Security, Technology, and the Law”, published by Springer-Verlag.
Prof. Golumbic was elected a Foundation Fellow of the Institute of Combinatorics and its Applications in 1995, and has been a Fellow of the European Artificial Intelligence society ECCAI since 2005. He was elected a member of the Academia Europaea, honoris causa, in 2013. Martin Golumbic has been the chairman of over fifty national and international symposia. He is a member of the Phi Beta Kappa, Pi Mu Epsilon, Phi Kappa Phi, and Phi Eta Sigma honor societies. He is married, the father of four bilingual, married daughters, and has seven granddaughters and five grandsons.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Bitcoin, blockchains and DLT applications
Location
Speaker:
Prof. Stefano Bistarelli
Department of Mathematics and Informatics
University of Perugia
Italy
Abstract:
Nowadays there are more than fifteen hundred cryptocurrencies and (public) blockchains, with an overall capitalization of more than 300 billion USD. The most famous cryptocurrency (and blockchain) is Bitcoin, described in a white paper written under the pseudonym “Satoshi Nakamoto”. The invention is an open-source, peer-to-peer digital currency (purely electronic, with no physical manifestation). Money transactions do not require a third-party intermediary, such as a credit card issuer. The Bitcoin network is completely decentralised, with all parts of transactions performed by the users of the system. A complete record of every transaction, and every Bitcoin user’s encrypted identity, is maintained on a public ledger. The seminar will introduce Bitcoin and blockchains, with a deep view of transactions and some insights into a specific application (e-voting).
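A minimal sketch of the core ledger idea (a toy hash chain, not Bitcoin's actual data structures, which add Merkle trees, proof-of-work, and digital signatures): each block commits to its predecessor's hash, so tampering with any transaction invalidates every later block.

import hashlib
import json

def block_hash(block):
    """Hash the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                                  # True
chain[0]["transactions"][0]["amount"] = 500
print(verify(chain))                                  # False: later hashes no longer match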
Speaker’s Bio:
Stefano Bistarelli has been Associate Professor of Computer Science at the Department of Mathematics and Informatics at the University of Perugia (Italy) since November 2008. Previously he was Associate Professor at the Department of Sciences at the University “G. d’Annunzio” in Chieti-Pescara from September 2005, having been an assistant professor in the same department since September 2002. He has also been a research associate of the Institute of Informatics and Telematics (IIT) at the CNR (Italian National Research Council) in Pisa since 2002. He obtained his Ph.D. in Computer Science in 2001; his thesis was awarded best Italian thesis in Theoretical Computer Science and in Artificial Intelligence (by the Italian Chapter of the European Association for Theoretical Computer Science (EATCS) and by the Italian Association for Artificial Intelligence (AI*IA), respectively). In the same year he was nominated by IIT-CNR for the Cor Baayen European award and selected as Italy’s candidate for the award. He held postdoctoral positions at the University of Padua and at IIT-CNR in Pisa, and was a visiting researcher at the Chinese University of Hong Kong and at UCC in Cork. Collaborations, invited talks and visits have also involved other research centres (INRIA, Paris; IC-Parc, London; Department of Information Systems and Languages, Barcelona; ILLC, Amsterdam; Computer Science Institute, LMU Munich; EPFL, Lausanne; SRI, San Francisco). He has organized and served on the PC of several workshops in the constraints and security fields; he chaired the Constraint track at FLAIRS and currently chairs the same track at the ACM SAC symposium. His research interests concern (soft) constraint programming and solving; he also works on computer security and, recently, on QoS. On these topics he has published more than 100 articles and a book, and edited a special journal issue on soft constraints. He is also on the editorial board of the electronic version of the Open AI Journal (Bentham Open).
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Integrating Reasoning on Combinatorial Optimisation Problems into Machine Learning
Location
Speaker:
Dr. Emir Demirovic
School of Computing and Information Systems
University of Melbourne
Australia
Abstract:
We study the predict+optimise problem, in which machine learning and combinatorial optimisation must interact to achieve a common goal. These problems are important when optimisation must be performed on input parameters that are not fully observed but must instead be estimated using machine learning. Our aim is to develop machine learning algorithms that take into account the underlying combinatorial optimisation problem. While a plethora of sophisticated algorithms is available in machine learning and in optimisation respectively, an established methodology for solving problems that require both remains an open question. In this talk, we introduce the problem, discuss its difficulties, and present our progress based on our papers from CPAIOR’19 and IJCAI’19.
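A minimal sketch of the setting (our own toy, not the papers' algorithms): predict item values, optimise a knapsack with the predictions, and measure the regret, i.e., the true value lost by optimising on estimates instead of the unobservable true values.

from itertools import combinations

def best_knapsack(values, weights, capacity):
    """Exhaustive 0/1 knapsack: fine for toy sizes."""
    items = range(len(values))
    best = max((s for k in range(len(values) + 1) for s in combinations(items, k)
                if sum(weights[i] for i in s) <= capacity),
               key=lambda s: sum(values[i] for i in s))
    return set(best)

true_vals = [10.0, 7.0, 4.0]
pred_vals = [3.0, 8.0, 5.0]        # imperfect ML estimates
weights, capacity = [5, 4, 3], 8

chosen = best_knapsack(pred_vals, weights, capacity)   # optimised on predictions
optimal = best_knapsack(true_vals, weights, capacity)  # hindsight optimum
regret = sum(true_vals[i] for i in optimal) - sum(true_vals[i] for i in chosen)
print(chosen, optimal, regret)
# Predict+optimise trains the predictor to reduce this regret, not squared error.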
Speaker’s Bio:
Dr. Emir Demirovic is an associate lecturer and postdoctoral researcher (research fellow) at the University of Melbourne, Australia. He received his PhD from the Vienna University of Technology (TU Wien) and worked for seven months at MCP, a production planning and scheduling company. Dr. Demirovic’s primary research interest lies in solving complex real-world problems through combinatorial optimisation and combinatorial machine learning, which combines optimisation and machine learning. His work includes both developing general-purpose algorithms and applications. One example problem is designing algorithms that generate high-quality timetables for high schools based on the curriculum, teacher availability, and pedagogical requirements. Another is optimising a production plan while having only an estimate of costs rather than precise numbers.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Machine learning with problematic datasets in diverse applications
Location
Speaker:
Prof. Chris Willcocks
Durham University
UK
Abstract:
Machine learning scientists often ask “What is the distribution from which the dataset was generated?” and subsequently “How do we learn to transform observations from what we are given to what the task requires?”. This seminar highlights successful research where our group took explicit steps to deal with problematic datasets in several different applications: building robust medical diagnosis systems from a very limited amount of poorly labeled data, hiding secret messages in plain sight in tweets without changing the underlying message, capturing plausible interpolations and successful dockings of proteins despite significant dataset bias, and recent advances in meta-learning to tackle the evolving task distribution in the ongoing anti-counterfeiting arms race.
Speaker’s Bio:
Chris G. Willcocks is a recently appointed Assistant Professor in the Innovative Computing Group at the Department of Computer Science at Durham University in the UK, where he currently teaches the year 3 Machine Learning and year 2 Cyber Security sub-modules. Before 2016, he worked on industrial machine learning projects for P&G, Dyson, Unilever, and the British Government in the areas of Computational Biology, Security, Anti-Counterfeiting and Medical Image Computing. In 2016, he founded the Durham University research spinout company Intogral Limited, where he successfully led research and development commercialisation through to Series A investment, deploying ML models used by large multinationals in diverse markets in Medicine, Pharmaceutics, and Security. Since returning to academia, he has recently published in top journals in Pattern Analysis, Medical Imaging, and Information Security, where his theoretical interests are in Variational Bayesian methods, Riemannian Geometry, Level-set methods, and Meta Learning.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Abusing Native App-like Features in Web Applications
Location
Speaker:
Prof. Sooel Son
Assistant Professor, KAIST School of Computing (SoC) and Graduate School of Information Security (GSIS)
Abstract:
Progressive Web Apps (PWAs) are a new generation of Web applications designed to provide native app-like browsing experiences even when a browser is offline. PWAs make full use of new HTML5 features, including push notifications, caches, and service workers, to provide low-latency and rich Web browsing experiences. We conduct the first systematic study of the security and privacy aspects unique to PWAs. We identify security flaws in major browsers, as well as design flaws in popular third-party push services, that exacerbate the phishing risk. We introduce a new side-channel attack that infers the victim’s history of visited PWAs; the attack exploits PWAs’ cache-based offline browsing feature. We also demonstrate a cryptocurrency-mining attack that abuses service workers.
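As a toy illustration of the cache-based inference principle (a self-contained simulation, not the actual in-browser attack, which probes PWA caches from a malicious page): a load that completes suspiciously fast suggests the resource was already cached, i.e., the site was visited before.

import time

CACHE = {}  # stands in for the browser's PWA cache

def load(url):
    """Simulated fetch: cached resources return fast, uncached ones slowly."""
    if url not in CACHE:
        time.sleep(0.05)          # simulated network round trip
        CACHE[url] = b"resource"
    return CACHE[url]

def probably_visited(url, threshold=0.01):
    start = time.perf_counter()
    load(url)
    return time.perf_counter() - start < threshold

load("https://pwa.example.com/app.js")                        # victim visits the PWA
print(probably_visited("https://pwa.example.com/app.js"))     # True: cache hit
print(probably_visited("https://other.example.com/app.js"))   # False: cold load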
Speaker’s Bio:
Sooel Son is an assistant professor at KAIST School of Computing (SoC) and Graduate School of Information Security (GSIS). He received his Ph.D. in Computer Science from The University of Texas at Austin. Before joining KAIST, he worked at Google on building frameworks that identify invasive Android applications. His research focuses on Web security and privacy problems. He is interested in analyzing Web applications, finding Web vulnerabilities, and implementing new systems to find such vulnerabilities.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
How Physical Synthesis Flows
Location
Speaker:
Dr. Patrick Groeneveld
Stanford University
Abstract:
In this talk we will analyze how form follows function in physical design. By analyzing recent mobile chips and chips for self-driving cars, we can reason about the structure of advanced billion-transistor systems. The strengths and weaknesses of hierarchical abstractions will be matched with the sweet spots of the core physical synthesis algorithms. These algorithms are chained into a physical design flow that consists of hundreds of steps, each of which may have unexpected interactions. Trading off multiple conflicting objectives such as area, speed, and power is sometimes more an art than a science. The talk will present the underlying principles that eventually lead to design closure.
Speaker’s Bio:
Before working at Cadence and Synopsys, Patrick Groeneveld was Chief Technologist at Magma Design Automation, where he was part of the team that developed a groundbreaking RTL-to-GDS2 synthesis product. Patrick was also a Full Professor of Electrical Engineering at Eindhoven University. He currently teaches in the EE department at Stanford University and serves as finance chair on the Executive Committee of the Design Automation Conference. Patrick received his MSc and PhD degrees from Delft University of Technology in the Netherlands. In his spare time, he enjoys flying airplanes, running, electric vehicles, tinkering, and reading useless information.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
From Automated Privacy Leak Analysis to Privacy Leak Prevention for Mobile Apps
Location
Speaker:
Dr. Sencun Zhu
Associate Professor
Pennsylvania State University
Abstract:
With the enormous popularity of smartphones, millions of mobile apps have been developed to provide rich functionality for users by accessing certain personal data, leading to great privacy concerns. To address this problem, many approaches have been proposed to detect privacy disclosures in mobile apps, but they largely fail to determine automatically whether a privacy disclosure is necessary for the app’s functionality. In this talk, we will introduce LeakDoctor, an analysis system that integrates dynamic response differential analysis with static response taint analysis to automatically diagnose privacy leaks by judging whether a privacy disclosure from an app is necessary for some functionality of the app. Furthermore, we will present the design, implementation, and evaluation of a context-aware real-time mediation system that bridges the semantic gap between foreground GUI interaction and background access, to keep mobile apps from leaking users’ private information.
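A minimal sketch of the response-differential idea (hypothetical helper names throughout; LeakDoctor itself combines this with static analysis): replay the same app request with the real private value and with a mock value, and if the server's responses are equivalent, the disclosure was likely unnecessary for the observed functionality.

# Hypothetical helper for illustration: send_app_request() would replay the
# app's network request, substituting the given value for the private field.
def send_app_request(private_value):
    ...  # returns the server response body (assumption, not a real API)

def disclosure_seems_necessary(real_value, mock_value, normalize=lambda r: r):
    """Differential test: does the response functionally depend on the value?"""
    real_response = send_app_request(real_value)
    mock_response = send_app_request(mock_value)
    # Normalize away nondeterminism (timestamps, session ids) before comparing.
    return normalize(real_response) != normalize(mock_response)

# Example: replace the real device IMEI with a random one; an identical
# response suggests the app did not need the IMEI for this feature.
# necessary = disclosure_seems_necessary("867530901234567", "000000000000000")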
Speaker’s Bio:
Dr. Sencun Zhu is an associate professor in the Department of Computer Science and Engineering at The Pennsylvania State University (PSU). He received the B.S. degree in precision instruments from Tsinghua University, the M.S. degree in signal processing from the University of Science and Technology of China, Graduate School at Beijing, and the Ph.D. degree in information technology from George Mason University, in 1996, 1999, and 2004, respectively. His research interests include wireless and mobile security, software and network security, fraud detection, and user online safety and privacy. His research has been funded by the National Science Foundation, the National Security Agency, and the Army Research Office/Lab. He received an NSF CAREER Award in 2007 and a Google Faculty Research Award in 2013. More details of his research can be found at http://www.cse.psu.edu/~sxz16/.
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Building Error-Resilient Machine Learning Systems for Safety-Critical Applications
Location
Speaker:
Prof. Karthik Pattabiraman
Associate Professor
ECE Department and CS Department (affiliation)
University of British Columbia (UBC)
Abstract:
Machine learning (ML) is increasingly adopted in safety-critical systems such as autonomous vehicles (AVs) and home robotics. In these domains, reliability and safety are important considerations, and hence it is critical to ensure the resilience of ML systems to faults and errors. At the same time, soft errors are increasing in commodity computer systems due to the effects of technology scaling and manufacturing variations in hardware design. Further, traditional solutions for hardware faults such as Triple Modular Redundancy are prohibitively expensive in terms of energy consumption, and hence not practical in this domain. Therefore, there is a compelling need to ensure the resilience of ML applications to soft errors on commodity hardware platforms. In this talk, I will describe two projects from my group at UBC on ensuring the error resilience of ML applications deployed in the AV domain. I will also discuss some of the challenges in this area and the work we are doing to address them.
This is joint work with my students, Nvidia Research, and Los Alamos National Labs.
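As a minimal sketch of how such resilience is commonly assessed (our own illustration of soft-error fault injection, not the group's tooling): flip a random bit in a model's float32 weights and check whether the output, e.g., a toy classification decision, changes.

import numpy as np

def flip_random_bit(weights, rng):
    """Inject a single-bit soft error into a float32 weight tensor."""
    flat = weights.astype(np.float32).ravel().copy()
    idx = rng.integers(flat.size)
    bit = np.uint32(1 << int(rng.integers(32)))
    flat.view(np.uint32)[idx] ^= bit        # flip one bit of one weight
    return flat.reshape(weights.shape)

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 3)).astype(np.float32)  # stand-in for a layer
x = rng.standard_normal(4).astype(np.float32)

clean = np.argmax(x @ w)                    # e.g., a toy 3-way decision
faulty = np.argmax(x @ flip_random_bit(w, rng))
print("silent data corruption!" if faulty != clean else "masked fault")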
Speaker’s Bio:
Karthik Pattabiraman received his M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign (UIUC) in 2004 and 2009, respectively. After a post-doctoral stint at Microsoft Research (MSR), Karthik joined the University of British Columbia (UBC) in 2010, where he is now an associate professor of electrical and computer engineering. Karthik’s research interests are in building error-resilient software systems, and in software engineering and security. Karthik has won distinguished paper/runner-up awards at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2018, the IEEE International Conference on Software Testing (ICST), 2013, and the IEEE/ACM International Conference on Software Engineering (ICSE), 2014. He is a recipient of the distinguished alumni early career award from UIUC’s Computer Science department (2018), the NSERC Discovery Accelerator Supplement (DAS) award (2015), the 2018 Killam Faculty Research Prize, and the 2016 Killam Faculty Research Fellowship at UBC. He also won the William Carter award in 2008 for the best PhD thesis in the area of fault-tolerant computing. Karthik is a senior member of the IEEE, and the vice-chair of the IFIP Working Group on Dependable Computing and Fault-Tolerance (10.4). Find out more about him at: http://blogs.ubc.ca/karthik
Enquiries: Ms. Shirley Lau at tel. 3943 8439
Declarative Programming in Software-defined Networks: Past, Present, and the Road Ahead
Location
Speaker:
Dr. Loo Boon Thau
Professor, Computer and Information Science Department
University of Pennsylvania
Abstract:
Declarative networking is a technology that has transformed the way software-defined networking programs are written and deployed. Instead of writing low-level code, network operators write high-level specifications that can be verified and compiled into actual implementations. This talk describes 15 years of research in declarative networking, tracing its roots as a domain-specific language, its role in the verification and debugging of networks, and its commercial use as a declarative network analytics engine. The talk concludes with a peek into the future of declarative network programming, in the areas of example-guided network synthesis and infrastructure-aware declarative query processing.
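As a toy illustration of the declarative style (a Python fixpoint standing in for an actual NDlog/Datalog engine): network reachability can be specified as two rules, reachable(X,Y) :- link(X,Y) and reachable(X,Z) :- link(X,Y), reachable(Y,Z), and evaluated bottom-up until no new facts appear.

# Base facts: directed links in the network.
links = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(links):
    """Naive bottom-up evaluation of the two reachability rules."""
    facts = set(links)                      # rule 1: every link is reachable
    while True:
        new = {(x, z) for (x, y) in links   # rule 2: extend paths by one hop
                      for (y2, z) in facts if y == y2} - facts
        if not new:
            return facts                    # fixpoint: nothing new derivable
        facts |= new

print(sorted(reachable(links)))  # includes ("a", "d"), derived via b and c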
Speaker’s Bio:
Boon Thau Loo is a Professor in the Computer and Information Science (CIS) department at the University of Pennsylvania, with a secondary appointment in the Electrical and Systems Engineering (ESE) department. He is the Associate Dean of the Master’s and Professional Programs, overseeing all masters programs at the School of Engineering and Applied Science, and is currently the interim director of the Distributed Systems Laboratory (DSL), an interdisciplinary systems research lab bringing together researchers in networking, distributed systems, and security. He received his Ph.D. degree in Computer Science from the University of California at Berkeley in 2006. Prior to his Ph.D., he received his M.S. degree from Stanford University in 2000, and his B.S. degree with highest honors from the University of California, Berkeley in 1999. His research focuses on distributed data management systems, Internet-scale query processing, and the application of data-centric techniques and formal methods to the design, analysis and implementation of networked systems. He was awarded the 2006 David J. Sakrison Memorial Prize for the most outstanding dissertation research in the Department of EECS at the University of California, Berkeley, and the 2007 ACM SIGMOD Dissertation Award. He is a recipient of the NSF CAREER award (2009), the Air Force Office of Scientific Research (AFOSR) Young Investigator Award (2012), and Penn’s Emerging Inventor of the Year award (2018). He has published 100+ peer-reviewed publications and has supervised twelve Ph.D. dissertations. His graduated Ph.D. students include 3 tenure-track faculty members and winners of 4 dissertation awards.
In addition to his academic work, he actively participates in entrepreneurial activities involving technology transfer. He is the Chief Scientist at Termaxia, a software-defined storage startup based in Philadelphia that he co-founded in 2015. Termaxia offers low-power, high-performance software-defined storage solutions targeting the exabyte-scale storage market, with customers in the US, China, and Southeast Asia. Prior to Termaxia, he co-founded Gencore Systems (Netsil) in 2014, a cloud performance analytics company that spun out of his research team at Penn, commercializing his research on the Scalanytics declarative analytics platform; the company was acquired by Nutanix Inc. in 2018. He has also published several papers with industry partners (e.g., AT&T, HP Labs, Intel, LogicBlox, Microsoft), applying research to real-world systems and resulting in actual production deployments and patents.
Enquiries: Ms. Shirley Lau at tel. 3943 8439