I am Madhawa
My work explores how Computer Vision and Spatial Computing can shape the next generation of interactive human-AI interfaces.
This is my personal webpage.
Hi, I’m Madhawa (/'mæ.də.wə/), a Research Engineer at CSIRO working in applied research. My work involves using imaging techniques, computer vision, and large language models to develop interactive systems for both industry and research applications. This includes investigating spatial computing interfaces across platforms such as iOS, the Web, and head-mounted displays like Microsoft HoloLens, Magic Leap, Apple Vision Pro, and Meta Quest.
In my research, I develop rapid prototypes and conduct user studies to explore how human interaction can be shaped through spatial computing interfaces across these diverse platforms. With the rise of vision-language models (VLMs), combined with spatial computing capabilities, we could bring semantic world understanding to AI systems. This could enable intelligent machines to coexist with humans and support new human-AI interaction modalities.
For example, in recent projects I have been exploring the development of interactive and explainable interfaces for human-robot interaction. I investigate how to design interfaces that allow non-roboticists to engage meaningfully with robotic systems. A key use case involves smart laboratories, where scientists from various disciplines can collaborate with AI-powered robots to accelerate scientific discovery.
That is my work in a nutshell. Apart from work, I like to bake (check @myfoodepisodes), volunteer, play the ukulele, and go for long-distance runs. I also enjoy blogging (on Medium).
AR/VR/XR
Applied AI - Computer Vision
Human-Computer Interaction
User Experience Research
Research areas - Augmented Reality (AR), Virtual Reality (VR), Gesture Interactions, Human-Computer Interaction, Computer Vision, Computer Graphics
Deliver lectures and/or tutorials/labs, and conduct classes to an appropriate standard of teaching and professionalism.
Courses taught: Human-Computer Interaction and Design (COMP3900/6390), Data Mining (COMP3425/8410), Programming for Scientists (COMP1730/6730 and COMP7230), Software Engineering (COMP2120/6120), Web Development and Design (COMP1710/6780), Game Development (COMP3540/6540)
As a senior software engineer, I was involved in software engineering consultations, product feature design, and training junior engineers. Consultations included on-site and off-site customer engagements, e.g. the Arizona Department of Administration (ADOA), State of Arizona, USA, and Transport for London (TfL), UK.
I was a member of the Internet of Things (IoT) and Enterprise Mobility Management (EMM) research engineering team and later joined the client engagement (services) team. As a software engineer, I worked on developing and integrating features into the WSO2 product suite and providing customer support as part of my role in the services team. Some features that I developed include Android for Work and Apple Device Enrolment Program (DEP) support in WSO2 EMM. As a services team member, I served in off-site engagements with clients in Japan and the USA.
IEEE SMC 2021: IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2021.
ACM ICMI 2020: 22nd International Conference on Multimodal Interaction 2020.
ISWC 2020: 19th International Semantic Web Conference 2020.
SAW 2019: 1st International Workshop on Sensors and Actuators on the Web 2019.
ICTer 2016: 16th International Conference on Advances in ICT for Emerging Regions (ICTer) 2016.