Moving from PMPC to the Public Money, Public Transparent Digital Service (PMPTDS): what went wrong and how can we improve it?

PMPC (Public Money, Public Code) is a concept promoted by the EU since 2017. It is good for promoting free/open-source culture, yet it still leaves some pitfalls that might be used to undermine the original ideal if not executed carefully. This talk gives a brief introduction to PMPC, followed by a review of failed cases in Taiwan over the past 20 years and what is still going wrong right now.

Kuo-Chieh Ting

Kuo-Chieh holds dual graduate degrees in Computer Science and in Art and Technology. He is a veteran of the FOSS and Open Data movements. He is among the first generation of Chinese users of LibreOffice and Mageia Linux (named StarOffice/OpenOffice and Mandriva back then, respectively). He has been actively involved in Chinese-language l10n community events.

Practical Tactics for Optimizing JVM Docker Images for Enhanced Efficiency, Performance, and Better Developer Experience

While writing a Dockerfile is easier than ever (AI can help you generate a sample in a second!), the resulting images might not be good enough. In this talk, we will present a comprehensive guide to optimizing Docker images for JVM applications. Learn a series of tactics to reduce image size, speed up build time, and enhance overall performance and developer experience. We will illustrate the performance improvements achieved by applying these tactics to commonly used web frameworks. Additionally, we will explore the use of Gradle plugins to streamline the optimization process.
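As a taste of the kind of tactic this talk covers, one common starting point is a multi-stage build: the heavyweight JDK and build tool stay in the build stage, while the final image carries only a slim JRE and the application JAR. The base image tags, Gradle task, and paths below are illustrative assumptions, not details from the talk:

```dockerfile
# Build stage: full JDK plus Gradle (tags and paths are illustrative)
FROM eclipse-temurin:21-jdk AS build
WORKDIR /app
COPY . .
RUN ./gradlew --no-daemon bootJar

# Runtime stage: slim JRE only, so the final image excludes the JDK,
# Gradle caches, and source tree entirely
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=build /app/build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the runtime stage starts from a much smaller base and copies in a single artifact, the shipped image is typically hundreds of megabytes smaller than a naive single-stage build.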

Refining Data Structure & Algorithm Implementations in the Linux Kernel for Improved Performance

The library code of the Linux kernel contains numerous fundamental data structures and algorithms, mostly located in the lib/ directory. These have been refined by many skilled developers, resulting in highly efficient implementations. However, further optimization possibilities remain. This talk introduces how data structures and algorithms used in the Linux kernel are implemented, along with recent optimization contributions made to the Linux kernel.

Kuan-Wei Chiu

In his academic journey, Kuan-Wei Chiu has dedicated himself to contributing to the Linux kernel and actively participating in the development of the RISC-V simulator rv32emu. Currently pursuing a master's degree in computer science, he focuses on enhancing both the functionality and performance of these critical software components.

Learn Supply Chain Attacks Through XZ Utils Backdoor

On March 29, 2024, Andres Freund, a Microsoft software developer, emailed Openwall to inform the community of the discovery of an SSH backdoor in XZ Utils 5.6.0 and 5.6.1 (CVE-2024-3094). XZ Utils is a suite of open-source software that provides lossless data compression. The tool is very widely distributed, as it comes installed by default on most Linux distributions and macOS systems.

Building Scalable and Efficient AI Platforms on Kubernetes and GKE

Have you ever wondered how large organizations and high tech unicorns are able to build platforms on Kubernetes to run all kinds of workloads - web, stateless, stateful, batch, and even AI?

Kubernetes’ strengths in dynamic resource scheduling, automated orchestration, and a vibrant ecosystem of frameworks make it ideal for building AI/ML platforms. This becomes highly scalable when combined with the power of GKE hosted in the cloud and ephemeral GPUs and TPUs.

Lau Mei Yan, Mandy

Mandy is determined to become a cloud engineer and is currently a Year 1 student of the Higher Diploma in Cloud and Data Centre Administration. She enjoys learning new technology skills and is currently learning Terraform.

Building your own Jarvis? Exploring LLM integration options in Home Assistant

From ChatGPT, Llama, and Gemma to Jetson, Amanda Lam from Women Techmakers Hong Kong will discuss the current options for integrating LLMs into Home Assistant, what they can and will do for you, their pros and cons, and the future development in this area. If you want your smart home to become even smarter, don't miss this sequel to the two previous HKOSCon sessions on Home Assistant!

Introducing Advanced Techniques for Enhancing Large Language Models (LLMs)

In the rapidly evolving field of artificial intelligence, mastering Large Language Models (LLMs) requires a blend of optimization, graph technology, and Retrieval-Augmented Generation (RAG) techniques. This talk introduces key strategies for maximizing LLM performance, integrating graph databases, and advancing RAG methodologies. We’ll delve into topics such as optimization flows and prompt engineering while also examining the synergy between LLMs and graph technology through knowledge graphs and vector searches.