Java Project Loom

Java Project Loom is an OpenJDK effort to improve support for concurrent programming in Java. In this blog, we walk through a few examples of how Project Loom can be used in Java programs.


Project Loom is an OpenJDK project that aims to improve the performance and scalability of Java applications by introducing lightweight threads, initially known as "fibers" and ultimately shipped as virtual threads.
Fibers are similar to threads, but they are managed by the Java Virtual Machine (JVM) rather than the operating system, which allows for more efficient use of system resources and better support for concurrent programming. The goal of Project Loom is to make it easier for developers to write concurrent and high-performance applications in Java, without having to deal with the complexity of traditional threading models.
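On JDK 21 and later, where this work shipped as a final feature, the smallest possible demonstration is starting a virtual thread directly from the standard Thread API. A minimal sketch:

```java
public class VirtualThreadHello {
    public static void main(String[] args) throws InterruptedException {
        // startVirtualThread creates and starts a JVM-managed lightweight thread
        Thread vt = Thread.startVirtualThread(() ->
                System.out.println("Hello from a virtual thread!"));
        // Join it exactly as you would a platform thread
        vt.join();
    }
}
```

Creating such a thread costs roughly a small object allocation rather than an operating-system thread, which is what makes very large numbers of them practical.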

The Fiber Class

In the context of Project Loom, a fiber is a type of lightweight thread that is managed by the Java Virtual Machine (JVM) rather than the operating system. Fibers are similar to threads in that they allow a program to execute multiple tasks concurrently, but they are cheaper to create and easier to use because the JVM manages them.

In early Project Loom prototypes, fibers were created and managed through a dedicated Fiber class, which provided methods for creating, scheduling, and synchronizing them; each fiber was associated with a Runnable or Callable object defining the task it would execute. In the final design, this functionality was folded into the standard Thread API: virtual threads are created through Thread.ofVirtual() or Executors.newVirtualThreadPerTaskExecutor(), and are started, joined, and interrupted like any other thread.

Here is an example of how you might use such a lightweight thread in a Java program; on JDK 21 and later, fibers surface as virtual threads on the standard Thread API:

public class FibersExample {

    public static void main(String[] args) {
        // Create a new virtual thread (the shipped form of a fiber) for a simple task
        Thread fiber = Thread.ofVirtual().unstarted(() -> {
            // Print a message
            System.out.println("Hello from a fiber!");
        });

        // Start the fiber
        fiber.start();

        // Wait for the fiber to complete
        try {
            fiber.join();
        } catch (InterruptedException e) {
            // Restore the interrupt status
            Thread.currentThread().interrupt();
        }
    }
}

This program creates a new lightweight thread, then starts and joins it to run the task it is associated with.
The thread simply prints a message to the console, but in a real program the task would likely be more complex and involve many such threads running concurrently.
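That fan-out can be sketched with the virtual-thread executor that shipped in JDK 21; the task body below (a one-second-scale sleep standing in for blocking work) is purely illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ManyTasks {
    public static void main(String[] args) {
        // Each submitted task gets its own virtual thread
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int taskId = i;
                executor.submit(() -> {
                    // Blocking in a virtual thread is cheap: it parks, freeing its carrier
                    Thread.sleep(100);
                    return taskId;
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}
```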

Structured Concurrency

The lightweight threads introduced by Project Loom pair naturally with structured concurrency. Structured concurrency is a programming paradigm that focuses on the structure and organization of concurrent code, with the aim of making concurrent programs easier to write and reason about. It emphasizes explicit control structures and coordination mechanisms for managing concurrent execution, as opposed to the traditional approach of low-level thread synchronization primitives.

Virtual threads, the shipped form of fibers, are managed by the Java Virtual Machine (JVM) rather than the operating system. They are created, scheduled, and coordinated through the familiar Thread and ExecutorService APIs, which also cover starting, joining, interrupting, and awaiting them.

Here is an example of how you might coordinate lightweight threads in a Java program with Project Loom:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicBoolean;

public class StructuredConcurrencyExample {

    public static void main(String[] args) throws Exception {
        // Create an executor that runs each submitted task in its own virtual thread
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {

            // Run a simple task and wait for it to complete
            Future<?> hello = executor.submit(() ->
                    System.out.println("Hello from a virtual thread!"));
            hello.get();

            // A shared flag used to stop the counter task cooperatively
            AtomicBoolean running = new AtomicBoolean(true);

            // Run a more complex task: count once per second until asked to stop
            Future<Integer> counter = executor.submit(() -> {
                int count = 0;
                while (running.get()) {
                    // Increment the counter
                    count++;
                    // Sleep for 1 second; a parked virtual thread frees its carrier thread
                    Thread.sleep(1000);
                }
                // Return the final count
                return count;
            });

            // Let the counter run for 5 seconds
            Thread.sleep(5000);

            // Ask the counter to stop, then wait for its result
            running.set(false);
            int finalCount = counter.get();
            System.out.println("Final count: " + finalCount);
        } // close() waits for any remaining tasks before shutting down
    }
}

In this example, the program runs two tasks on lightweight threads. The first simply prints a message, while the second is a longer-lived counting task whose lifetime is controlled cooperatively from the main thread. This demonstrates how lightweight threads let concurrent code be written in a more structured and organized way.

Using these lightweight threads, developers can write concurrent programs without having to deal with the complexity of traditional thread synchronization mechanisms. This can make concurrent code easier to write and reason about, and can improve the performance and scalability of Java applications.
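For comparison, the structured-concurrency API that eventually emerged from this work is StructuredTaskScope, a preview API (JDK 21+, compiled and run with --enable-preview). A sketch, in which fetchUser and fetchOrder are hypothetical helpers standing in for real I/O calls:

```java
import java.util.concurrent.StructuredTaskScope;

public class ScopeExample {
    public static void main(String[] args) throws Exception {
        // ShutdownOnFailure cancels the remaining subtasks if any subtask fails
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // fork() runs each subtask in its own virtual thread
            var user  = scope.fork(ScopeExample::fetchUser);
            var order = scope.fork(ScopeExample::fetchOrder);
            scope.join()            // wait for both subtasks to complete
                 .throwIfFailed();  // propagate the first failure, if any
            System.out.println(user.get() + " placed order " + order.get());
        } // leaving the scope guarantees no subtask outlives it
    }

    // Hypothetical stand-ins for real I/O calls
    static String fetchUser()   { return "alice"; }
    static Integer fetchOrder() { return 42; }
}
```

The scope guarantees that both subtasks finish (or are cancelled) before the block exits, which is exactly the structural property described above.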

Performance

In Java, a platform thread is a thread that is managed by the Java virtual machine (JVM) and corresponds to a native thread on the operating system. Platform threads are typically used in applications that make use of traditional concurrency mechanisms such as locks and atomic variables.

On the other hand, a virtual thread is a thread that is scheduled entirely by the JVM and does not correspond one-to-one to a native thread on the operating system. Virtual threads allow for greater flexibility and scalability than platform threads, as the JVM can schedule them in a more efficient and lightweight way. Virtual threads can also be used in conjunction with the CompletableFuture API to simplify the creation and management of asynchronous tasks.

Here is an example of using virtual threads with CompletableFuture in Java:

// Import the necessary classes
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class Main {

  public static void main(String[] args) {
    // Run the asynchronous task on a virtual thread
    try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {

      // Create a CompletableFuture and supply a lambda that returns a string
      CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
        try {
          // Simulate a long-running task by sleeping for 5 seconds
          TimeUnit.SECONDS.sleep(5);
        } catch (InterruptedException e) {
          // Restore the interrupt status
          Thread.currentThread().interrupt();
        }
        return "Hello, world!";
      }, executor);

      // Use the thenApply() method to transform the result of the CompletableFuture
      CompletableFuture<String> transformedFuture =
          future.thenApply(s -> s + " - from CompletableFuture");

      try {
        // Use the get() method to retrieve the result of the CompletableFuture,
        // blocking the calling thread until the result is available or the
        // specified timeout expires
        String result = transformedFuture.get(6, TimeUnit.SECONDS);
        System.out.println(result);
      } catch (InterruptedException | ExecutionException | TimeoutException e) {
        // Handle the exceptions
      }
    } // close() waits for running tasks; no manual shutdown is needed
  }
}

In this example, we create a CompletableFuture and supply it with a lambda that simulates a long-running task by sleeping for 5 seconds. We pass a virtual-thread-per-task executor as the second argument to supplyAsync(), so the task runs on a virtual thread rather than on CompletableFuture's default executor, the common ForkJoinPool, whose workers are platform threads. We then use the thenApply() method to transform the result, appending a string to it. Finally, we use the get() method to retrieve the result of the transformed CompletableFuture, blocking the calling thread until the result is available or the specified timeout expires.

Here is a quick comparison between platform threads and virtual threads.
We tested the following code on an Intel i5 machine with 16 GB RAM running Ubuntu.

Running 100,000 platform threads

try (var executor = Executors.newThreadPerTaskExecutor(Executors.defaultThreadFactory())) {
    IntStream.range(0, 100_000).forEach(i -> executor.submit(() -> {
        Thread.sleep(Duration.ofSeconds(1));
        System.out.println(i);
        return i;
    }));
}

# 'newThreadPerTaskExecutor' with 'defaultThreadFactory'

0:18.77 real,   18.15 s user,   7.19 s sys, 135% 3891pu,  743584 mmem

# 'newCachedThreadPool' with 'defaultThreadFactory'

0:11.52 real,   13.21 s user,   4.91 s sys, 157% 6019pu, 2215972 mmem

Running 100000 virtual thread

try (var newThread = Executors.newVirtualThreadPerTaskExecutor()) {

IntStream.range(0, 100_000).forEach( thread -> executor.submit(() -> {

     Thread.sleep(Duration.ofSeconds(1));

     System.out.println(thread);

     return thread;

}));

}

0:02.62 real,   6.83 s user, 1.46 s sys, 316% 14840pu,  350268 mmem 

What do these numbers mean?

real: wall-clock time taken to execute the code
user: time spent executing in user space
sys: time spent executing in kernel space
x%: CPU utilization percentage
mmem: main memory utilization

Virtual threads utilize the CPU more efficiently, and their overall resource utilization, particularly memory, is much better.

Use cases:
Here are some examples of how Project Loom could be used in Java programs:
  • A web server application could use fibers to handle incoming requests from clients in a more efficient and scalable way. Each fiber would be responsible for processing a single request, and the application could easily manage thousands or even millions of fibers concurrently.
  • A data processing application could use fibers to parallelize its workload across multiple cores or processors. Each fiber could be assigned a chunk of data to process, and the application could use the built-in synchronization mechanisms in Project Loom to coordinate the fibers and ensure that the results are correct.
  • A game or other interactive application could use fibers to implement concurrent behavior without introducing complex thread synchronization code. For example, a game could use a fiber to manage the movement of each game object, allowing the game to handle a large number of objects without bogging down the main thread.
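The first use case above can be sketched with the shipped virtual-thread API (JDK 21+); the port number and the one-line response are purely illustrative:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
        // One virtual thread per connection: plain blocking code, high concurrency
        try (ServerSocket server = new ServerSocket(8080);
             ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();
                executor.submit(() -> handle(socket));
            }
        }
    }

    static void handle(Socket socket) {
        try (socket; PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // Blocking I/O here parks only the virtual thread, not an OS thread
            out.print("HTTP/1.1 200 OK\r\n\r\nhello");
            out.flush();
        } catch (IOException e) {
            // A real server would log the error and continue
        }
    }
}
```

Because each connection's code is ordinary sequential, blocking Java, there is no callback or reactive machinery to maintain.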
Conclusion

Java Project Loom is an OpenJDK project that improves support for concurrent programming in Java; its centerpiece, virtual threads, became a final feature in JDK 21. Some of the key advantages of Java Project Loom are:

  1. Improved scalability and performance: Virtual threads in Java Project Loom allow for more efficient use of system resources, allowing applications to scale to more concurrent tasks without running into the limitations of the operating system's native threading model.
  2. Simpler and more intuitive concurrency model: Virtual threads in Java Project Loom can be used in conjunction with the CompletableFuture API, which simplifies the creation and management of asynchronous tasks. This makes it easier for developers to write concurrent code in Java without having to worry about low-level concurrency details.
  3. Better support for non-blocking I/O: Java Project Loom is built on continuations, an internal mechanism that lets the JVM suspend and resume a task's execution without blocking an operating-system thread. Blocking calls made on a virtual thread are handled non-blockingly under the hood, which can improve the performance of applications such as network servers.
  4. Backward compatibility: Java Project Loom is designed to be fully backward-compatible with existing Java applications, so developers can take advantage of its new features without having to modify their existing code.

Overall, Java Project Loom aims to make concurrent programming in Java more scalable, efficient, and intuitive, enabling developers to write better-performing and more maintainable applications.

Hi, I am Kunal Modi. I am a Java and Python developer with over 1 year of experience in software development. I have a strong background in object-oriented programming and have worked on a variety of projects, ranging from web applications to data analysis. In my current role, I am responsible for designing and implementing scalable and maintainable systems using Java and Python. I am highly skilled in both languages and have a passion for learning new technologies and solving complex problems. In my free time, I enjoy contributing to open-source projects and staying up-to-date with the latest developments in the tech industry.
