Process vs Thread: Understanding Concurrency in Modern Applications

    06/06/2025

    Introduction

    Processes and Threads are fundamental concepts in computer science that enable modern applications to run efficiently and concurrently. They allow software to perform multiple tasks simultaneously, improving performance and responsiveness.

    In simple terms, a process is like an independent program running on your computer (think of your web browser or a game). A thread, on the other hand, is like a mini-worker within that program, helping it do several things simultaneously (like loading images while you can still scroll a webpage).

    Understanding processes and threads is super important if you're curious about how software really works, or if you're learning to program. They are the building blocks that operating systems use to manage tasks and allow applications to perform multiple operations concurrently, making your software fast and responsive.

    [Diagram: Process vs Thread on a multi-core CPU. Process A (web browser) and Process B (media player) each contain several threads (UI, network, scripts; audio, video, UI) that share the process's resources — memory space, open files, and network connections or media buffers — while an isolation barrier separates the two processes. Key: Process = independent program with isolated memory; Thread = lightweight execution unit within a process; shared resources are accessible by all threads in the process.]

    What is a Process?

    A process is an instance of a computer program that is being executed. When you run a program you coded or installed on your computer, the operating system creates a process for it. Each process has its own memory space, which means it operates independently of other processes. This isolation helps prevent one process from interfering with another, enhancing stability and security.

    [Diagram: From program file to running process. The OS loader reads program.exe (machine code, data sections, headers) from disk, allocates memory, creates a Process Control Block, and loads the executable into memory. The running process (PID 1234) gets an isolated virtual memory space with stack, heap, data, and code segments, plus a PCB holding the process ID, CPU registers, and memory info. Key transformation: static file → living process with a unique PID and isolated virtual memory, its lifecycle managed by the OS.]

    The Operating System (OS) manages processes using a data structure called the Process Control Block (PCB), also known as a Task Control Block. The PCB is vital for context switching and stores all the essential information about a process. Key information stored in a PCB includes:

    • Process ID (PID): A unique identifier for the process.
    • Process State: The current state of the process (e.g., New, Ready, Running, Waiting, Terminated).
    • Program Counter (PC): The address of the next instruction to be executed for this process.
    • CPU Registers: Values of the processor's registers, which must be saved when the process is swapped out of the CPU and restored when it's swapped back in. This includes accumulators, index registers, stack pointers, and general-purpose registers.
    • CPU Scheduling Information: Process priority, pointers to scheduling queues, and other scheduling parameters.
    • Memory Management Information: Information such as page tables or segment tables, and the value of base and limit registers, depending on the memory system used by the OS.
    • I/O Status Information: Information about I/O devices allocated to the process, open files, etc.
    • Accounting Information: CPU and real time used, time limits, account numbers, job or process numbers, etc.
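To make the list above concrete, here is an illustrative sketch of a PCB as a plain Java data class. This is purely a teaching model — a real kernel stores PCBs as C structs in kernel memory, and the field set below is a simplified subset mirroring the bullets above:

```java
// Illustrative sketch of a Process Control Block (not real OS code).
enum ProcessState { NEW, READY, RUNNING, WAITING, TERMINATED }

class ProcessControlBlock {
    final int pid;                    // Process ID: unique identifier
    ProcessState state;               // current process state
    long programCounter;              // address of the next instruction
    long[] registers;                 // saved CPU registers for context switching
    int priority;                     // CPU scheduling information
    long baseRegister, limitRegister; // memory-management information
    long cpuTimeUsedMillis;           // accounting information

    ProcessControlBlock(int pid) {
        this.pid = pid;
        this.state = ProcessState.NEW; // a freshly created process starts in NEW
        this.registers = new long[16];
    }
}
```

During a context switch, the OS saves the outgoing process's registers and program counter into its PCB and restores them from the incoming process's PCB — which is exactly why these fields must live outside the CPU.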

    Process States

    As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Common process states include:

    • New/Create: The process is being created. The OS is setting up the PCB and allocating initial resources.
    • Ready: The process has all the resources it needs to run and is waiting to be assigned to a processor.
    • Running: Instructions are being executed by the CPU.
    • Waiting/Blocked: The process is waiting for some event to occur (such as an I/O completion or receiving a signal). It cannot proceed until the event happens.
    • Terminated/Exit: The process has finished execution or has been terminated by the OS. Resources are deallocated.

    These states transition in a lifecycle: A new process moves to Ready. When the OS scheduler dispatches it, it moves to Running. From Running, it might be moved back to Ready (e.g., if its time slice expires or a higher priority process comes in), or to Waiting (if it needs an I/O operation). A waiting process moves to Ready once the event it was waiting for occurs. Eventually, a running process will complete and move to Terminated.
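The lifecycle described above can be sketched as a small state machine. This is an illustrative model of the transitions (admit, dispatch, preemption, I/O wait, I/O complete, exit), not code from any real scheduler:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative model of the process state lifecycle.
enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

class ProcessLifecycle {
    // Returns the set of states a process may legally move to next.
    static Set<State> nextStates(State s) {
        switch (s) {
            case NEW:     return EnumSet.of(State.READY);      // admit
            case READY:   return EnumSet.of(State.RUNNING);    // dispatch
            case RUNNING: return EnumSet.of(State.READY,       // preemption / time slice expiry
                                            State.WAITING,     // I/O or event wait
                                            State.TERMINATED); // exit
            case WAITING: return EnumSet.of(State.READY);      // I/O complete
            default:      return EnumSet.noneOf(State.class);  // TERMINATED: no transitions
        }
    }
}
```

Note that a process never jumps from WAITING straight to RUNNING: once its event occurs it rejoins the READY queue and must be dispatched again by the scheduler.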

    [Diagram: Process state lifecycle. NEW → (admit) → READY → (dispatch) → RUNNING → (exit) → TERMINATED; RUNNING → (I/O or event wait) → WAITING → (I/O complete) → READY; RUNNING → (preemption) → READY.]

    Process Memory Layout

    A process has its own virtual memory space, which is isolated from other processes. This memory space is typically organized into several segments:

    • Text (Code): This segment contains the compiled program code (instructions) read from non-volatile storage when the program is launched. It is often read-only to prevent accidental modification.
    • Data (Initialized Data Segment): This segment stores global, static, and constant variables that are initialized by the programmer before the program starts execution.
    • BSS (Block Started by Symbol / Uninitialized Data Segment): This segment stores global and static variables that are not initialized by the programmer. The OS initializes them to zero (or null) before the program starts.
    • Heap: This area is used for dynamic memory allocation during the runtime of the process (e.g., using malloc, new). The heap grows and shrinks as memory is allocated and deallocated by the program.
    • Stack: This segment stores temporary data such as function parameters, return addresses, and local variables, allocated and freed automatically as functions are called and return. It operates on a Last-In, First-Out (LIFO) basis. Each thread within a process has its own stack.
    [Diagram: Process memory layout, from high to low address: stack (grows down), free space, heap (grows up), BSS (zero-initialized), data (pre-initialized), text (read-only code). Each thread has its own stack; the heap, data, and text segments are shared by all threads.]

    Because this memory space is isolated, one process cannot directly read or write another process's memory; any data exchange between processes must go through OS-provided Inter-Process Communication (IPC) mechanisms.
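Java abstracts these segments away, but the distinction still shows through: local variables live on each thread's stack, objects created with new live on the shared heap, and static fields behave much like the data and BSS segments. A rough, illustrative mapping (the class and field names are made up for this sketch):

```java
class MemoryLayoutDemo {
    static int initialized = 42; // roughly: data segment (programmer-initialized)
    static int uninitialized;    // roughly: BSS (zero-initialized by default)

    static int[] allocate() {
        int local = 10;                // stack: local variable, one copy per call / per thread
        int[] onHeap = new int[local]; // heap: dynamically allocated object
        return onHeap;                 // the array outlives this stack frame
    }
}
```

The returned array survives after allocate()'s stack frame is popped — exactly the stack-versus-heap lifetime difference described above.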

    What is a Thread?

    A thread is the smallest unit of execution within a process, often referred to as a lightweight process. It represents a single path of execution within its parent process. A process can have multiple threads, all executing concurrently and cooperating to perform the overall task of the process.

    A process needs at least one thread to execute, but it can have many threads running simultaneously.

    Threads within the same process share many of the process's resources, including:

    • Code Segment (Text Segment): The executable instructions.
    • Data Segment: Global and static variables.
    • Heap: Dynamically allocated memory.
    • Open files and signals: File descriptors and signal dispositions are typically shared.

    However, each thread has its own distinct set of resources to allow independent execution:

    • Program Counter (PC): Keeps track of the instruction the thread is currently executing.
    • Registers: Values of the CPU registers for the thread's current computation.
    • Stack: A dedicated stack space for local variables, function call parameters, and return addresses. This is crucial because each thread will execute different functions or the same functions with different local data.
    • Thread-Specific Data: Data that is private to the thread.
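A small demo of this split: the AtomicInteger below is one object on the shared heap, visible to both threads, while each thread's loop counter i is a local variable on that thread's own stack. (The class and method names are illustrative.)

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedVsPrivateDemo {

    // Runs two threads that each increment a shared heap object 1000 times.
    static int runDemo() {
        AtomicInteger sharedOnHeap = new AtomicInteger(0); // one object, shared by both threads

        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {    // i lives on each thread's own stack
                sharedOnHeap.incrementAndGet(); // the heap object is shared
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        return sharedOnHeap.get();
    }

    public static void main(String[] args) {
        System.out.println("Shared counter: " + runDemo()); // 2000: atomic increments lose no updates
    }
}
```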
    [Diagram: Multi-threaded process architecture. Shared process resources (code segment, data segment, heap, open files and I/O resources, signal handlers and PID) alongside per-thread resources (stack, program counter, CPU registers, thread state) for each of Thread 1 through Thread N.]

    Benefits of Using Multiple Threads

    Using threads offers several significant advantages:

    • Concurrency: Threads allow multiple tasks within an application to execute (or appear to execute) simultaneously. This is achieved by the OS rapidly switching between threads, giving the illusion of parallel execution even on a single-core processor.
    • Responsiveness: In applications with a user interface (UI), threads can greatly improve responsiveness. For example, a long-running task (like saving a large file or performing a complex calculation) can be offloaded to a separate thread. This prevents the main UI thread from freezing, allowing the user to continue interacting with the application.
    • Resource Sharing: Threads share the memory and resources of their parent process. This allows for efficient communication between threads (e.g., by reading and writing to shared variables) without the need for complex Inter-Process Communication (IPC) mechanisms. It also means that creating and context-switching threads is generally less resource-intensive than for processes.
    • Scalability: On multi-core processors, threads can be truly executed in parallel, with each thread running on a different core. This allows applications to take full advantage of available CPU power and scale their performance with an increasing number of cores. This is often referred to as hardware parallelism.

    Threads are lightweight compared to processes because they share their parent process's memory space and resources rather than requiring their own. This makes them faster to create, switch between, and destroy.

    Differences between Process and Thread

    Feature              | Process                                                          | Thread
    ---------------------|------------------------------------------------------------------|----------------------------------------------------------------
    Memory Space         | Each process has its own isolated memory space                   | Threads share the memory space of the parent process
    Isolation            | Processes are completely isolated from each other                | Threads within a process are not isolated from each other
    Creation Time        | Longer to create (requires OS resource allocation)               | Shorter to create (lightweight, shares existing resources)
    Context Switching    | Higher overhead (must save/restore entire process state)         | Lower overhead (only registers and stack pointer)
    Communication        | Slower - requires IPC mechanisms (pipes, sockets, shared memory) | Faster - direct access to shared memory and variables
    Resource Consumption | Higher memory and CPU overhead                                   | Lower memory and CPU overhead
    Fault Tolerance      | If one process crashes, others remain unaffected                 | If one thread crashes, entire process may terminate
    Concurrency Model    | Concurrency through multiple independent processes               | Concurrency through multiple threads in the same process
    Scalability          | Limited by system resources and IPC overhead                     | Better scalability due to shared resources
    Security             | Better security due to memory isolation                          | Shared memory can lead to security vulnerabilities
    Debugging            | Easier to debug due to isolation                                 | More complex debugging due to shared state

    Understanding Multithreading

    Multithreading is the ability of an operating system (or an application) to manage multiple threads of execution within a single process. These threads can run concurrently, meaning they appear to run at the same time. On a single-core processor, this concurrency is achieved through time-slicing (rapidly switching between threads). On a multi-core processor, threads can run truly in parallel, with different threads executing on different cores simultaneously.

    [Diagram: Multithreading on a multi-core CPU (Threads 1-4, one per core, executing in true parallelism) versus time-slicing on a single core (T1-T4 interleaved over time), with an illustrative speedup from 8 seconds single-threaded to 2 seconds multi-threaded. Benefits shown: parallel execution on multi-core systems, better resource utilization, improved responsiveness, concurrent I/O.]

    Advantages of Multithreading

    Multithreading offers several key benefits:

    • Enhanced Performance: Especially on multi-core processors, multithreading can significantly boost performance by allowing parallel execution of tasks. Even on single-core systems, it can improve perceived performance by allowing background tasks to run without interrupting the main execution flow.
    • Concurrency: Multiple threads can perform different tasks simultaneously, which is particularly useful in applications that require handling multiple operations at once, such as web servers processing multiple client requests or GUI applications responding to user input while performing background tasks.
    • Improved Responsiveness: Applications, particularly those with graphical user interfaces (GUIs), can remain responsive to user input while performing long-running operations in separate threads. For instance, a desktop application can continue to respond to button clicks while a complex calculation or file download happens in the background.
    • Efficient Resource Utilization: Since threads within a process share the same memory space (code, data, heap) and other resources like open files, the overhead for creating and managing threads is much lower than for creating separate processes. This leads to more efficient use of system resources.
    • Simplified Program Structure (for certain problems): Some complex tasks can be broken down into simpler, more manageable sub-tasks, each handled by a separate thread. This can sometimes lead to a more intuitive and cleaner program structure, especially for applications that inherently involve concurrent activities (e.g., a web server handling multiple client requests).

    Challenges in Multithreading

    While powerful, multithreading introduces complexities and potential pitfalls:

    Race Conditions

    A race condition occurs when the behavior of a software system depends on the unpredictable sequence or timing of operations by multiple threads. If multiple threads access and manipulate shared data concurrently, and at least one of the accesses is a write, the outcome can be non-deterministic and lead to erroneous results. The "race" is to see which thread accesses/modifies the data last.

    Example: Simple Counter (Java)

    Consider a shared counter that multiple threads try to increment:

    class Counter {
        private int count = 0;

        // Unsynchronized method - potential race condition!
        public void increment() {
            int temp = count;  // Read
            temp = temp + 1;   // Modify
            count = temp;      // Write
        }

        public int getCount() {
            return count;
        }
    }

    class WorkerThread extends Thread {
        private Counter counter;

        public WorkerThread(Counter counter) {
            this.counter = counter;
        }

        @Override
        public void run() {
            for (int i = 0; i < 1000; i++) {
                counter.increment();
            }
        }
    }

    public class RaceConditionDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter sharedCounter = new Counter();
            WorkerThread t1 = new WorkerThread(sharedCounter);
            WorkerThread t2 = new WorkerThread(sharedCounter);

            t1.start();
            t2.start();

            t1.join(); // Wait for t1 to finish
            t2.join(); // Wait for t2 to finish

            // Expected count: 2000, Actual count: often less due to race condition
            System.out.println("Final count: " + sharedCounter.getCount());
        }
    }

    In the increment() method, the read-modify-write sequence (int temp = count; temp = temp + 1; count = temp;) is not atomic. If Thread A reads count, then Thread B reads count (getting the same value) before Thread A writes its updated value, both threads might write the same incremented value, leading to a lost update.

    [Diagram: Race condition (lost update). Shared counter starts at 5. Thread A reads temp = 5, Thread B also reads temp = 5; both compute 6 and both write 6. Expected result: 7; actual result: 6 — one update is lost.]
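One way to eliminate the lost update in the Counter example above is to make the read-modify-write sequence atomic, either with the synchronized keyword or with java.util.concurrent.atomic.AtomicInteger. The class names below are illustrative variants of the earlier Counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Fix 1: synchronized makes the whole read-modify-write atomic -
// only one thread can be inside increment() at a time.
class SynchronizedCounter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int getCount() { return count; }
}

// Fix 2: AtomicInteger uses lock-free hardware instructions
// (compare-and-swap) instead of a mutex.
class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);
    public void increment() { count.incrementAndGet(); }
    public int getCount() { return count.get(); }
}

public class FixedCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Runnable work = () -> { for (int i = 0; i < 1000; i++) counter.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final count: " + counter.getCount()); // always 2000
    }
}
```

With either fix, two threads doing 1000 increments each always produce exactly 2000: no interleaving can slip between the read and the write.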

    Deadlocks

    A deadlock is a state in which two or more threads are blocked forever, each waiting for the other to release a resource that it holds. This typically occurs when threads attempt to acquire multiple locks in different orders.

    The Coffman conditions describe four necessary conditions for a deadlock to occur:

    1. Mutual Exclusion: At least one resource must be held in a non-sharable mode; that is, only one thread at a time can use the resource. If another thread requests that resource, the requesting thread must be delayed until the resource has been released.
    2. Hold and Wait: A thread must be holding at least one resource and waiting to acquire additional resources that are currently being held by other threads.
    3. No Preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
    4. Circular Wait: A set of waiting threads {T0, T1, ..., Tn} must exist such that T0 is waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn-1 is waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.
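The standard way to prevent deadlock is to break the circular-wait condition: make every thread acquire locks in the same global order. A minimal sketch (the resource names are illustrative, echoing the database/file-lock scenario):

```java
public class LockOrderingDemo {
    private static final Object RESOURCE_1 = new Object(); // e.g. database lock
    private static final Object RESOURCE_2 = new Object(); // e.g. file lock

    // Both tasks acquire RESOURCE_1 before RESOURCE_2, so a circular
    // wait (Coffman condition 4) can never form between them.
    static String taskA() {
        synchronized (RESOURCE_1) {
            synchronized (RESOURCE_2) {
                return "A done";
            }
        }
    }

    static String taskB() {
        synchronized (RESOURCE_1) { // same order as taskA - NOT reversed
            synchronized (RESOURCE_2) {
                return "B done";
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> taskA());
        Thread t2 = new Thread(() -> taskB());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("No deadlock: both tasks completed");
    }
}
```

If taskB instead locked RESOURCE_2 first, the two threads could each grab one lock and then wait forever for the other — exactly the scenario in the diagram below.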
    [Diagram: Deadlock scenario. Thread A holds Resource 1 (database) and waits for Resource 2 (file lock); Thread B holds Resource 2 and waits for Resource 1 — a circular wait, so both threads block forever and the system deadlocks.]

    Need for Synchronization

    Because threads share memory, it's crucial to control access to shared data to prevent issues like race conditions, deadlocks, and data corruption. This control mechanism is called synchronization. Synchronization ensures that only one thread can access a critical section (a piece of code that accesses shared resources) at any given time, or that operations on shared data are performed in a coordinated manner.

    Thread Synchronization Primitives

    Operating systems and programming languages provide various synchronization primitives to help manage concurrent access:

    • Mutexes (Mutual Exclusion Locks): A mutex is like a key that only one thread can hold at a time. If a thread wants to access a shared resource, it must first acquire the mutex. If the mutex is already held by another thread, the requesting thread will block until the mutex is released. Once the thread finishes with the resource, it releases the mutex.
    • Semaphores: A semaphore is a more general synchronization tool. It maintains a counter. Threads can "wait" on a semaphore (decrementing the counter, and blocking if it's zero) or "signal" a semaphore (incrementing the counter, potentially waking up a blocked thread). They can be used to control access to a pool of resources or to signal between threads.
    • Monitors: A monitor is a higher-level synchronization construct that encapsulates shared data and the methods to operate on that data, along with built-in mutual exclusion and condition variables for waiting and signaling. Java's synchronized keyword and wait()/notify() methods are based on the monitor concept.

    These primitives are essential tools for writing correct and robust multithreaded applications.
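In Java, these primitives map to java.util.concurrent.locks.ReentrantLock (a mutex), java.util.concurrent.Semaphore, and the synchronized/wait()/notify() mechanism (a monitor). A small sketch of the first two (class and method names are illustrative):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class PrimitivesDemo {
    private static final ReentrantLock mutex = new ReentrantLock();
    private static int sharedValue = 0;

    // Mutex: only one thread at a time can run this critical section.
    static void safeIncrement() {
        mutex.lock();
        try {
            sharedValue++;
        } finally {
            mutex.unlock(); // always release, even if the critical section throws
        }
    }

    static int getSharedValue() { return sharedValue; }

    public static void main(String[] args) throws InterruptedException {
        // Semaphore with 2 permits: models a pool of 2 identical resources.
        Semaphore pool = new Semaphore(2);
        pool.acquire(); // counter 2 -> 1
        pool.acquire(); // counter 1 -> 0; a third acquire() would block here
        pool.release(); // counter 0 -> 1, potentially waking a blocked thread
        System.out.println("Permits left: " + pool.availablePermits()); // 1

        safeIncrement();
        System.out.println("Shared value: " + getSharedValue()); // 1
    }
}
```

The lock()/try/finally/unlock() shape is the idiomatic ReentrantLock pattern: putting unlock() in finally guarantees the mutex is released even when the critical section throws an exception.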

    Creating and Managing Threads in Java (Examples)

    Java provides built-in support for multithreading. Here are two common ways to create threads:

    1. Extending the Thread Class

    You can create a new class that extends java.lang.Thread and override its run() method.

    class MyThread extends Thread {
        private String threadName;

        public MyThread(String name) {
            this.threadName = name;
            System.out.println("Creating " + threadName);
        }

        @Override
        public void run() {
            System.out.println("Running " + threadName);
            try {
                for (int i = 4; i > 0; i--) {
                    System.out.println("Thread: " + threadName + ", " + i);
                    // Let the thread sleep for a while.
                    Thread.sleep(50);
                }
            } catch (InterruptedException e) {
                System.out.println("Thread " + threadName + " interrupted.");
            }
            System.out.println("Thread " + threadName + " exiting.");
        }
    }

    public class ThreadExample {
        public static void main(String[] args) {
            MyThread thread1 = new MyThread("Thread-1");
            MyThread thread2 = new MyThread("Thread-2");
            thread1.start(); // Calls run() method
            thread2.start(); // Calls run() method
        }
    }

    2. Implementing the Runnable Interface

    A more flexible approach is to implement the java.lang.Runnable interface. This is often preferred because Java does not support multiple inheritance of classes, so if your class already extends another class, it cannot extend Thread.

    class MyRunnable implements Runnable {
        private String threadName;

        public MyRunnable(String name) {
            this.threadName = name;
            System.out.println("Creating " + threadName);
        }

        @Override
        public void run() {
            System.out.println("Running " + threadName);
            try {
                for (int i = 4; i > 0; i--) {
                    System.out.println("Thread: " + threadName + ", " + i);
                    Thread.sleep(50);
                }
            } catch (InterruptedException e) {
                System.out.println("Thread " + threadName + " interrupted.");
            }
            System.out.println("Thread " + threadName + " exiting.");
        }
    }

    public class RunnableExample {
        public static void main(String[] args) {
            MyRunnable runnable1 = new MyRunnable("Thread-A");
            MyRunnable runnable2 = new MyRunnable("Thread-B");

            Thread thread1 = new Thread(runnable1);
            Thread thread2 = new Thread(runnable2);

            thread1.start();
            thread2.start();

            // Example of join(): wait for threads to finish
            try {
                thread1.join();
                thread2.join();
            } catch (InterruptedException e) {
                System.out.println("Main thread interrupted.");
            }
            System.out.println("Main thread exiting.");
        }
    }

    To start a thread created via Runnable, you create a Thread object, pass the Runnable instance to its constructor, and then call start() on the Thread object. The join() method can be used to make the current thread wait until the specified thread completes its execution.

    Seeing it in Action: Java Processes and Threads

    Now let's see how we can observe some of these concepts with a running Java application. First, let's create a simple Java program named SimpleJavaProcess.java that creates a couple of worker threads:

    // SimpleJavaProcess.java
    class SimpleWorker implements Runnable {
        private String name;
        private int iterations;
        private long sleepMillis;

        public SimpleWorker(String name, int iterations, long sleepMillis) {
            this.name = name;
            this.iterations = iterations;
            this.sleepMillis = sleepMillis;
        }

        @Override
        public void run() {
            System.out.println("Thread: " + name + " starting.");
            try {
                for (int i = 0; i < iterations; i++) {
                    System.out.println("Thread: " + name + " - iteration " + (i + 1) + "/" + iterations);
                    Thread.sleep(sleepMillis);
                }
            } catch (InterruptedException e) {
                System.out.println("Thread: " + name + " interrupted.");
                Thread.currentThread().interrupt(); // Preserve interrupt status
            }
            System.out.println("Thread: " + name + " finishing.");
        }
    }

    public class SimpleJavaProcess {
        public static void main(String[] args) {
            System.out.println("Main thread starting.");

            // Worker threads will run for 6 iterations * 5 seconds/iteration = 30 seconds
            Thread worker1 = new Thread(new SimpleWorker("Worker-1", 6, 5000));
            Thread worker2 = new Thread(new SimpleWorker("Worker-2", 6, 5000));

            worker1.start();
            worker2.start();

            System.out.println("Main thread waiting for worker threads to complete...");
            try {
                worker1.join(); // Wait for worker1 to finish
                worker2.join(); // Wait for worker2 to finish
            } catch (InterruptedException e) {
                System.out.println("Main thread interrupted while waiting for worker threads.");
                Thread.currentThread().interrupt(); // Preserve interrupt status
            }
            System.out.println("Main thread finishing.");
        }
    }

    To try this out, save the code above into a file named SimpleJavaProcess.java. Then, open your terminal or command prompt and compile it: javac SimpleJavaProcess.java

    Once it compiles successfully, you can run it: java SimpleJavaProcess

    You should see output from the main thread and the two worker threads, progressing over about 30 seconds.

    While the SimpleJavaProcess program is running (it will run for about 30 seconds), open another terminal window to try these commands:

    Finding the Process ID (PID):

    First, you'll need the Process ID (PID) of your running SimpleJavaProcess.

    • Using jps (Java Virtual Machine Process Status Tool - recommended): This tool is part of the JDK and specifically lists Java processes:

      jps -l

      This lists each PID with the full main class name or JAR file name, making it easy to identify your SimpleJavaProcess.

    • On Linux/macOS: If jps isn't available or you prefer a general OS tool:

      ps aux | grep SimpleJavaProcess

      Look for the line corresponding to your java SimpleJavaProcess command; the PID is usually the second column.

    • On Windows (Task Manager):

      1. Open Task Manager (Ctrl+Shift+Esc).
      2. Go to the "Details" tab.
      3. Look for java.exe (or javaw.exe). The "Command line" column (if added) can help you confirm it's running SimpleJavaProcess.
      4. The "PID" column will show the Process ID.
    ~/Projects/codewiz/java-thread-examples $ jps
    57691 Launcher
    19676 Main
    57692 SimpleJavaProcess

    Once you have the PID, replace <PID> in the commands below with the actual ID.

    Inspecting Threads on Linux/macOS

    • Using ps: The ps command can display the threads associated with a process. On Linux:

      ps -T -p <PID>

      On macOS, the equivalent is:

      ps -M <PID>

    • Using top: The top command provides a dynamic real-time view of running processes and can also show threads. On Linux:

      top -H -p <PID>

      The -H option shows individual threads instead of a single per-process summary, so you can see resource usage (CPU, memory) per thread.

    You'll notice the main thread, your "Worker-1" and "Worker-2" threads, and also several other threads managed by the JVM itself (e.g., Garbage Collection threads like "GC Thread", JIT (Just-In-Time) compiler threads like "C2 CompilerThread", signal dispatchers, etc.).

    Using jstack (Cross-Platform)

    The jstack utility (part of the JDK) is excellent for getting a "thread dump" of a Java process. This shows the stack trace for each thread, which tells you what each thread is doing at that moment, including its state (e.g., RUNNABLE, TIMED_WAITING for Thread.sleep(), WAITING, BLOCKED).

    jstack <PID>

    Run this command while SimpleJavaProcess is active. In the output, you should be able to identify:

    • The "main" thread (likely waiting for your worker threads to join()).
    • "Worker-1" and "Worker-2" (likely in TIMED_WAITING state due to Thread.sleep()).
    • Various JVM internal threads (Signal Dispatcher, Finalizer, Reference Handler, GC threads, Compiler threads, etc.).

    jstack is particularly useful for diagnosing issues like application hangs or deadlocks, as it shows exactly where each thread is stuck.

    These tools give you a glimpse into how the OS and JVM manage the threads you create and the ones they use internally.

    ~/Projects/codewiz/java-streams-examples $ jstack 57692

    "main" #3 [10243] prio=5 os_prio=31 cpu=48.34ms elapsed=18.34s tid=0x0000000131809200 nid=10243 in Object.wait() [0x000000016dc46000]
       java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait0(java.base@24/Native Method)
            - waiting on <0x00000006f1fb98a0> (a java.lang.Thread)
            at java.lang.Object.wait(java.base@24/Object.java:389)
            at java.lang.Thread.join(java.base@24/Thread.java:1860)
            - locked <0x00000006f1fb98a0> (a java.lang.Thread)
            at java.lang.Thread.join(java.base@24/Thread.java:1936)
            at com.codewiz.examples.SimpleJavaProcess.main(SimpleJavaProcess.java:44)

    "Thread-0" #26 [43267] prio=5 os_prio=31 cpu=6.03ms elapsed=18.29s tid=0x0000000132009000 nid=43267 waiting on condition [0x0000000170236000]
       java.lang.Thread.State: TIMED_WAITING (sleeping)
            at java.lang.Thread.sleepNanos0(java.base@24/Native Method)
            at java.lang.Thread.sleepNanos(java.base@24/Thread.java:482)
            at java.lang.Thread.sleep(java.base@24/Thread.java:513)
            at com.codewiz.examples.SimpleWorker.run(SimpleJavaProcess.java:21)
            at java.lang.Thread.runWith(java.base@24/Thread.java:1460)
            at java.lang.Thread.run(java.base@24/Thread.java:1447)

    "Thread-1" #27 [43011] prio=5 os_prio=31 cpu=5.77ms elapsed=18.29s tid=0x0000000125012400 nid=43011 waiting on condition [0x0000000170442000]
       java.lang.Thread.State: TIMED_WAITING (sleeping)
            at java.lang.Thread.sleepNanos0(java.base@24/Native Method)
            at java.lang.Thread.sleepNanos(java.base@24/Thread.java:482)
            at java.lang.Thread.sleep(java.base@24/Thread.java:513)
            at com.codewiz.examples.SimpleWorker.run(SimpleJavaProcess.java:21)
            at java.lang.Thread.runWith(java.base@24/Thread.java:1460)
            at java.lang.Thread.run(java.base@24/Thread.java:1447)

    "C2 CompilerThread0" #19 [26627] daemon prio=9 os_prio=31 cpu=17.73ms elapsed=18.33s tid=0x000000013200ae00 nid=26627 waiting on condition [0x0000000000000000]
       java.lang.Thread.State: RUNNABLE
       No compile task

    "C1 CompilerThread0" #22 [27139] daemon prio=9 os_prio=31 cpu=31.26ms elapsed=18.33s tid=0x000000013183b200 nid=27139 waiting on condition [0x0000000000000000]
       java.lang.Thread.State: RUNNABLE
       No compile task

    ... (other threads)

    Conclusion

    Understanding processes, threads, and multithreading is fundamental to grasping how modern software and operating systems work their magic. These concepts are pivotal for anyone looking to develop efficient, robust, and responsive applications, whether you're building a complex server-side system, a snappy desktop application, or a smooth mobile app.

    Keep exploring, keep learning, and happy coding!


    I'm passionate about sharing knowledge on modern software development practices, Java, Spring, and system design. If you found this guide helpful, I'd love to connect and share more insights with you!

    🔗 Blog - Deep dive into more tutorials and guides
    🔗 YouTube - Video tutorials and live coding sessions
    🔗 LinkedIn - Industry insights
    🔗 Medium - Technical articles and thought pieces
    🔗 GitHub - Projects and code examples

    Follow for regular updates on Java, Spring Framework, microservices, system design, and the latest in software engineering! 🚀
