The Java Collections Framework (JCF) is a set of classes and interfaces that implement commonly reusable collection data structures. It provides a standard architecture for storing and manipulating groups of objects, enhancing performance and interoperability.
Key components of the JCF:

- Interfaces: the abstract collection types (e.g., `List`, `Set`, `Map`).
- Implementations: concrete classes that implement those interfaces (e.g., `ArrayList`, `HashSet`, `HashMap`).
- Algorithms: static methods in the `Collections` class that perform useful operations like sorting, searching, and shuffling.

What is the difference between `Iterator` and `ListIterator`?

Feature | Iterator | ListIterator |
---|---|---|
Traversal | Forward direction only | Both forward and backward |
Applicable to | `List`, `Set`, `Queue` | `List` only |
`add()` method | No | Yes |
`set()` method | No | Yes |
`remove()` method | Yes | Yes |
Index access | No | Yes (`nextIndex()`, `previousIndex()`) |
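
As a quick illustration, here is a minimal sketch (class and variable names are illustrative) of a `ListIterator` modifying elements on a forward pass and then walking the list backwards:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("alice", "bob"));
        ListIterator<String> it = names.listIterator();

        // Forward pass: replace each element in place with set()
        while (it.hasNext()) {
            String name = it.next();
            it.set(name.toUpperCase());
        }

        // Backward pass: print elements with their indices in reverse order
        while (it.hasPrevious()) {
            System.out.println(it.previousIndex() + ": " + it.previous());
        }
    }
}
```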

Which interface is at the root of the collection hierarchy?

The `java.lang.Iterable` interface is at the top of the hierarchy. However, the `java.util.Collection` interface is the root of the collection part of the framework (which excludes `Map`). The `Collection` interface extends `Iterable`.

What is the difference between `Collection` and `Collections`?

- `Collection`: An interface (`java.util.Collection`) that is the foundation of the JCF. It declares the core methods that all collections (except maps) have.
- `Collections`: A utility class (`java.util.Collections`) that contains exclusively static methods that operate on or return collections. Examples include `sort()`, `reverse()`, and `unmodifiableList()`.

Why is `Map` not part of the `Collection` interface?

The `Collection` interface specifies a group of individual objects. Its methods, like `add(E e)` and `remove(Object o)`, are designed for single elements. In contrast, a `Map` stores key-value pairs, and its fundamental operations (such as `put(K key, V value)`) are structurally different from `Collection`'s. While maps can be viewed as collections (of keys, values, or entries), they do not fit the `Collection` interface's contract.

What is the difference between an `Iterator` and an `Enumeration`?

`Enumeration` is a legacy interface from Java 1.0. `Iterator`, introduced in Java 1.2, is its modern replacement. Key differences:

- `Iterator` has a `remove()` method, while `Enumeration` does not.
- `Iterator` has more concise method names (`hasNext()`, `next()`) compared to `Enumeration` (`hasMoreElements()`, `nextElement()`).

New code should always prefer `Iterator`.

What is a `List`?

A `List` is an ordered collection (also known as a sequence) that can contain duplicate elements. Users can access elements by their integer index and search for elements in the list.

Compare `ArrayList` and `LinkedList`. When would you use each?

Feature | ArrayList | LinkedList |
---|---|---|
Internal Structure | Dynamic array (`Object[]`) | Doubly linked list (nodes with pointers) |
`get(index)` | O(1) - Very fast | O(n) - Slow (traverses from start/end) |
`add(E e)` (at end) | O(1) amortized | O(1) |
add/remove (middle) | O(n) - Slow (requires shifting) | O(1) - Fast (if iterator is at position) |
Memory Usage | Less memory overhead | More memory overhead (for pointers) |

- Use `ArrayList` when you need fast, random, index-based access. This is the most common use case.
- Use `LinkedList` when you have a large list and will be frequently adding or removing elements from the beginning or middle. It's also useful when implementing a `Queue` or `Deque`.

How does an `ArrayList` grow dynamically?

When an element is added to an `ArrayList` and its internal array is full, the `ArrayList` creates a new, larger array, copies all the elements from the old array to the new one, and then adds the new element. The new capacity is typically 1.5 times the old capacity (`oldCapacity + (oldCapacity >> 1)`).

What is the initial capacity of an `ArrayList`?

If created with the no-arg constructor, an `ArrayList` starts with an empty internal array (effectively capacity 0). The internal array is allocated with the default capacity of 10 only when the first element is added. If you know the approximate number of elements, it is more efficient to specify the capacity in the constructor to avoid resizing.

What is the `Vector` class? Why is it considered a legacy class?

`Vector` is a legacy implementation of the `List` interface, similar to `ArrayList`. Its key characteristic is that all of its methods are `synchronized`, making it thread-safe. However, this synchronization adds performance overhead. In modern Java, it is generally replaced by `ArrayList` for single-threaded use or `CopyOnWriteArrayList` for concurrent, read-heavy scenarios.

Why is `ArrayDeque` preferred over `Stack` for stack operations?

The `Stack` class is a legacy class that extends `Vector`, inheriting its synchronization overhead. The `Deque` interface provides a more complete and consistent set of LIFO (Last-In, First-Out) operations. `ArrayDeque` is the recommended implementation for a stack, as it is more efficient and follows modern conventions.

```java
// Modern way to create a stack
Deque<Integer> stack = new ArrayDeque<>();
stack.push(1);
int top = stack.pop();
```

What is a `Set`?

A `Set` is a collection that contains no duplicate elements. It models the mathematical set abstraction.

Compare `HashSet`, `LinkedHashSet`, and `TreeSet`.

Feature | HashSet | LinkedHashSet | TreeSet |
---|---|---|---|
Ordering | Unordered | Insertion order | Sorted (natural or custom) |
Implementation | Hash table (`HashMap` internally) | Hash table + Doubly linked list | Red-black tree (`TreeMap` internally) |
Performance | O(1) for `add`, `remove`, `contains` | O(1) for `add`, `remove`, `contains` | O(log n) for `add`, `remove`, `contains` |
Nulls | Allows one null element | Allows one null element | Does not allow `null`s (by default) |

How does `HashSet` ensure uniqueness of elements?

`HashSet` relies on the `hashCode()` and `equals()` methods of the objects it stores. When you add an element, `HashSet` first calculates its hash code to find a bucket location. Then, it iterates through the elements in that bucket (if any) and uses the `equals()` method to check whether the element already exists.

What happens if you add a mutable object to a `HashSet` and then modify it?

If you modify a mutable object after adding it to a `HashSet` in a way that changes its `hashCode()`, it can lead to serious problems. The `HashSet` will no longer be able to find the object, because it looks in the wrong bucket based on the new hash code. The object becomes "lost" in the set, and you won't be able to remove it. This violates the `Set` contract. It is highly recommended to use only immutable objects as `HashSet` elements.
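
A minimal sketch of the problem, assuming a hypothetical mutable `Point` class whose `hashCode()` depends on its coordinates (Java 16+ for the `instanceof` pattern):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical mutable key class whose hashCode() depends on its state
class Point {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    @Override public boolean equals(Object o) {
        return o instanceof Point p && p.x == x && p.y == y;
    }
    @Override public int hashCode() { return Objects.hash(x, y); }
}

public class LostElementDemo {
    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        Point p = new Point(1, 1);
        points.add(p);

        p.x = 99; // mutate after insertion: hashCode() now maps to a different bucket

        System.out.println(points.contains(p)); // false: lookup uses the new hash
        System.out.println(points.remove(p));   // false: the entry is "lost"
        System.out.println(points.size());      // still 1
    }
}
```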

Can a `TreeSet` store custom objects? How?

Yes, but the custom objects must be comparable. There are two ways to achieve this:

- `Comparable`: The custom class implements the `Comparable` interface and provides the logic for natural ordering in its `compareTo()` method.
- `Comparator`: You can pass a `Comparator` object to the `TreeSet`'s constructor. This `Comparator` defines the custom ordering for the objects.

What is an `EnumSet`?

`EnumSet` is a highly performant `Set` implementation for use with `enum` types. Internally, it is represented as a bit vector, so all operations are extremely fast. It does not allow `null` elements.
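
A brief sketch of typical `EnumSet` usage (the `Day` enum is illustrative):

```java
import java.util.EnumSet;

public class EnumSetDemo {
    // Illustrative enum; any enum type works with EnumSet
    enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

    public static void main(String[] args) {
        EnumSet<Day> weekend = EnumSet.of(Day.SAT, Day.SUN);
        EnumSet<Day> weekdays = EnumSet.complementOf(weekend); // MON..FRI

        System.out.println(weekdays.contains(Day.MON));      // true
        System.out.println(EnumSet.range(Day.MON, Day.WED)); // [MON, TUE, WED]
    }
}
```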

What is a `Map`?

A `Map` is an object that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value.

How does a `HashMap` work internally?

A `HashMap` uses an array of "buckets" or "bins" to store entries.

For `put(key, value)`:

- The `hashCode()` of the key is calculated, and the bucket index is derived from it (`index = hash & (n - 1)`, where `n` is the array length).
- If the bucket already holds entries (a hash collision), they are kept in a linked list; if an entry with an `equals()` key already exists, its value is replaced.
- If the number of entries in a bucket exceeds a threshold (`TREEIFY_THRESHOLD`, which is 8), the linked list is converted into a balanced red-black tree to improve search performance from O(n) to O(log n).

For `get(key)`:

- The key's `hashCode()` is used to find the bucket index.
- The entries in that bucket are compared using `equals()` to find the matching key and return its value.

What is the contract between `hashCode()` and `equals()`?

This is a critical contract for all hash-based collections:

- If two objects are equal according to `equals()`, then they must have the same `hashCode()`.
- If two objects have the same `hashCode()`, they are not necessarily equal. This is a hash collision, and `equals()` must be used to check for true equality.

Violating this contract will cause hash-based collections like `HashMap` and `HashSet` to behave unpredictably.
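
A minimal sketch of a class that honors the contract (the class and field names are illustrative); `hashCode()` is derived from the same field that `equals()` compares, so equal objects always share a hash:

```java
import java.util.Objects;

public final class Isbn {
    private final String value;

    public Isbn(String value) {
        this.value = Objects.requireNonNull(value);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Isbn)) return false;
        return value.equals(((Isbn) o).value);
    }

    @Override
    public int hashCode() {
        // Based on the same field used in equals(), so equal objects hash alike
        return value.hashCode();
    }
}
```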

Compare `HashMap`, `LinkedHashMap`, and `TreeMap`.

Feature | HashMap | LinkedHashMap | TreeMap |
---|---|---|---|
Ordering | Unordered | Insertion order (or access order) | Natural order (or custom) |
Implementation | Hash table | Hash table + Doubly linked list | Red-black tree |
Performance | O(1) for `get`/`put` (average) | Slightly slower than `HashMap` | O(log n) for `get`/`put` |
Null Keys | Allows one null key | Allows one null key | Does not allow null keys |

Explain `WeakHashMap`. In what scenario would you use it?

A `WeakHashMap` is a `Map` implementation whose keys are stored as `WeakReference`s. This means that if a key is no longer referenced anywhere else in the application (i.e., it becomes "weakly reachable"), it is eligible for garbage collection. When the garbage collector reclaims the key, its entry is automatically removed from the `WeakHashMap`.

Use case: caching. It is well suited to holding metadata for objects where you don't want the cache to prevent those objects from being garbage collected.
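
A small sketch of the behavior; note that entry removal depends on when the garbage collector actually runs, so the final size is not guaranteed at any particular moment:

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();

        Object key = new Object();
        cache.put(key, "metadata for key");
        System.out.println(cache.size()); // 1

        key = null;        // drop the only strong reference to the key
        System.gc();       // request GC (only a hint)
        Thread.sleep(100); // give the collector a moment

        // The entry is typically gone once the key has been collected
        System.out.println(cache.size());
    }
}
```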

What is the difference between `Hashtable` and `ConcurrentHashMap`?

- `Hashtable`: A legacy, thread-safe map where every method is `synchronized`. Only one thread can operate on the map at a time, leading to poor concurrency. It does not allow `null` keys or values.
- `ConcurrentHashMap`: A modern, highly performant concurrent map. Instead of a single lock for the whole map, it synchronizes at a much finer granularity (lock striping over segments in older versions; per-bucket locking and CAS since Java 8), which allows multiple threads to access different parts of the map simultaneously. It is the standard choice for a thread-safe map.

What is an `IdentityHashMap`?

`IdentityHashMap` is a special `Map` implementation that intentionally violates the general `Map` contract: it compares keys using reference equality (`==`) instead of object equality (`equals()`). Two keys are considered equal only if they are the exact same object in memory.

Use case: tracking objects in algorithms like graph traversal or serialization, where object identity matters.
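
A tiny sketch contrasting it with `HashMap`, using `new String(...)` to force two distinct but equal key objects:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityHashMapDemo {
    public static void main(String[] args) {
        String a = new String("key");
        String b = new String("key"); // equals(a), but a different object

        Map<String, Integer> regular = new HashMap<>();
        regular.put(a, 1);
        regular.put(b, 2);
        System.out.println(regular.size()); // 1 - keys compared with equals()

        Map<String, Integer> identity = new IdentityHashMap<>();
        identity.put(a, 1);
        identity.put(b, 2);
        System.out.println(identity.size()); // 2 - keys compared with ==
    }
}
```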

What is the difference between a `Queue` and a `Deque`?

- `Queue`: A FIFO (First-In, First-Out) data structure. Elements are added at the tail and removed from the head.
- `Deque` (Double-Ended Queue): A more general data structure that supports element insertion and removal at both ends. It can be used as a queue (FIFO) or as a stack (LIFO).

Explain the methods of the `Queue` interface.

The `Queue` interface offers two sets of methods for its core operations, which differ in how they handle failure:

Operation | Throws Exception | Returns Special Value |
---|---|---|
Insert | `add(e)` | `offer(e)` (returns `false`) |
Remove | `remove()` | `poll()` (returns `null`) |
Examine | `element()` | `peek()` (returns `null`) |

Generally, the methods that return a special value (`offer`, `poll`, `peek`) are preferred for programmatic handling of failures, especially for bounded (fixed-capacity) queues.
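
A short sketch of the special-value methods on a bounded queue (the capacity and values are illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class QueueMethodsDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayBlockingQueue<>(1); // bounded: capacity 1

        System.out.println(queue.offer("a")); // true  - inserted
        System.out.println(queue.offer("b")); // false - queue is full, no exception
        System.out.println(queue.peek());     // "a"   - examine without removing
        System.out.println(queue.poll());     // "a"   - remove the head
        System.out.println(queue.poll());     // null  - queue is empty, no exception
    }
}
```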

What is a `PriorityQueue`? How is the priority determined?

A `PriorityQueue` is a special queue where elements are ordered based on their priority. The element with the highest priority (by default, the "smallest" element) is always at the head of the queue. The ordering is determined by:

- The natural ordering of the elements (if they implement `Comparable`).
- A `Comparator` provided to the `PriorityQueue`'s constructor.

It is implemented as a binary heap and does not permit `null` elements.
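
A brief sketch with a custom `Comparator` (the `Task` record and its values are illustrative; records require Java 16+):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    // Illustrative record: a lower 'priority' value means more urgent
    record Task(String name, int priority) {}

    public static void main(String[] args) {
        PriorityQueue<Task> tasks =
                new PriorityQueue<>(Comparator.comparingInt(Task::priority));

        tasks.offer(new Task("write report", 3));
        tasks.offer(new Task("fix outage", 1));
        tasks.offer(new Task("reply to email", 2));

        // poll() always returns the head: the task with the smallest priority value
        while (!tasks.isEmpty()) {
            System.out.println(tasks.poll().name()); // fix outage, reply to email, write report
        }
    }
}
```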

What is a `BlockingQueue`?

A `BlockingQueue` is a special `Queue` used in concurrent programming. It provides blocking operations:

- `put(e)`: Blocks until space is available in the queue to add the element.
- `take()`: Blocks until an element is available in the queue to be removed.

This makes it ideal for producer-consumer scenarios, where one or more threads produce items and other threads consume them. `ArrayBlockingQueue` and `LinkedBlockingQueue` are common implementations.
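
A compact producer-consumer sketch using `ArrayBlockingQueue` (the capacity and item count are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks if empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```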

What is an `ArrayDeque`? Is it thread-safe?

`ArrayDeque` is a resizable-array implementation of the `Deque` interface. It is highly efficient for both stack and queue operations. It is not thread-safe; for a concurrent deque, you would use `ConcurrentLinkedDeque`.

What is a Java Stream?

A Java Stream is a sequence of elements from a source that supports aggregate operations. It is not a data structure; it's a way to process data from a source (like a `Collection`) in a declarative, functional style.

How can you create a stream (e.g., from a collection, an array, a file, or a generator like `Stream.iterate`)?

You can create a stream from various sources:

- From a collection: `myList.stream()`
- From an array or values: `Arrays.stream(myArray)` or `Stream.of(val1, val2)`
- From a file: `Files.lines(Paths.get("file.txt"))`
- From a string: `myString.chars()`
- From a generator: `Stream.iterate(0, n -> n + 2)` or `Stream.generate(Math::random)`

What is the difference between intermediate and terminal operations?

- Intermediate operations return a new stream and are evaluated lazily, e.g. `map`, `filter`, `sorted`, `distinct`.
- Terminal operations produce a result or a side effect and trigger processing of the pipeline, e.g. `forEach`, `collect`, `reduce`, `count`, `findFirst`.

A stream pipeline consists of a source, zero or more intermediate operations, and one terminal operation.

Explain the `map` and `filter` operations.

- `filter(Predicate<T>)`: An intermediate operation that takes a predicate (a function returning a boolean) and returns a new stream containing only the elements that match the predicate.
- `map(Function<T, R>)`: An intermediate operation that takes a function and applies it to each element of the stream, returning a new stream of the transformed elements.

```java
// Find the names of employees older than 30
employees.stream()
    .filter(e -> e.getAge() > 30)   // Intermediate: filter
    .map(Employee::getName)         // Intermediate: map
    .forEach(System.out::println);  // Terminal
```

What is the difference between `map` and `flatMap`?

- `map`: Transforms each element one-to-one: `Stream<T>` -> `Stream<R>`.
- `flatMap`: Transforms each element one-to-many and "flattens" the result. It is used when the mapping function returns a stream for each element: `Stream<T>` -> `Stream<Stream<R>>` -> `Stream<R>`.

Example: given a list of words, get a list of all unique characters.

```java
List<String> words = List.of("Hello", "World");
List<String> uniqueChars = words.stream()
    .map(word -> word.split(""))   // Returns Stream<String[]>
    .flatMap(Arrays::stream)       // Flattens Stream<String[]> to Stream<String>
    .distinct()
    .collect(Collectors.toList()); // ["H", "e", "l", "o", "W", "r", "d"]
```

What are short-circuiting operations? Give examples.

Short-circuiting operations may not need to process the entire stream to produce a result.

- Intermediate: `limit(n)`
- Terminal: `anyMatch`, `allMatch`, `noneMatch`, `findFirst`, `findAny`

For example, `findFirst()` will stop processing as soon as it finds an element.
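
A tiny sketch: on an infinite stream, only short-circuiting operations can terminate (the generator and values are illustrative):

```java
import java.util.stream.Stream;

public class ShortCircuitDemo {
    public static void main(String[] args) {
        // Infinite stream of even numbers: 0, 2, 4, ...
        boolean found = Stream.iterate(0, n -> n + 2)
                .anyMatch(n -> n > 100); // stops as soon as 102 is seen
        System.out.println(found); // true

        Stream.iterate(0, n -> n + 2)
                .limit(5)                      // short-circuiting intermediate op
                .forEach(System.out::println); // 0 2 4 6 8
    }
}
```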

Explain the `reduce` operation.

The `reduce` operation is a terminal operation that combines all elements of a stream into a single result. It takes an identity value and a binary operator.

```java
List<Integer> numbers = List.of(1, 2, 3, 4, 5);

// Sum of all numbers
int sum = numbers.stream()
    .reduce(0, (a, b) -> a + b); // or .reduce(0, Integer::sum);
```

What is the `collect` operation and what is the `Collector` interface?

`collect` is a terminal operation that transforms a stream into a different data structure, typically a `Collection` or `Map`. It takes a `Collector` as an argument.

The `Collector` interface describes how to accumulate stream elements into a mutable result container. The `Collectors` utility class provides many common implementations.

Explain `toList`, `toSet`, and `toMap`.

- `Collectors.toList()`: Collects stream elements into a `List`.
- `Collectors.toSet()`: Collects stream elements into a `Set`.
- `Collectors.toMap(keyMapper, valueMapper)`: Collects stream elements into a `Map`. You must provide functions to extract the key and value from each element. If duplicate keys are possible, you must provide a third argument, a merge function, to resolve collisions.
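
A short sketch of `toMap` with a merge function to handle duplicate keys (the word list and merge rule are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapDemo {
    public static void main(String[] args) {
        List<String> words = List.of("apple", "avocado", "banana");

        // Key: first letter; value: the word; merge: keep the longer word on key collision
        Map<Character, String> byFirstLetter = words.stream()
            .collect(Collectors.toMap(
                w -> w.charAt(0),                              // key mapper
                Function.identity(),                           // value mapper
                (a, b) -> a.length() >= b.length() ? a : b));  // merge function

        System.out.println(byFirstLetter); // e.g. {a=avocado, b=banana}
    }
}
```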

Explain the `groupingBy` collector.

`Collectors.groupingBy` is a powerful collector used to group elements of a stream based on a classification function and store the results in a `Map`.

```java
// Group employees by their department
Map<Department, List<Employee>> employeesByDept = employees.stream()
    .collect(Collectors.groupingBy(Employee::getDepartment));
```

You can also create nested groups by passing a downstream collector.

What is the difference between `groupingBy` and `partitioningBy`?

- `groupingBy(classifier)`: Groups elements based on the result of the classifier function. The keys of the resulting `Map` can be of any type.
- `partitioningBy(predicate)`: A special case of `groupingBy` that takes a predicate. It partitions the elements into a `Map<Boolean, List<T>>`. The key is always a boolean: `true` for elements that match the predicate and `false` for those that don't.
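
A quick sketch of `partitioningBy` (the number list is illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitioningByDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6);

        // Split into exactly two groups keyed by a boolean: even vs. odd
        Map<Boolean, List<Integer>> byParity = numbers.stream()
            .collect(Collectors.partitioningBy(n -> n % 2 == 0));

        System.out.println(byParity.get(true));  // [2, 4, 6]
        System.out.println(byParity.get(false)); // [1, 3, 5]
    }
}
```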

What is a parallel stream? What are its potential pitfalls?

A parallel stream is a stream that is processed by multiple threads simultaneously. You can create one by calling `myCollection.parallelStream()`.

Potential pitfalls:

- Operations that depend on encounter order (such as `findFirst`) may be slower on parallel streams.
- Parallel streams share the common `ForkJoinPool`; long-running or blocking tasks can tie up its threads, impacting the entire application.

Parallel streams are best suited for CPU-intensive operations on large datasets where the operations are independent and stateless.
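
A minimal sketch comparing the two entry points (the data set is illustrative; measuring real speedups requires proper benchmarking):

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelStreamDemo {
    public static void main(String[] args) {
        List<Integer> numbers = IntStream.rangeClosed(1, 1_000_000).boxed().toList();

        long sequentialSum = numbers.stream().mapToLong(Integer::longValue).sum();
        long parallelSum   = numbers.parallelStream().mapToLong(Integer::longValue).sum();

        // Same result; the parallel version splits the work across ForkJoinPool threads
        System.out.println(sequentialSum == parallelSum); // true
    }
}
```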

What is the purpose of the `Optional` class?

`Optional<T>` is a container object that may or may not contain a non-null value. Its primary purpose is to provide a better way to handle `null` values, avoiding `NullPointerException`s and allowing for the design of more expressive, readable APIs where a return value might be absent.

How do you create an `Optional` instance?

- `Optional.of(value)`: Creates an `Optional` with a non-null value. Throws `NullPointerException` if the value is `null`.
- `Optional.ofNullable(value)`: Wraps the given value if it's non-null, or returns an empty `Optional` if the value is `null`. This is the safest way to create an `Optional` from a value that might be `null`.
- `Optional.empty()`: Returns a static, empty `Optional` instance.

What is the difference between `get()` and `orElse()`?

- `get()`: Returns the contained value if present, otherwise throws a `NoSuchElementException`. It should be used only when you are absolutely sure the `Optional` is not empty, often after an `isPresent()` check.
- `orElse(defaultValue)`: Returns the contained value if present, otherwise returns the provided `defaultValue`. This is a much safer way to get a value from an `Optional`.

Explain `orElse()`, `orElseGet()`, and `orElseThrow()`.

- `orElse(T other)`: Returns the value if present, otherwise returns `other`. The `other` argument is always evaluated, even if the `Optional` is not empty.
- `orElseGet(Supplier<? extends T> other)`: Returns the value if present, otherwise returns the result of invoking the `Supplier`. The `Supplier` is only invoked if the `Optional` is empty, which is more efficient when the default object is expensive to create.
- `orElseThrow(Supplier<? extends X> exceptionSupplier)`: Returns the value if present, otherwise throws an exception created by the provided `Supplier`. This is the preferred way to signal that a value is required.
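
A small sketch of the evaluation difference (the `expensiveDefault` method is illustrative):

```java
import java.util.Optional;

public class OrElseDemo {
    static String expensiveDefault() {
        System.out.println("building default...");
        return "default";
    }

    public static void main(String[] args) {
        Optional<String> present = Optional.of("value");

        // orElse: expensiveDefault() runs even though the Optional is non-empty
        System.out.println(present.orElse(expensiveDefault()));

        // orElseGet: the supplier is NOT invoked because a value is present
        System.out.println(present.orElseGet(OrElseDemo::expensiveDefault));

        // orElseThrow: throws only when the Optional is empty
        Optional<String> empty = Optional.empty();
        try {
            empty.orElseThrow(() -> new IllegalStateException("value required"));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```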

How do `map`, `flatMap`, and `filter` work with `Optional`?

These methods allow you to perform functional-style, chained operations on `Optional` values without explicit `isPresent()` checks.

- `map(Function<T, R>)`: If a value is present, applies the mapping function to it. If the function result is not `null`, returns an `Optional` describing the result; otherwise returns an empty `Optional`.
- `flatMap(Function<T, Optional<R>>)`: Similar to `map`, but the mapping function must return an `Optional`. This is useful when the mapping function itself might return an empty result.
- `filter(Predicate<T>)`: If a value is present and it matches the given predicate, returns an `Optional` describing the value; otherwise returns an empty `Optional`.

```java
// Chaining operations on an Optional
String result = userOptional
    .filter(u -> u.getAge() > 18)
    .flatMap(User::getAddress)
    .map(Address::getStreet)
    .orElse("Street not found");
```

Why is it a bad practice to use `Optional` as a field in a class or as a method parameter?

- As a field: it merely shifts the `null` check from the field itself to the `Optional` wrapper. It's better to use `Optional.ofNullable(field)` in the getter method.
- As a method parameter: it forces callers to wrap their arguments in an `Optional`, adding unnecessary boilerplate. It's better to use method overloading to provide a version of the method for when the value is absent.

`Optional` is primarily intended as a return type for methods that might not find a value.

What is the difference between fail-fast and fail-safe iterators?

- Fail-fast: These iterators throw a `ConcurrentModificationException` if the underlying collection is structurally modified (e.g., by adding or removing elements) in any way other than by the iterator's own `remove()` method. They work on the original collection. Examples: the iterators of `ArrayList` and `HashMap`.
- Fail-safe: These iterators do not throw an exception if the collection is modified. They work on a clone or a snapshot of the collection, so they are generally slower and consume more memory. Examples: the iterators of `ConcurrentHashMap` and `CopyOnWriteArrayList`.
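
A minimal sketch of the fail-fast behavior and a safe alternative:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "b", "c"));

        try {
            for (String s : list) {      // uses the list's fail-fast iterator
                if (s.equals("a")) {
                    list.remove(s);      // structural modification outside the iterator
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast iterator detected the modification");
        }

        // Safe alternatives: Iterator.remove() or a bulk method such as removeIf()
        list.removeIf(s -> s.equals("b"));
        System.out.println(list); // [c]
    }
}
```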

What is the difference between `Comparable` and `Comparator`?

- `Comparable`: An interface used to define the natural ordering of a class. The class itself must implement `Comparable` and its `compareTo()` method. Example: `String` implements `Comparable`.
- `Comparator`: An interface used to define a custom or external ordering. A separate class implements `Comparator`, or you can use a lambda. This allows you to define multiple different orderings for a class.

You can sort a list of `Person` objects by their natural order (e.g., by ID) using `Comparable`, and also provide a `Comparator` to sort them by name or age.
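
A brief sketch of both approaches, assuming an illustrative `Person` type (written as a record, Java 16+):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortingDemo {
    // Illustrative type with a natural ordering by id
    record Person(int id, String name) implements Comparable<Person> {
        @Override public int compareTo(Person other) {
            return Integer.compare(this.id, other.id);
        }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(List.of(
                new Person(2, "Bob"), new Person(1, "Alice")));

        people.sort(Comparator.naturalOrder());          // natural order (Comparable): by id
        System.out.println(people);

        people.sort(Comparator.comparing(Person::name)); // external order (Comparator): by name
        System.out.println(people);
    }
}
```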

What are unmodifiable collections? How do you create them?

Unmodifiable collections are wrappers around existing collections that prevent any modification. Any attempt to modify them (e.g., `add`, `remove`, `set`) will throw an `UnsupportedOperationException`. They are useful for creating read-only views of data.

- Factory methods (the `of` methods, Java 9+): the easiest way. These create truly immutable collections.

```java
List<String> immutableList = List.of("a", "b", "c");
```

- The `Collections.unmodifiable...` methods: these create a wrapper around an existing collection.

```java
List<String> modifiableList = new ArrayList<>();
List<String> unmodifiableView = Collections.unmodifiableList(modifiableList);
```

Important: if the original `modifiableList` is changed, the `unmodifiableView` will also reflect that change.

What are synchronized collections? Why are they generally not recommended?

Synchronized collections are collections wrapped using methods like `Collections.synchronizedList()` or `Collections.synchronizedMap()`. They achieve thread safety by synchronizing every method.

They are generally not recommended in modern concurrent programming because every method contends on a single lock, which limits scalability, and because iteration and other compound operations still require external synchronization by the caller. Modern concurrent collections like `ConcurrentHashMap` and `CopyOnWriteArrayList` offer much better performance and safer semantics.
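
A small sketch showing the wrapper and the iteration caveat:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListDemo {
    public static void main(String[] args) {
        List<String> syncList = Collections.synchronizedList(new ArrayList<>());
        syncList.add("a"); // individual calls are synchronized internally
        syncList.add("b");

        // Iteration is NOT atomic: the caller must hold the list's lock manually
        synchronized (syncList) {
            for (String s : syncList) {
                System.out.println(s);
            }
        }
    }
}
```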

What are Sequenced Collections (Java 21)?

Sequenced Collections is a feature in Java 21 that introduces a set of new interfaces providing a unified, well-defined API for collections that have a specific encounter order.

- `SequencedCollection<E>`: The main interface, providing methods like `getFirst()`, `getLast()`, `addFirst()`, `addLast()`, and `reversed()`.
- `SequencedSet<E>`: A `Set` that is also a `SequencedCollection` (e.g., `LinkedHashSet`).
- `SequencedMap<K, V>`: A `Map` with a well-defined encounter order, offering analogous methods such as `firstEntry()`, `lastEntry()`, and `reversed()` (e.g., `LinkedHashMap`).

This feature standardizes access to first/last elements and provides a simple way to get a reversed view of a collection.
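
A quick sketch of the new methods (requires Java 21+):

```java
import java.util.ArrayList;
import java.util.List;

public class SequencedDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("b", "c"));

        list.addFirst("a");                  // SequencedCollection method (Java 21+)
        System.out.println(list.getFirst()); // "a"
        System.out.println(list.getLast());  // "c"
        System.out.println(list.reversed()); // [c, b, a] - a reversed view
    }
}
```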

You need a collection of unique `Employee` objects kept in a custom sort order (e.g., by name). What would you use?

I would use a `TreeSet`. To implement custom sorting, I would pass a `Comparator` to the `TreeSet`'s constructor.

```java
// Assuming an Employee class with getName()
Comparator<Employee> byName = Comparator.comparing(Employee::getName);
Set<Employee> sortedEmployees = new TreeSet<>(byName);
sortedEmployees.add(new Employee("Bob"));
sortedEmployees.add(new Employee("Alice"));
// The set will now contain [Alice, Bob]
```

How would you implement an LRU (Least Recently Used) cache?

I would use a `LinkedHashMap`. It can be configured to maintain access order instead of insertion order. By overriding the `removeEldestEntry()` method, it can automatically remove the least recently used entry when the map exceeds a certain size.

```java
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // true for access-order, false for insertion-order
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

How would you count the frequency of each word in a very large text file?

I would use a hash map to store the words and their counts; for a very large file, the Stream API with `Files.lines` is efficient. Note that `Collectors.counting()` produces a `Map<String, Long>`.

```java
// Assuming 'path' is a Path object to the file
Map<String, Long> wordCounts;
try (Stream<String> lines = Files.lines(path)) {
    wordCounts = lines
        .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+"))) // Split into words
        .filter(word -> !word.isEmpty())
        .collect(Collectors.groupingBy(
            Function.identity(),   // Group by the word itself
            Collectors.counting()  // Count occurrences
        ));
}
// wordCounts now holds the frequency of each word.
```

You have a list of `Transaction` objects. How would you find the total value of all transactions for a specific city using the Stream API?

I would use `filter` to select transactions for the target city, `mapToLong` (or `mapToInt`) to extract the transaction value, and `sum` to calculate the total.

```java
// Assuming Transaction has getCity() and getValue() methods
public long getTotalValueForCity(List<Transaction> transactions, String city) {
    return transactions.stream()
        .filter(t -> city.equals(t.getCity())) // Find transactions for the city
        .mapToLong(Transaction::getValue)      // Get the value of each transaction
        .sum();                                // Sum them up
}
```

You have a `List<Employee>`. You need to get a `Map<Department, Set<Employee>>` where employees are grouped by their department. How would you do it?

I would use the `Collectors.groupingBy` collector. To get a `Set` as the value in the map (which handles potential duplicate employees within a department), I would provide a downstream collector, `Collectors.toSet()`.

```java
Map<Department, Set<Employee>> employeesByDept = employees.stream()
    .collect(Collectors.groupingBy(
        Employee::getDepartment, // Classifier function
        Collectors.toSet()       // Downstream collector
    ));
```

Write a method that takes a `List<Optional<String>>` and returns a `List<String>` containing all present `String` values.

I would stream the list, `filter` for present `Optional`s, and then `map` to get the value from each. In Java 9+, `Optional::stream` provides a more elegant solution.

```java
// Java 9+ solution
public List<String> getPresentValues(List<Optional<String>> optionals) {
    return optionals.stream()
        .flatMap(Optional::stream) // Converts Optional<T> to Stream<T> (or an empty stream)
        .collect(Collectors.toList());
}

// Java 8 solution
public List<String> getPresentValuesJava8(List<Optional<String>> optionals) {
    return optionals.stream()
        .filter(Optional::isPresent)
        .map(Optional::get)
        .collect(Collectors.toList());
}
```

You have a stream of sensor data. How would you efficiently find the first `SensorReading` whose value exceeds a threshold?

I would use the `findFirst()` short-circuiting terminal operation combined with a `filter`.

```java
Optional<SensorReading> highReading = sensorDataStream
    .filter(reading -> reading.getValue() > THRESHOLD)
    .findFirst();

highReading.ifPresent(r -> System.out.println("First high reading found: " + r));
```

This is efficient because the stream pipeline will stop as soon as the first matching element is found.