Topic: General Level: All
Welcome to the world of cutting-edge technology! Every two weeks, we bring you the latest and most incredible advancements in the tech industry that are sure to leave you feeling inspired and empowered.
Stay ahead of the game and be the first to know about the newest innovations shaping our world. Discover new ways to improve your daily life, become more efficient, and enjoy new experiences.
This time, we've got some exciting news to share with you!
Modelling the common behavior shared by the List and Set interfaces had previously been provided only partially, for example by LinkedHashSet.
Now, from JDK 21, the new SequencedCollection interface extends the Collection interface and is in turn extended by List, Deque, and, via SequencedSet (which adds the reversal operation), SortedSet.
The SequencedMap interface extends the Map interface with the following methods (see the sketch after this list),
1. sequencedKeySet()
2. sequencedEntrySet()
3. sequencedValues()
4. reversed()
5. putLast(k, v)
6. putFirst(k, v)
7. firstEntry()
8. lastEntry()
9. pollFirstEntry()
10. pollLastEntry()
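A minimal sketch of these SequencedMap methods on a LinkedHashMap (which now implements SequencedMap); the keys and values are purely illustrative.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.SequencedMap;

    public class SequencedMapDemo {
        public static void main(String[] args) {
            SequencedMap<String, Integer> scores = new LinkedHashMap<>();
            scores.put("alice", 10);
            scores.put("bob", 20);

            scores.putFirst("carol", 5);     // inserts at the front of the encounter order
            scores.putLast("dave", 30);      // inserts at the end

            Map.Entry<String, Integer> first = scores.firstEntry();   // carol=5
            Map.Entry<String, Integer> last = scores.lastEntry();     // dave=30
            System.out.println(first + " .. " + last);

            System.out.println(scores.sequencedKeySet());   // [carol, alice, bob, dave]
            System.out.println(scores.reversed());          // view in reverse encounter order

            scores.pollFirstEntry();         // removes and returns carol=5
            scores.pollLastEntry();          // removes and returns dave=30
        }
    }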
The SequencedCollection interface offers the following methods (see the sketch after this list),
1. getFirst()
2. getLast()
3. removeFirst()
4. removeLast()
5. addFirst(e)
6. addLast(e)
7. reversed()
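A quick sketch of these methods on an ArrayList (every List is a SequencedCollection as of JDK 21); the element values are illustrative only.

    import java.util.ArrayList;
    import java.util.List;

    public class SequencedCollectionDemo {
        public static void main(String[] args) {
            List<String> cities = new ArrayList<>(List.of("Paris", "Tokyo", "Lima"));

            cities.addFirst("Oslo");                 // [Oslo, Paris, Tokyo, Lima]
            cities.addLast("Cairo");                 // [Oslo, Paris, Tokyo, Lima, Cairo]

            System.out.println(cities.getFirst());   // Oslo
            System.out.println(cities.getLast());    // Cairo
            System.out.println(cities.reversed());   // [Cairo, Lima, Tokyo, Paris, Oslo]

            cities.removeFirst();                    // drops Oslo
            cities.removeLast();                     // drops Cairo
        }
    }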
These additions address the shortcomings of the existing collection interfaces,
1. Accessing the last element of a List
2. Adding to/removing from the first and last positions of a Set
3. Reversing a collection's elements; reversed() on a SequencedSet returns a SequencedSet view. Further, LinkedHashSet now implements SequencedSet
Additionally, talking about immutability, the following wrapper methods have been introduced in the java.util.Collections utility class,
1. unmodifiableSequencedCollection()
2. unmodifiableSequencedSet()
3. unmodifiableSequencedMap()
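A small sketch of one of these wrappers, using a LinkedHashSet (which now implements SequencedSet); the tag values are made up.

    import java.util.Collections;
    import java.util.LinkedHashSet;
    import java.util.SequencedSet;

    public class UnmodifiableSequencedDemo {
        public static void main(String[] args) {
            SequencedSet<String> tags = new LinkedHashSet<>();
            tags.add("java");
            tags.add("jdk21");

            // Read-only view that still preserves the encounter order
            SequencedSet<String> readOnly = Collections.unmodifiableSequencedSet(tags);
            System.out.println(readOnly.getFirst());    // java

            // readOnly.addLast("collections");         // would throw UnsupportedOperationException
        }
    }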
Fetching paginated data by limiting the rows returned with setFirstResult() and setMaxResults() on a query that combines multiple entities (via an entity association, a JOIN FETCH clause, or an EntityGraph) brings back the complete result set: the limits are never applied to the SQL statement but only later, in memory, and even if they were applied at the database level the result would be partial, truncating the associated join-fetched entries.
In other words, when a JOIN FETCH query is combined with setFirstResult() and setMaxResults(), all the data is fetched and the limits (pagination) are applied in memory during the getResultList() call, exposing the application to memory and performance bottlenecks.
Hibernate also logs the warning, HHH000104: firstResult/maxResults specified with collection fetch; applying in memory!
Alternatively, this can be avoided by splitting the query into two parts along the entity association: apply the pagination limits via setFirstResult() and setMaxResults() on the first query and pass its results into the second, associated query, as sketched below. This prevents fetching the entire result set and paginating in memory, resolves the warning, and improves the application's performance when working with a huge database.
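A hedged sketch of that two-query approach, assuming hypothetical Post and PostComment entities with a createdOn field and an injected JPA EntityManager; the names are made up for illustration.

    import jakarta.persistence.EntityManager;
    import java.util.List;

    public class PostPaginationDao {

        private final EntityManager entityManager;

        public PostPaginationDao(EntityManager entityManager) {
            this.entityManager = entityManager;
        }

        public List<Post> findPageWithComments(int page, int pageSize) {
            // 1st query: paginate over the parent identifiers only (no collection
            // fetch), so the limits are applied by the database, not in memory.
            List<Long> postIds = entityManager.createQuery(
                    "select p.id from Post p order by p.createdOn desc", Long.class)
                .setFirstResult(page * pageSize)
                .setMaxResults(pageSize)
                .getResultList();

            // 2nd query: JOIN FETCH the association only for that page of identifiers.
            return entityManager.createQuery(
                    "select distinct p from Post p join fetch p.comments " +
                    "where p.id in :ids order by p.createdOn desc", Post.class)
                .setParameter("ids", postIds)
                .getResultList();
        }
    }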
Shared mutable state is subject to inconsistent updates from the interleaved operations of multiple threads in a thread pool, leading to unpredictable results.
Conditions in which atomic operations are favorable versus situations where synchronization comes to the rescue (a short sketch follows this list),
1. Unifying the operations on a variable (check-then-act or fetch-then-update are multi-step sequences that create a window for thread interleaving) with the classes from the java.util.concurrent.atomic package treats the sequence as a single unit.
2. Interleaving outside the atomic operations still produces inconsistent results; the complete sequence of operations must be executed as one unit so that interleaving threads cannot change the data state mid-sequence.
3. Establishing a class-level lock (on static synchronized methods) or an object-level lock (on non-static synchronized methods) lets one thread run the operations as a single unit of work, while the other contending threads block until the held lock is released and they are allowed to run the methods.
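A brief sketch contrasting the two approaches; the counter class and its methods are hypothetical.

    import java.util.concurrent.atomic.AtomicInteger;

    public class Counters {

        // Atomic variable: the fetch-then-update sequence is collapsed into a
        // single atomic operation, so no explicit lock is needed.
        private final AtomicInteger atomicCount = new AtomicInteger();

        public int incrementAtomically() {
            return atomicCount.incrementAndGet();   // read + add + write as one unit
        }

        // Synchronized method: the object-level lock turns the whole multi-step
        // sequence into a single unit of work; other threads block until it is released.
        private int plainCount = 0;

        public synchronized int incrementWithLock() {
            int next = plainCount + 1;   // fetch
            plainCount = next;           // then update
            return next;
        }
    }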
Java provides the below classes for working with dates and times,
1. java.util.Date and java.util.Calendar
2. java.sql.Date, java.sql.Time, and java.sql.Timestamp
3. java.time.LocalDate, java.time.LocalTime, java.time.LocalDateTime, java.time.OffsetTime, java.time.OffsetDateTime, java.time.ZonedDateTime, java.time.Duration, and java.time.Instant
Interoperability between the database's DATE, TIME, and TIMESTAMP data types and the above Java types can be achieved through the JPA specification and the Hibernate entity definitions, respectively in the order listed (an entity sketch follows this list),
1. The @Temporal annotation with a TemporalType such as DATE (for java.util.Calendar) and TIMESTAMP (for java.util.Date)
2. java.sql.Date, java.sql.Time, and java.sql.Timestamp map directly to DATE, TIME, and TIMESTAMP columns
3. For the java.time package: LocalDate - DATE, LocalTime - TIME, LocalDateTime - TIMESTAMP, OffsetTime - TIME_WITH_TIMEZONE, OffsetDateTime - TIMESTAMP_WITH_TIMEZONE, Duration - BIGINT, and Instant and ZonedDateTime - TIMESTAMP (the last three java.time mappings are provided only by Hibernate, not by JPA)
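A hedged sketch of these mappings on a hypothetical Booking entity; the column types shown in the comments assume a typical relational schema.

    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import jakarta.persistence.Temporal;
    import jakarta.persistence.TemporalType;
    import java.time.LocalDate;
    import java.time.LocalDateTime;
    import java.time.OffsetDateTime;
    import java.util.Date;

    @Entity
    public class Booking {

        @Id
        private Long id;

        @Temporal(TemporalType.TIMESTAMP)      // legacy java.util type needs @Temporal
        private Date legacyCreatedAt;          // -> TIMESTAMP

        private java.sql.Date bookingSqlDate;  // -> DATE (maps directly)

        private LocalDate checkIn;             // -> DATE
        private LocalDateTime createdAt;       // -> TIMESTAMP
        private OffsetDateTime confirmedAt;    // -> TIMESTAMP_WITH_TIMEZONE
    }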
The exception is ZonedDateTime: Hibernate converts the value into the local time zone of the JVM before storing it in the database, and when it reads the TIMESTAMP back it attaches the local time zone information to it, which can create inconsistencies between the value persisted and the value read.
This issue can be avoided by setting the hibernate.jdbc.time_zone property in persistence.xml.
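The same property can also be passed programmatically when bootstrapping the persistence unit; the unit name below is a placeholder.

    import jakarta.persistence.EntityManagerFactory;
    import jakarta.persistence.Persistence;
    import java.util.Map;

    public class TimeZoneConfig {
        public static void main(String[] args) {
            // Store and read TIMESTAMP values in UTC regardless of the JVM's default zone
            EntityManagerFactory emf = Persistence.createEntityManagerFactory(
                    "my-persistence-unit",                       // placeholder unit name
                    Map.of("hibernate.jdbc.time_zone", "UTC"));
        }
    }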
The Spring Data Query By Example feature is meant to offer a way to decouple the data filtering logic from the query processing engine so that you can allow the data access layer clients to define the filtering criteria using a generic API that doesn’t depend on the JPA Criteria API.
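A hedged Spring Data JPA sketch of Query By Example, assuming a hypothetical Customer entity and a CustomerRepository that extends JpaRepository (and therefore QueryByExampleExecutor); all names are made up for illustration.

    import java.util.List;
    import org.springframework.data.domain.Example;
    import org.springframework.data.domain.ExampleMatcher;

    public class CustomerSearchService {

        private final CustomerRepository customerRepository;   // extends JpaRepository<Customer, Long>

        public CustomerSearchService(CustomerRepository customerRepository) {
            this.customerRepository = customerRepository;
        }

        public List<Customer> findSimilar(String lastName, String city) {
            // The probe is a populated entity instance describing the filter;
            // the caller never touches the JPA Criteria API.
            Customer probe = new Customer();
            probe.setLastName(lastName);
            probe.setCity(city);

            ExampleMatcher matcher = ExampleMatcher.matching()
                    .withIgnoreCase()
                    .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING);

            return customerRepository.findAll(Example.of(probe, matcher));
        }
    }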
Until the transaction commits, the row-level lock acquired for an UPDATE or DELETE is held by the initiating transaction, so a new transaction that tries to act on the same row is blocked until the lock-holding transaction commits.
This can be worked around with a NOWAIT clause, which throws an error immediately when the transaction requesting the row UPDATE/DELETE cannot acquire the lock still held by another transaction.
1. Oracle - FOR UPDATE NOWAIT
2. SQL Server - WITH (UPDLOCK,HOLDLOCK,ROWLOCK,NOWAIT)
3. PostgreSQL - FOR NO KEY UPDATE NOWAIT
4. MySQL - FOR UPDATE NOWAIT
Instead of using a DB-specific native query with the above NOWAIT clause, we can use LockOptions.NO_WAIT with JPA and Hibernate while fetching the entity, as sketched below.
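A hedged sketch, assuming a JPA EntityManager and a hypothetical Account entity; LockOptions.NO_WAIT comes from org.hibernate.LockOptions, and older Java EE setups use the javax.persistence.lock.timeout hint instead of the jakarta one.

    import jakarta.persistence.EntityManager;
    import jakarta.persistence.LockModeType;
    import java.util.Map;
    import org.hibernate.LockOptions;

    public class AccountLockService {

        private final EntityManager entityManager;

        public AccountLockService(EntityManager entityManager) {
            this.entityManager = entityManager;
        }

        public Account lockForUpdate(Long accountId) {
            // PESSIMISTIC_WRITE issues a SELECT ... FOR UPDATE; a lock timeout of
            // NO_WAIT (0) makes the database fail immediately instead of blocking.
            Map<String, Object> hints =
                    Map.of("jakarta.persistence.lock.timeout", LockOptions.NO_WAIT);
            return entityManager.find(
                    Account.class, accountId, LockModeType.PESSIMISTIC_WRITE, hints);
        }
    }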
Working with Date and time for applications that are distributed across the world
Distributed applications with REST API calls using date and time
How Arrays behave in the JVM with respect to Objects
Method Handles for Java Reflection
Enhanced application startup by appropriately packaging cloud native Java applications
JSON Processing with the JSON-P API in Jakarta EE 10
Extending the Java 8 Streams API
Disclaimer:
This is a personal blog. Any views or opinions represented in this blog are personal and belong solely to the blog owner and do not represent those of people, institutions or organizations that the owner may or may not be associated with in a professional or personal capacity, unless explicitly stated. Any views or opinions are not intended to malign any religion, ethnic group, club, organization, company, or individual. All content provided on this blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The owner will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information.
Downloadable Files and Images: Any downloadable file, including but not limited to pdfs, docs, jpegs, pngs, is provided at the user’s own risk. The owner will not be liable for any losses, injuries, or damages resulting from a corrupted or damaged file.
Comments are welcome. However, the blog owner reserves the right to edit or delete any comments submitted to this blog without notice due to:
- Comments deemed to be spam or questionable spam.
- Comments including profanity.
- Comments containing language or concepts that could be deemed offensive.
- Comments containing hate speech, credible threats, or direct attacks on an individual or group.
The blog owner is not responsible for the content in the comments. This blog disclaimer is subject to change at any time.