One boring Saturday I wanted to learn more about Java agents and thought it would be a really cool idea to “unfinalize” java.lang.String.
I started working on the project and developed a simple transformer:
if (name.equals("java/lang/String")) {
    ClassPool classPool = ClassPool.getDefault();
    final CtClass ctClass;
    try {
        ctClass = classPool.get(name.replace("/", "."));
    } catch (NotFoundException e) {
        throw new RuntimeException(e);
    }
    int modifiers = ctClass.getModifiers();
    if (Modifier.isFinal(modifiers)) {
        System.out.println(name + " modifiers: " + modifiers);
        // the snippet was cut off here; one plausible completion is to clear the
        // FINAL flag and return the rewritten bytecode to the JVM
        ctClass.setModifiers(Modifier.clear(modifiers, Modifier.FINAL));
        try {
            return ctClass.toBytecode();
        } catch (IOException | CannotCompileException e) {
            throw new RuntimeException(e);
        }
    }
}
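For context, a transform like this lives in a ClassFileTransformer that the agent registers in its premain entry point. A minimal sketch of that wiring, with illustrative class names rather than whatever the project actually uses:

import java.lang.instrument.Instrumentation;

public class UnfinalizeAgent {

    // The JVM calls premain before main when started with -javaagent:unfinalize-agent.jar
    public static void premain(String agentArgs, Instrumentation inst) {
        // Register the transformer; "true" allows it to take part in retransformation
        inst.addTransformer(new StringUnfinalizeTransformer(), true);
    }
}

The agent jar also needs Premain-Class (and Can-Retransform-Classes: true, if already-loaded classes are to be retransformed) set in its manifest.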
A Java project with a published container image containing intentionally leaky native code, meant for observing the symptoms of a native memory leak in Java under podman/docker or Kubernetes.
The native code intentionally “leaks” a provided number of megabytes in a loop. By default, the project runs with -XX:NativeMemoryTracking=summary enabled.
I wanted to observe how the JVM reports native memory, how it crashes, and what the pod and JVM metrics look like.
The Java code itself allocates very few objects, almost none.
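With NMT enabled, the report can be pulled from the running JVM with jcmd. Assuming you can get a shell in the container and the JVM runs as PID 1 (both assumptions about the setup, not details from the project):

jcmd 1 VM.native_memory summary

Note that NMT only tracks memory allocated by the JVM itself, so a leak in JNI code typically shows up as a growing gap between the container's RSS and the NMT total rather than in any NMT category.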
There are a few reasons why an OOM might happen in a JVM. For some of them the JVM will crash, with an option to write a heap dump to the file system first. None of us wants to get an OOM in prod and then have to reconfigure the deployment and hope for the worst to happen again, this time with some fallback plan in place.
In a JVM this can be configured with:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/heapDumpDirectory

The problem is that each heap dump is by default saved to a file with the same or a similar name: /heapDumpDirectory/java_pid<process_id>.hprof.
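One workaround for the name collision is a small entrypoint wrapper that stamps any dump with a unique suffix after the JVM exits. This is only a sketch; the dump directory and jar name are placeholders, and it assumes the container starts through a shell entrypoint:

#!/bin/sh
# Illustrative container entrypoint: run the app, then rename any heap dump it left behind
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/heapDumpDirectory -jar app.jar
for f in /heapDumpDirectory/*.hprof; do
  [ -e "$f" ] && mv "$f" "${f%.hprof}-$(date +%s).hprof"
done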
In this post I’ll describe the things to consider so the JVM can use its own ergonomic configuration without you drastically overriding it; overriding calls for more advanced tuning and more metrics.
Pod sizing for GC

The limits on the number of processors and the amount of memory impact how the JVM tunes its own performance characteristics. Most importantly, they determine which GC is selected and how many threads it starts to clean up memory, which in turn determines how frequent and how long GC pauses are.
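A quick way to see what the ergonomics picked inside a given pod is to print the final flag values; the grep below is just an illustrative selection of flags:

java -XX:+PrintFlagsFinal -version | grep -E 'UseSerialGC|UseParallelGC|UseG1GC|UseZGC|ParallelGCThreads|ConcGCThreads|MaxHeapSize'

For example, with fewer than two available processors or less than roughly 1792 MB of memory, recent JDKs fall back to the Serial collector, which behaves very differently under load than G1.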
This page lists the deployment strategies I use to run the JVM on Kubernetes.
Below you will find three sections describing the most common JVM deployment practices. They are listed from the most to the least expensive to run, but each strategy has other drawbacks too.
The described practices are meant to be “realistic”, cost-optimised ways to run Kubernetes deployments; I favour predictable utilisation over load. There are other strategies that argue for never setting CPU limits at all.
Our test pack is configured dynamically from environment variables. Each scenario can be configured independently, with different target VUs, duration, or even a different executor.
Let’s start with a file called main.js. It imports all our scenarios, each one a default export of its module, and re-exports them:
export { default as cacheCreateAll } from './runners/cacheCreateAll.js';
export { default as cacheCreateUpdateRemove } from './runners/cacheCreateUpdateRemove.js';
export { default as userSearch } from './runners/userSearch.js';

The main.js file is our entry point to the application.
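The scenarios themselves are wired up through k6's options object exported from main.js. A minimal sketch, assuming the constant-vus executor and illustrative environment variable names (the real pack reads its own set of variables):

export const options = {
  scenarios: {
    cacheCreateAll: {
      executor: 'constant-vus',                          // other executors need their own fields
      exec: 'cacheCreateAll',                            // must match the exported function name
      vus: Number(__ENV.CACHE_CREATE_ALL_VUS || 10),     // __ENV exposes environment variables in k6
      duration: __ENV.CACHE_CREATE_ALL_DURATION || '5m',
    },
  },
};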
My team is preparing our company to acquire a new customer who, at the initial stages, will be 5x bigger than our current biggest customer. To do that, we had to rewrite our performance tests from Gatling to k6, improve the reporting, metrics and scalability of our whole infrastructure, and tune a set of microservices.
To test our infrastructure we had to scale up our perf test runners too, so we developed a set of containerised performance tests and now run our performance test pack on dedicated Kubernetes nodes.
Our performance test project is complex: we have 40+ .js files and CSV feeder files, and we use custom extensions, so we need to bundle all of that into a single image.
We want everything version controlled, the performance tests deployed in a cloud-native way, and the image compatible with the official k6 image.
# Build the k6 binary with the extension
FROM golang:1.20 as builder
RUN go install go.k6.io/xk6/cmd/xk6@v0.8.1
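The rest of such a multi-stage build might look roughly like the sketch below; the extension module, k6 version and paths are illustrative assumptions, not the project's actual Dockerfile:

# Build a k6 binary that bundles the custom extension
RUN xk6 build v0.46.0 \
    --with github.com/example/xk6-my-extension \
    --output /tmp/k6

# Final stage: start from the official k6 image so the entrypoint and defaults stay compatible
FROM grafana/k6:latest
COPY --from=builder /tmp/k6 /usr/bin/k6
# Copy the bundled test scripts and CSV feeder files into the image
COPY dist/ /tests/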