An example of an HPA that scales up and down based on CPU and memory consumption:
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: identity
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deploymentname
  minReplicas: 2
  maxReplicas: 10
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 300
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 300
      selectPolicy: Min
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 1000Mi
```
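To try it out, apply the manifest and watch the autoscaler's decisions; the file name here is an assumption:

```shell
kubectl apply -f hpa.yaml
kubectl describe hpa identity
```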
Run Archive Warrior in your Kubernetes cluster as a DaemonSet
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: warrior
  namespace: archive
  labels:
    app: warrior
spec:
  selector:
    matchLabels:
      app: warrior
  template:
    metadata:
      labels:
        app: warrior
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
      terminationGracePeriodSeconds: 60
      containers:
      - image: atdr.meo.ws/archiveteam/warrior-dockerfile:latest
        name: warrior
        resources:
          requests:
            cpu: "200m"
            memory: "128Mi"
          limits:
            cpu: "400m"
            memory: "256Mi"
        env:
        - name: DOWNLOADER
          value: your_name
        - name: SELECTED_PROJECT
          value: auto
        - name: CONCURRENT_ITEMS
          value: "4"
        ports:
        - containerPort: 8001
        imagePullPolicy: Always
```
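Assuming the manifest is saved as warrior-daemonset.yaml (my naming), deploying and checking it could look like:

```shell
kubectl apply -f warrior-daemonset.yaml
kubectl -n archive get pods -l app=warrior -o wide
```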
A lesser-known aspect of running a JVM in a container is which garbage collector it will use if you do not specify one. Let's look at a JVM running in a container and see which GC is selected by default, as I experiment with different Java versions and memory limits.
Java 8 - OpenJDK8-alpine

With a memory limit of 1791MB:

```shell
podman run --memory=1791m -ti openjdk:8-alpine java -XX:+PrintFlagsFinal -XX:+UseContainerSupport | grep 'Use.*GC'
```
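Apart from grepping the flag dump, you can also ask a running JVM which collectors it actually registered; a minimal sketch using the standard GarbageCollectorMXBean API (the class name is mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class PrintGC {
    public static void main(String[] args) {
        // Each registered bean corresponds to one collector the JVM selected,
        // e.g. "Copy"/"MarkSweepCompact" for Serial GC
        // or "PS Scavenge"/"PS MarkSweep" for Parallel GC
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```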
That one time I had to extract, transform, and load a massive CSV file into a bunch of database entities, and it was kinda slow…
The class had position-based CSV bindings, loaded into beans and streamed from a pretty big CSV file (10+ GB):
```java
public class CSVUserEntry {
    @CsvBindByPosition(position = 0)
    private String userId;

    @CsvBindByPosition(position = 1)
    private String username;

    @CsvBindByPosition(position = 2)
    private String deviceId;

    @CsvBindByPosition(position = 3)
    private String keyAlias;

    @CsvBindByPosition(position = 4)
    private String passcodeKeyAlias;

    @CsvBindByPosition(position = 5)
    private String confirmationId;
}
```

Then I opened the stream in a generic way, with the simple and fluent interfaces of the Java Stream API:
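The original stream-opening code isn't reproduced here, but a minimal sketch of what it can look like with OpenCSV's CsvToBeanBuilder (the loader class name and path handling are my assumptions):

```java
import com.opencsv.bean.CsvToBeanBuilder;

import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CSVUserEntryLoader {

    // Streams beans lazily, so the 10+ GB file is never fully loaded into memory
    public static Stream<CSVUserEntry> stream(Path csv) throws Exception {
        Reader reader = Files.newBufferedReader(csv);
        return new CsvToBeanBuilder<CSVUserEntry>(reader)
                .withType(CSVUserEntry.class)
                .build()
                .stream();
    }
}
```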
This post doesn't contain the full context of the work performed, only the benchmarking part.
I had to test how the number of hashing iterations impacts login request time in KeyCloak, and whether and how we can improve it. After investigating a few other options, I decided to check how password hashing times differ with the default hashing mechanism in KeyCloak. I found and extracted the relevant parts of the password hashing mechanism from KeyCloak into my repo and developed a small parametrised JMH benchmark, comparing:
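The exact list of compared variants is elided above, but a minimal sketch of such a parametrised JMH benchmark, assuming PBKDF2-style hashing similar to KeyCloak's default (iteration counts here are illustrative, not KeyCloak's exact defaults):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class PasswordHashingBenchmark {

    // Iteration counts are illustrative; KeyCloak's actual default depends on the version
    @Param({"1000", "10000", "27500", "100000"})
    int iterations;

    private final byte[] salt = new byte[16];

    @Benchmark
    public byte[] pbkdf2Sha256() throws Exception {
        // PBKDF2 with HMAC-SHA256 and a 512-bit derived key
        PBEKeySpec spec = new PBEKeySpec("password".toCharArray(), salt, iterations, 512);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec)
                .getEncoded();
    }
}
```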
I have this setup in a single project that handles generation of both backend server code and frontend client code. This requires running openapi-generator twice: once for the backend with the spring generator, and once for the frontend with the typescript-angular generator. I need the backend code to be generated into the build directory so it is not committed to version control. The TypeScript code needs to be reformatted and committed to git.
The TypeScript code also requires additional type mapping due to the non-standard structure of my specification.
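The plain CLI equivalent of the two generator runs could look roughly like this; the spec path, output directories, and the specific mapping are assumptions for illustration:

```shell
# Backend: Spring server stubs generated into the build directory, kept out of git
openapi-generator generate -i api-spec.yaml -g spring -o build/generated-sources/openapi

# Frontend: Angular client with an extra type mapping for the non-standard spec,
# then reformatted and committed to git
openapi-generator generate -i api-spec.yaml -g typescript-angular -o src/app/generated \
  --type-mappings=DateTime=string
```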
```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.domain.JavaModifier;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;

@Test
public void abstractClassesAreAbstract() {
    final JavaClasses importedClasses = new ClassFileImporter()
            .importPackages("net.agilob.project");

    LoggingRulesTest.ABSTRACT_CLASS_MUST_BE_ABSTRACT.check(importedClasses);
}

public static final ArchRule ABSTRACT_CLASS_MUST_BE_ABSTRACT = classes()
        .that()
        .haveSimpleNameContaining("Abstract").or().haveSimpleNameContaining("abstract")
        .should()
        .haveModifier(JavaModifier.ABSTRACT);
```
Using Postgres-specific SQL syntax, we can create an auto-generated column that subtracts two timestamps and stores the result as an interval.
The age function is also null-safe: if time_ended or time_started is NULL, it will not crash; the generated value is simply NULL.
```sql
ALTER TABLE session
  ADD COLUMN duration interval
  GENERATED ALWAYS AS (age(time_ended, time_started)) STORED;
```
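To see the column in action, assuming a minimal session table (the table layout and sample data are my assumptions; only the ALTER statement comes from the post):

```sql
CREATE TABLE session (
    id           bigserial PRIMARY KEY,
    time_started timestamp,
    time_ended   timestamp
);

-- The statement from above
ALTER TABLE session
  ADD COLUMN duration interval
  GENERATED ALWAYS AS (age(time_ended, time_started)) STORED;

INSERT INTO session (time_started, time_ended)
VALUES ('2024-01-01 10:00:00', '2024-01-01 10:45:30');

-- duration is computed automatically: 00:45:30
SELECT duration FROM session;
```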