We love Java because it is fully object-oriented and type-safe. We constantly teach clean, testable code, elegant object models, and avoiding dependencies. But as soon as we want to store even a little data on disk, the horror begins: we need an external database system that works completely differently from Java and comes with its own philosophy. Every database provides its own data structures, which are incompatible with Java objects. We must map our objects to these limited structures, which severely constrains our object models. Databases come with their own query languages and force us either to submit queries as strings that are not type-safe, or to use Java wrapper APIs that are mostly bloated, limited, and horrible to use. The vendors tell us to put business logic into the database as stored procedures and to let the database handle concurrency and validation. All of this leads to technical problems that increase the complexity, effort, and cost of development. On top of that, monolithic database servers do not fit the microservice philosophy. Heldion offers Java-native, high-performance persistence for storing real Java object graphs on disk and introduces a pure-Java in-memory data-processing approach for microservices. Java developers love it.
Unit testing is fine, but without proper integration testing – especially when you work with external resources such as databases and other services – you might not know how your application will actually behave once it has been deployed to the real production environment. Before Docker, configuring the environment for integration testing was painful: people used fake database implementations and mock servers, and the setup was usually not cross-platform either. Thanks to Docker, we can now prepare the environment for our tests quickly.
In this talk, I would like to show how you can use Testcontainers ( https://github.com/testcontainers/testcontainers-java ) – a popular JVM testing library that harnesses Docker to easily and reliably spin up test dependencies. As a special focus, we will take a deeper look at the development of the Testcontainers library and the features added to it over the last year. But that’s not all: we will also give an outlook on the future of Testcontainers and might even catch a glimpse of some brand-new features currently in active development.
Again and again, I am asked how to get started with security in an agile project environment. What are the essential first steps, and what should you focus on at the beginning? This naturally raises the question of suitable methodologies and tools. At the same time, the company’s strategic orientation must be incorporated into the security strategy. We have also learned in the recent past that attacks like the SolarWinds hack are becoming more and more sophisticated, and that attackers now target the entire value chain. What tools are there, and where should they be used? How can I start tomorrow to prepare myself against the challenges of future cyber attacks? That is exactly what you will get answers to here.
A stateful streaming data pipeline needs both a solid base and an engine to drive the data. Apache Kafka is an excellent choice for storing and transmitting high-throughput, low-latency messages. Apache Flink adds the cherry on top with a distributed stateful compute engine available in a variety of languages, including SQL.
In this session we’ll explore how Apache Flink operates in conjunction with Apache Kafka to build stateful streaming data pipelines, and the problems we can solve with this combination. We will explore Flink’s SQL client, showing how to define connections and transformations in the best-known and most beloved language in the data industry.
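To make the idea concrete, a Kafka-backed table and a stateful aggregation in Flink SQL might look like the sketch below (the topic name, broker address, and schema are illustrative assumptions, not part of the session material):

```sql
-- Hypothetical source table backed by a Kafka topic
-- (topic, broker address, and columns are assumed for illustration)
CREATE TABLE orders (
    order_id   STRING,
    amount     DOUBLE,
    order_time TIMESTAMP(3),
    WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'localhost:9092',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
);

-- A stateful transformation: per-minute revenue, computed by Flink
SELECT window_start, SUM(amount) AS revenue
FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTE))
GROUP BY window_start;
```

The aggregation is where Flink’s state comes in: the running sums per window live in Flink’s managed state until the watermark closes the window, while Kafka simply stores and delivers the messages.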
This session is aimed at data professionals who want to lower the barrier to streaming data pipelines by making them configurable as a set of simple SQL commands.
Let’s install YugabyteDB, a PostgreSQL-compatible database, on Oracle Cloud Kubernetes (OKE) – and explain the reasons for doing so: distributing data across many active nodes, sharded and replicated synchronously across a region’s data centers for high availability, or geo-distributed to reduce latency for worldwide users. Today, this is possible with NewSQL databases, which bring powerful SQL and ACID features to the same scalability as NoSQL datastores. And OCI has many advantages for this.
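As a rough sketch of the installation path, YugabyteDB can be deployed to an existing OKE cluster with its official Helm chart (the release name, namespace, and replica counts below are illustrative assumptions):

```shell
# Sketch: install YugabyteDB on an existing OKE cluster via Helm.
# Release name, namespace, and replica counts are assumed for illustration.
helm repo add yugabytedb https://charts.yugabyte.com
helm repo update
kubectl create namespace yb-demo
helm install yb-demo yugabytedb/yugabyte \
  --namespace yb-demo \
  --set replicas.master=3,replicas.tserver=3
```

Three masters and three tablet servers give the sharded, synchronously replicated layout described above; spreading them across the region’s availability domains is what provides the high availability.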
Oracle Database In-Memory is the industry-leading in-memory database technology, seamlessly accelerating analytics as well as improving mixed-workload enterprise OLTP applications on databases of any size. Join Oracle Product Management to learn how to get started with Database In-Memory, how to identify which objects to populate in memory, how to identify which indexes to drop, and how to integrate with other performance-enhancing features of Oracle Database. This session will also describe how Database In-Memory works to speed up your analytic workload, and will provide a step-by-step guide on how to configure Oracle Database to get the biggest performance benefits and avoid having to re-invent the wheel.
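For orientation, the basic configuration steps look roughly like this (the memory size and table name are illustrative assumptions; the session itself covers how to choose them):

```sql
-- Sketch: reserve space for the In-Memory column store.
-- INMEMORY_SIZE is a static parameter and requires an instance restart.
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE = SPFILE;

-- After the restart, mark chosen objects for population
-- (the table name is assumed for illustration).
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Check what has been populated into the column store.
SELECT segment_name, populate_status FROM v$im_segments;
```

Once a table is populated, the optimizer can answer analytic queries from the columnar in-memory copy, which is also why many analytic indexes become candidates for dropping.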