Using fakes in tests instead of mocks has a great advantage – it enables behaviour verification, which in turn enables true TDD, where you start by capturing the desired behaviour in tests.
This is impossible if you rely on mocks that verify whether some implementation details were called or not. Mocks push you to design internal components first.
A common concern, however, is that fake implementations can gradually drift away from the real ones, leading your tests to validate a homegrown blob rather than the real thing.
Let me show you a great trick that can minimize this drift.
Fakes vs Mocks
First things first, let’s address a common misconception.
Many developers casually call any test double a mock, but that’s not accurate and can lead to misunderstandings.
What distinguishes a mock from a stub is method call expectations. We’re essentially verifying whether some method of a component was called:
var emailService = Mockito.mock(EmailService.class);
// ...
Mockito.verify(emailService).send("alice@example.com", "Welcome!");

The irony is, if you’re using Mockito.mock() only to make it return predefined values, you’re not really creating a mock – you’ve created a stub instead.
When I say “avoid mocks” in favour of fakes, it’s not a contradiction. The key is that fakes help you test behavior, whereas mocks encourage coupling your tests to implementation details.
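To make the distinction concrete, here’s a minimal hand-rolled sketch. The EmailGateway interface and both implementations are illustrative, not part of the example that follows: a stub merely returns canned behaviour, while a fake is a small working implementation whose observable state your tests can assert on.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical port for sending e-mails.
interface EmailGateway {
    void send(String to, String body);
}

// Stub: canned behaviour only – it exists to satisfy the interface.
class NoOpEmailGateway implements EmailGateway {
    public void send(String to, String body) { /* deliberately does nothing */ }
}

// Fake: a lightweight but working implementation. Tests verify behaviour
// by asserting on its observable state, not on which methods were called.
class InMemoryEmailGateway implements EmailGateway {
    final List<String> recipients = new ArrayList<>();

    public void send(String to, String body) {
        recipients.add(to);
    }
}

public class FakeVsStubDemo {
    public static void main(String[] args) {
        var gateway = new InMemoryEmailGateway();
        gateway.send("alice@example.com", "Welcome!");
        // Behaviour verification: assert on the outcome, not on interactions.
        System.out.println(gateway.recipients.contains("alice@example.com")); // prints "true"
    }
}
```

Note there is no `verify(...)` anywhere – the test observes what the system did, not how it did it.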
Now, let’s get to the main point.
Minimizing Fake Drift with JUnit 6
In order for a fake to be useful, it needs to mirror the behaviour of the thing it’s supposed to fake.
Obviously, this doesn’t mean you need to reimplement all of Postgres in your in-memory fake. If some functionality is too hard to mimic, the pragmatic move is to simply rely on integration tests instead.
As a rule of thumb, if it’s not immediately obvious how to implement a fake, you probably shouldn’t. Typical examples include geospatial queries, full-text search, transaction isolation, and locking semantics.
Luckily, most cases are much simpler than that.
Let’s start with a simple CRUD example:
public interface MovieRepository {
long save(Movie movie);
List<Movie> findAll();
List<Movie> findAllByType(String type);
Optional<Movie> findById(long id);
}

public record Movie(String title, String type) {
}

Now, the trick is to run both (real and fake) implementations through the same tests. This makes your fakes exactly as good as your tests.
Those tests define the behavioural contract of the component. Whatever passes them is, by definition, a valid implementation – whether it talks to Postgres or stores data in a list.
If the real implementation changes asserted behaviour, fakes are forced to catch up.
JUnit allows you to define test skeletons using abstract classes:
abstract class MovieRepositoryTest {
abstract MovieRepository getRepository();
private MovieRepository repository;
@BeforeEach
void setUp() {
repository = getRepository();
}
// compressed into a single case for convenience
@Test
void shouldSaveAndFetchMovie() {
var m1 = new Movie("Tenet", "NEW");
var m2 = new Movie("Casablanca", "OLD");
assertThat(repository.findAll()).isEmpty();
long id1 = repository.save(m1);
long id2 = repository.save(m2);
assertThat(repository.findAll())
.containsExactlyInAnyOrder(m1, m2);
assertThat(repository.findAllByType("NEW"))
.containsExactly(m1);
assertThat(repository.findById(id1)).hasValue(m1);
assertThat(repository.findById(id2)).hasValue(m2);
}
// ...
}

And here you go! Now you have a single test suite running against N different MovieRepository implementations, but… we have no actual implementations yet!
That’s the beauty of this approach – focusing on observable behaviour instead of implementation details enables true TDD: tests are written before internal components are even defined, and all that’s left is to fill in the blanks.
Your tests are the arbiter of truth now.
Let’s fill in the blanks. We’re going to store our movies in a Postgres table:
CREATE TABLE movies
(
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
type TEXT NOT NULL
);

We’re going to use JDBI to implement the Postgres integration:
public class PostgresMovieRepository implements MovieRepository {
private final Jdbi jdbi;
public PostgresMovieRepository(DataSource dataSource) {
this.jdbi = Jdbi.create(dataSource)
.installPlugin(new PostgresPlugin());
}
@Override
public long save(Movie movie) {
return jdbi.withHandle(handle ->
handle.createQuery("INSERT INTO movies (title, type) VALUES (:title, :type) RETURNING id")
.bind("title", movie.title())
.bind("type", movie.type())
.mapTo(Long.class)
.one()
);
}
@Override
public List<Movie> findAll() {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT title, type FROM movies")
.map(toMovie())
.list()
);
}
@Override
public List<Movie> findAllByType(String type) {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT title, type FROM movies WHERE type = :type")
.bind("type", type)
.map(toMovie())
.list()
);
}
@Override
public Optional<Movie> findById(long id) {
return jdbi.withHandle(handle ->
handle.createQuery("SELECT title, type FROM movies WHERE id = :id")
.bind("id", id)
.map(toMovie())
.findOne()
);
}
private static RowMapper<Movie> toMovie() {
return (rs, _) -> new Movie(rs.getString("title"), rs.getString("type"));
}
}

We’ll wire it up in tests by using Testcontainers:
@Testcontainers
class PostgresMovieRepositoryTest extends MovieRepositoryTest {
private static final Logger log = LoggerFactory
.getLogger(PostgresMovieRepositoryTest.class);
@Container
static final PostgreSQLContainer postgres = new PostgreSQLContainer("postgres:18")
.withNetworkAliases("postgres")
.withDatabaseName("postgres")
.withUsername("postgres")
.withPassword("password")
.withLogConsumer(new Slf4jLogConsumer(log).withPrefix("postgres"))
.waitingFor(Wait.forListeningPort());
@Override
MovieRepository getRepository() {
return new PostgresMovieRepository(getDatasource());
}
private DataSource getDatasource() {
PGSimpleDataSource ds = new PGSimpleDataSource();
ds.setUrl(postgres.getJdbcUrl());
ds.setPassword(postgres.getPassword());
ds.setUser(postgres.getUsername());
Flyway.configure()
.dataSource(ds)
.locations("classpath:db/migration")
.load()
.migrate();
return ds;
}
}

And now, let’s implement our fake:
public class InMemoryFakeMovieRepository implements MovieRepository {
private final Map<Long, Movie> movies = new ConcurrentHashMap<>();
@Override
public long save(Movie movie) {
long id = ThreadLocalRandom.current().nextLong();
movies.put(id, movie);
return id;
}
@Override
public List<Movie> findAll() {
return List.copyOf(movies.values());
}
@Override
public List<Movie> findAllByType(String type) {
return movies.values().stream()
.filter(movie -> movie.type().equals(type))
.toList();
}
@Override
public Optional<Movie> findById(long id) {
return Optional.ofNullable(movies.get(id));
}
}

And wire it up in tests as well:
class FakeMovieRepositoryTest extends MovieRepositoryTest {
@Override
MovieRepository getRepository() {
return new InMemoryFakeMovieRepository();
}
}

As you can see, our fake is trivial, and with AI assistance, it takes seconds to implement.
We live in a non-ideal world, so there may also be implementation-specific tests – these can simply be added to the extending classes.
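For instance, a Postgres-only expectation could live directly in PostgresMovieRepositoryTest, next to the inherited contract tests. The sketch below is illustrative – the test name and the increasing-id expectation are my assumptions, not part of the shared contract:

```java
// Hypothetical Postgres-specific test inside PostgresMovieRepositoryTest.
// SERIAL ids grow monotonically (with possible gaps), but the fake makes
// no such promise, so this expectation stays out of the shared contract.
@Test
void shouldAssignIncreasingIds() {
    MovieRepository repository = getRepository();
    long first = repository.save(new Movie("Tenet", "NEW"));
    long second = repository.save(new Movie("Casablanca", "OLD"));
    assertThat(second).isGreaterThan(first);
}
```

Because it lives only in the Postgres subclass, the fake is never forced to mimic behaviour that isn’t part of the contract.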
Handling the Drift
One day, someone realizes it’s probably a bad idea to allow blank titles and types to be persisted, so they write a migration:
ALTER TABLE movies
ADD CONSTRAINT movies_title_not_empty CHECK (title <> '');
ALTER TABLE movies
ADD CONSTRAINT movies_type_not_empty CHECK (type <> '');

And adjust the real implementation:
@Override
public long save(Movie movie) {
try {
return jdbi.withHandle(handle ->
handle.createQuery("INSERT INTO movies (title, type) VALUES (:title, :type) RETURNING id")
.bind("title", movie.title())
.bind("type", movie.type())
.mapTo(Long.class)
.one()
);
} catch (UnableToExecuteStatementException e) {
if (e.getCause() instanceof PSQLException psqle) {
switch (psqle.getSQLState()) {
case "23502": throw new IllegalArgumentException("Movie title cannot be null", e);
case "23514": throw new IllegalArgumentException("Movie title cannot be blank", e);
}
}
throw new RuntimeException(e);
}
}

And as long as this is captured in tests:
@Test
void shouldRejectMovieWithEmptyTitle() {
assertThatThrownBy(() -> repository.save(new Movie("", "NEW")))
.isInstanceOf(IllegalArgumentException.class);
}
@Test
void shouldRejectMovieWithNullTitle() {
assertThatThrownBy(() -> repository.save(new Movie(null, "NEW")))
.isInstanceOf(IllegalArgumentException.class);
}

The drift is immediately caught and can be immediately corrected:
@Override
public long save(Movie movie) {
if (movie.title() == null || movie.title().isBlank()) {
throw new IllegalArgumentException("Movie title cannot be blank or null");
}
long id = ThreadLocalRandom.current().nextLong();
movies.put(id, movie);
return id;
}

Naturally, the fake doesn’t have to behave exactly like the real implementation in every detail – things like ID generation, ordering, or performance characteristics may differ. If you were paying attention, you might have noticed that even the exception messages don’t align perfectly, and that’s fine.
What matters is that the fake satisfies the behavioral contract defined by your tests. In other words, a fake is as accurate as your tests require it to be.
Also, remember that the first validation should happen at the system boundary – database-level validation is the last line of defense.
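For example – a sketch, assuming you’re free to evolve the Movie record – a compact constructor can enforce the same rule at the boundary, so invalid data never even reaches a repository:

```java
// Sketch: reject blank/null titles and types at construction time. This
// intentionally duplicates the database CHECK constraints – those remain
// the last line of defense, while this check fails fast at the boundary.
public record Movie(String title, String type) {
    public Movie {
        if (title == null || title.isBlank()) {
            throw new IllegalArgumentException("Movie title cannot be blank or null");
        }
        if (type == null || type.isBlank()) {
            throw new IllegalArgumentException("Movie type cannot be blank or null");
        }
    }
}
```

With this in place, both the real repository and the fake can stay simple, because invalid movies can no longer be constructed at all.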