Saturday, January 18, 2020

lambdas and streams master class

If you want to master lambdas and streams from Java 8 & 9, you can check these java koans. They are among the best koans I could find for deeper learning. There are also two videos on these koans: part1 and part2.

Tuesday, January 7, 2020

send your data async on kafka

For a project, I'm logging the user's basic transactions, such as the addition and removal of an item (for multiple item types), and sending a message to kafka for each transaction. The logging mechanism is not mission-critical, and I don't want it to block my business code if the kafka server is down. In this case an async approach for sending data to kafka is the better way to go.

My kafka producer code is in its own boot project. To make it async, I just have to add two annotations: @EnableAsync and @Async.

@EnableAsync goes on your configuration class (remember that the class with @SpringBootApplication is also a config class) and makes Spring look for a TaskExecutor bean. If it can't find one, it creates a SimpleAsyncTaskExecutor. SimpleAsyncTaskExecutor is fine for toy projects, but for anything bigger it's a bit risky since it does not limit concurrent threads and does not reuse them. So to be safe, we will also add a task executor bean.

So,
@SpringBootApplication
public class KafkaUtilsApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaUtilsApplication.class, args);
    }
}
will become
@EnableAsync
@SpringBootApplication
public class KafkaUtilsApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaUtilsApplication.class, args);
    }

    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(2);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("KafkaMsgExecutor-");
        executor.initialize();
        return executor;
    }
}
As you can see, there's not much change here. The default values I set should be tweaked based on your app's needs.

The second thing we need is the addition of @Async.


My old code was:
@Service
public class KafkaProducerServiceImpl implements KafkaProducerService {

    private static final String TOPIC = "logs";

    @Autowired
    private KafkaTemplate<String, KafkaInfo> kafkaTemplate;

    @Override
    public void sendMessage(String id, KafkaType kafkaType, KafkaStatus kafkaStatus) {
        kafkaTemplate.send(TOPIC, new KafkaInfo(id, kafkaType, kafkaStatus));
    }
}
As you can see, the sync code is quite straightforward. It just uses the kafkaTemplate to send a message object to the "logs" topic. My new code is a bit longer than that.
@Service
public class KafkaProducerServiceImpl implements KafkaProducerService {

    private static final String TOPIC = "logs";

    @Autowired
    private KafkaTemplate<String, KafkaInfo> kafkaTemplate;

    @Async
    @Override
    public void sendMessage(String id, KafkaType kafkaType, KafkaStatus kafkaStatus) {
        ListenableFuture<SendResult<String, KafkaInfo>> future = kafkaTemplate.send(TOPIC, new KafkaInfo(id, kafkaType, kafkaStatus));
        future.addCallback(new ListenableFutureCallback<>() {
            @Override
            public void onSuccess(final SendResult<String, KafkaInfo> message) {
                // left empty intentionally
            }

            @Override
            public void onFailure(final Throwable throwable) {
                // left empty intentionally

            }
        });
    }
}
Here onSuccess() is not really meaningful for me, but in onFailure() I can log the exception so that I'm informed if there's a problem with my kafka server.
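For example, a minimal sketch of a failure callback that logs (the SLF4J logger field is my assumption, not part of the original code):

    // org.slf4j.Logger / org.slf4j.LoggerFactory; an assumption, pick your own logging setup
    private static final Logger logger = LoggerFactory.getLogger(KafkaProducerServiceImpl.class);

            @Override
            public void onFailure(final Throwable throwable) {
                // log it so we're informed when the kafka server has a problem;
                // business code is never blocked either way
                logger.error("Could not send KafkaInfo with id {} to topic {}", id, TOPIC, throwable);
            }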

There's another thing I have to share with you. For sending an object through kafkaTemplate, I have to equip it with a serializer. Here is the one I wrote:


public class KafkaInfoSerializer implements Serializer<KafkaInfo> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public byte[] serialize(String topic, KafkaInfo info) {
        byte[] retVal = null;
        ObjectMapper objectMapper = new ObjectMapper();
        try {
            retVal = objectMapper.writeValueAsString(info).getBytes();
        } catch (Exception e) {
            // log the exception
        }
        return retVal;
    }

    @Override
    public void close() {
    }
}
Also, don't forget to add the configuration for it. There are several ways of defining serializers for kafka. One of the easiest is adding them to application.properties:

spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=com.sezinkarli.kafkautils.serializer.KafkaInfoSerializer
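If you prefer Java config over properties, a sketch like the following should work as well (the bootstrap server address is my assumption; adjust it to your environment):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, KafkaInfo> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaInfoSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, KafkaInfo> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}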

Now you have a boot project that can send async objects to the desired topic.
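As a final illustration, here's a hypothetical caller (ItemService, KafkaType.ADDITION and KafkaStatus.SUCCESS are illustrative names, not from the actual project):

@Service
public class ItemService {

    @Autowired
    private KafkaProducerService kafkaProducerService;

    public void addItem(String itemId) {
        // ... business logic for adding the item ...

        // returns immediately; the actual send runs on a "KafkaMsgExecutor-" thread
        kafkaProducerService.sendMessage(itemId, KafkaType.ADDITION, KafkaStatus.SUCCESS);
    }
}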

Monday, December 2, 2019

spring annotations i never had the chance to use part 2: @ConfigurationProperties

A few days ago, I accidentally stumbled upon a Spring annotation from the Spring Boot project while I was checking something else.

We all know how to bind property values to classes with "@Value", and we all know that this can be quite cumbersome when there are multiple properties to bind. Spring Boot is here to help. You can use "@ConfigurationProperties" and bind multiple values quite concisely. We give a prefix to differentiate our configs from others, e.g. "@ConfigurationProperties(prefix = "jdbc")".
Every field of the annotated class is populated with values from the property resource. For instance, if the class has a username field, then the property with the "jdbc.username" key will populate it. The most practical way of using this annotation is together with "@Configuration".
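For contrast, the "@Value" way looks like this (a sketch; the class and field names are mine):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class JdbcConfigWithValue {

    @Value("${jdbc.user}")
    private String user;

    @Value("${jdbc.password}")
    private String password;

    // ... one annotation per field, which gets cumbersome quickly
}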


Below you can check how we create the config class.
package com.sezinkarli.tryconfigprops;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PostConstruct;
import java.util.HashMap;
import java.util.Map;

@Configuration
@ConfigurationProperties(prefix = "jdbc")
public class JdbcConfig
{
    private String user;
    private String password;
    private String url;
    private String driver;

    public String getUser()
    {
        return user;
    }

    public void setUser(String user)
    {
        this.user = user;
    }

    public String getPassword()
    {
        return password;
    }

    public void setPassword(String password)
    {
        this.password = password;
    }

    public String getUrl()
    {
        return url;
    }

    public void setUrl(String url)
    {
        this.url = url;
    }

    public String getDriver()
    {
        return driver;
    }

    public void setDriver(String driver)
    {
        this.driver = driver;
    }
}

And below you can check the properties we map from the application properties:
jdbc.user=myJdbcUser
jdbc.password=myPwd
jdbc.url=myUrl
jdbc.driver=myJdbcDriver
After that, you can easily get these values by injecting the configuration class somewhere:
@Service
public class YourService
{

    @Autowired
    private JdbcConfig jdbcConfig;
}
You can also check here for a working toy project using "@ConfigurationProperties".

Wednesday, October 23, 2019

benchmark for new string methods of java 11

While I was checking what's new in Java 11, I saw that there are several new methods in the String class. So I wanted to do a microbenchmark comparing the old way of doing things with the new methods. These new methods are:

boolean isBlank()

String strip()

Stream<String> lines()


isBlank() is tested against trim().isEmpty(), strip() is tested against trim(), and lines() is tested against split().
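A minimal JMH sketch of how such a comparison can be set up (the input string and method names are my assumptions; the actual benchmark code is linked at the end of the post):

import java.util.Arrays;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class StringMethodsBenchmark {

    // hypothetical input; the original benchmark's data may differ
    private final String text = "  first line \n second line \n third line  ";

    @Benchmark
    public boolean isBlankNew() { return text.isBlank(); }

    @Benchmark
    public boolean trimIsEmptyOld() { return text.trim().isEmpty(); }

    @Benchmark
    public String stripNew() { return text.strip(); }

    @Benchmark
    public String trimOld() { return text.trim(); }

    @Benchmark
    public long linesNew() { return text.lines().count(); }

    @Benchmark
    public long splitOld() { return Arrays.stream(text.split("\n")).count(); }
}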

Here are the results:

Benchmark        Score (ops/s)
lines                3,252,919
split                2,486,539
strip               18,280,130
trim                18,222,362
isBlank             25,126,454
trim + isEmpty      19,854,156

Scores are operations per second, so higher is better.
As you can see, lines() is much faster than split().
strip() and trim() performed quite similarly.
isBlank() outperformed trim() + isEmpty().

You can check the benchmark code here.

Tuesday, February 12, 2019

cool elasticsearch client

I was searching for an elastic client so I could have auto-complete for queries and a graphical way of seeing how things are going on our test elastic servers. A few years ago we had the Sense plug-in, which I think is now part of Kibana. After a little googling, I found the head plug-in for Chrome, and it is so cool. I highly recommend it to everyone with this kind of need.