When using Infinispan (or any other JBoss library, for that matter) and Logback in the same project, you may end up with this message when running the project:

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console

If you log to persistent storage, you may miss important error messages, because they are only logged to the console.

Solution 1: Start the application with a specific parameter

One solution is to add the following parameter when starting the application:

-Dorg.jboss.logging.provider=slf4j

The relevant code is in org.jboss.logging.LoggerProviders, which looks up the “org.jboss.logging.provider” system property.
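For example, when launching from the command line (the jar name here is a hypothetical placeholder):

```shell
java -Dorg.jboss.logging.provider=slf4j -jar my-app.jar
```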

Solution 2: Exclude log4j libraries from classpath

Another solution is to exclude the competing logging libraries from the classpath: org.jboss.logging.LoggerProviders probes the classpath for various logging classes, and it looks for the Logback-related classes at the very end!

When using Gradle, it is enough to add the following to build.gradle:

configurations {
    all*.exclude group: 'org.apache.logging.log4j'
    all*.exclude group: 'org.apache.log4j'
    all*.exclude group: 'org.jboss.logmanager'
}

Like a lot of Mac users, I back up my Mac using Time Machine. But for safety reasons I would also like to back up the whole Time Machine volume to another network drive (NAS).

One way to do so is to use Disk Utility to create an image file (.dmg) of the whole Time Machine volume. However, when trying to do so, one gets an “Operation cancelled” message and the process stops without any further information.

The solution is to add the Disk Utility application to the list of applications that can have Full Disk Access.

The steps to take a backup of the volume successfully are the following:

  1. Stop the Automatic Backups of the Time Machine.
  2. Go to System Preferences -> Security & Privacy, open the Privacy tab, and scroll down to “Full Disk Access”.
  3. Add “Disk Utility” to the applications that have Full Disk Access.
  4. Open Disk Utility and select the Time Machine volume on the left-hand side.
  5. In Disk Utility go to: File -> New Image -> Image from…
  6. Select where the image will be saved (it’s a good idea to encrypt it with a password).
  7. That’s it! For security reasons, make sure you remove Disk Utility from the Full Disk Access list afterwards.

This solution can also be used to move from raw format back to qcow2 format.

I used to have Docker on macOS Sierra, which created its image file using the qcow2 format. Then I upgraded to High Sierra, and due to a disk failure of the external SSD that held the qcow2 image file, Docker created a new raw-format image file (for more, see https://docs.docker.com/docker-for-mac/faqs/#qcow2-or-raw).

This shouldn’t be a problem in general, but in my case I use it for development and I still have uncommitted images. Also, in the Docker Preferences GUI I can only select a directory, not the actual file. The solution is pretty simple:

Navigate to:

~/Library/Group\ Containers/group.com.docker/settings.json

and change the diskPath entry to point to the qcow2 file. That should do it.
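A minimal sketch of the edit in Python, assuming settings.json is flat JSON with a top-level diskPath key (as it was in my case); the commented-out image path is a placeholder:

```python
import json
import os

# Docker for Mac's settings file (as described above)
settings_path = os.path.expanduser(
    "~/Library/Group Containers/group.com.docker/settings.json"
)

def point_disk_path_to(settings_file, image_file):
    """Rewrite the diskPath entry to point at the given image file."""
    with open(settings_file) as f:
        settings = json.load(f)
    settings["diskPath"] = image_file
    with open(settings_file, "w") as f:
        json.dump(settings, f, indent=2)

# Example (hypothetical image location):
# point_disk_path_to(settings_path, os.path.expanduser(
#     "~/Library/Containers/com.docker.docker/Data/Docker.qcow2"))
```

Restart Docker afterwards so it picks up the new path.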

I have recently started using MongoDB for a very demanding task: storing all Forex pair ticks (i.e. every change in the bid/ask prices). I know there are databases designed for this task (e.g. kdb+), but I wanted to avoid the learning curve. Besides, I already use Spring Data in my project, and it works with Mongo with a minimal number of changes.

In Mongo I have a collection with more than 3.5 billion records (and growing), and I want to find the latest date for each pair. I tried using Mongo’s aggregation framework, but it doesn’t seem to use the indexes and takes ages (it didn’t finish after one day).

Relational Structure

In a relational DB the table structure would look something like:

id   pair      dateTime                  bid       ask
1    EUR/USD   2015-04-03 21:32:31.456   1.14141   1.14142
...  ...       ...                       ...       ...

Then you would have to run the following query:

SELECT t.pair, MAX(t.dateTime)
FROM tick_data t
GROUP BY t.pair;
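The relational query can be tried end-to-end with Python’s built-in SQLite; the table follows the structure above, and the sample rows are made up:

```python
import sqlite3

# In-memory database with the tick_data structure from the table above
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tick_data (id INTEGER, pair TEXT, dateTime TEXT, bid REAL, ask REAL)"
)
conn.executemany(
    "INSERT INTO tick_data VALUES (?, ?, ?, ?, ?)",
    [
        (1, "EUR/USD", "2015-04-03 21:32:31.456", 1.14141, 1.14142),
        (2, "EUR/USD", "2015-04-03 21:32:45.123", 1.14143, 1.14144),
        (3, "GBP/USD", "2015-04-03 21:31:02.001", 1.48210, 1.48213),
    ],
)

# Latest tick per pair; ISO-style timestamp strings sort chronologically,
# so MAX on the text column gives the latest dateTime
rows = conn.execute(
    "SELECT t.pair, MAX(t.dateTime) FROM tick_data t GROUP BY t.pair"
).fetchall()
print(rows)
```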

MongoDB Aggregation Framework

In MongoDB the document structure is the same. I am a novice Mongo user, but I gather we could use the aggregation framework for this query:

db.tick_data.aggregate([
    { $group: { _id: "$pair", "maxValue": { $max: "$dateTime" } } }
]);

However, this takes ages, even though I have created a compound index on pair and dateTime.

Very Fast Result Using the Mongo Shell

I tried a sort of iterative approach in the mongo shell:

db.tick_data.distinct("pair").forEach(function (per_pair) {
  var lastTickPerPair = db.tick_data.find({ "pair": per_pair }).sort({ "dateTime": -1 }).limit(1);
  var lastTickOfPair = lastTickPerPair.hasNext() ? lastTickPerPair.next() : null;
  if (lastTickOfPair !== null) {
    print("pair: " + lastTickOfPair.pair + ", dateTime: " + lastTickOfPair.dateTime);
  }
});

This approach does use the compound index on pair and dateTime that I defined, and the results come back lightning fast (for 3.5 billion records).

Maybe there are other ways, but after some digging around I couldn’t find any other method that would use indexes.
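Why the shell loop is fast: the compound index on (pair, dateTime) lets each find().sort().limit(1) jump straight to the last entry for a pair, so the whole job costs one cheap lookup per distinct pair instead of a scan over billions of documents. A toy Python model of the two access patterns (in-memory data standing in for the collection; purely illustrative):

```python
import bisect

# Toy stand-in for the tick collection: (pair, dateTime) tuples
ticks = [
    ("EUR/USD", "2015-04-03 21:32:31.456"),
    ("GBP/USD", "2015-04-03 21:31:02.001"),
    ("EUR/USD", "2015-04-03 21:32:45.123"),
]

# What the un-indexed $group effectively has to do: touch every document
full_scan = {}
for pair, dt in ticks:
    if dt > full_scan.get(pair, ""):
        full_scan[pair] = dt

# Model of the compound index: all (pair, dateTime) keys kept sorted
index = sorted(ticks)

def last_tick(pair):
    # The analogue of find({pair: ...}).sort({dateTime: -1}).limit(1):
    # binary-search just past the last key for this pair, step back one.
    pos = bisect.bisect_right(index, (pair, chr(0x10FFFF)))
    key, dt = index[pos - 1]
    return dt if key == pair else None

# One cheap lookup per distinct pair instead of a full collection scan
indexed = {pair: last_tick(pair) for pair in {p for p, _ in ticks}}
assert indexed == full_scan
```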