Training your ontolections in IBM Watson Explorer

Ontolection Trainer is a handy utility that anyone using ontolections to improve queries in Watson Explorer should know about. It helps you analyze a body of text and create thesaurus files, which can then be used to create ontolections. You can also extract key phrases or acronyms to use with the query-modifier and in an ontolection.

If you don't know the NLQ capabilities of Watson Explorer (WEX), or don't know what an ontolection is, I recommend reading my two previous posts:

Back to the Ontolection Trainer: in the NLQ folder of your WEX installation (/opt/IBM/dataexplorer/WEX-11_0_2/Engine/nlq in my case; available since release 11.0.1), you will find the jar file ontolectiontrainer.jar. Obviously you will need Java to run it; make sure the Java shipped with WEX is configured in your path.

The utility has several arguments, but the basic ones are:

  • the type of extraction
  • the corpus you will use: the corpus is your text file. In my case, I have a file with 1000 resumes that I'll use to train WEX (RESUME_TEXT_1000.TXT).
  • the PEAR file: the PEAR file contains the dictionary that the trainer will use to extract terms.
  • the output path: where the trainer will create the output file.

I have also used a file called blacklist containing the words that I want to be ignored.

You may run into CPU and memory utilization problems; for those cases, there are parameters to set the number of iterations the trainer will run.

To be very objective, here are my commands:

  • To extract the ontolection:

java -jar ontolectiontrainer.jar --trainOntolection --corpus RESUME_TEXT_1000.TXT --pear /opt/IBM/dataexplorer/WEX-11_0_2/Engine/data/pears/en.pear --blacklist blacklist --outputPath generatedOntolection_1000

  • To extract Acronyms:

java -jar ontolectiontrainer.jar --extractAcronyms --corpus RESUME_TEXT_1000.TXT --pear /opt/IBM/dataexplorer/WEX-11_0_2/Engine/data/pears/en.pear --blacklist blacklist --outputPath generatedOntolectionAcronyms_1000

  • To extract Phrases:

java -jar ontolectiontrainer.jar --learnPhrases --corpus RESUME_TEXT_1000.TXT --pear /opt/IBM/dataexplorer/WEX-11_0_2/Engine/data/pears/en.pear --blacklist blacklist --outputPath generatedOntolectionPhrases_1000

For more references:



Improving your queries in Watson Explorer using Ontolections

A good way to enrich your queries in Watson Explorer is to use ontolections. An ontolection provides a set of related terms that are specific to the domain of an application or enterprise, and identifies the relationships between them. Basically, the WEX engine queries the ontolection with the query terms, adds the terms it finds to the final query, and then queries your original collection.
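To make that flow concrete, here is a toy sketch in plain Java of what the expansion step does. The synonym map is a made-up in-memory stand-in for the ontolection collection; this is not the actual WEX engine code.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExpansionSketch {

    // OR each synonym found in the ontolection into the final query.
    static String expand(String query, Map<String, List<String>> ontolection) {
        StringBuilder expanded = new StringBuilder(query);
        for (String syn : ontolection.getOrDefault(query.toLowerCase(), Collections.emptyList())) {
            expanded.append(" OR \"").append(syn).append("\"");
        }
        return expanded.toString();
    }

    public static void main(String[] args) {
        // Hypothetical synonym map standing in for the ontolection collection.
        Map<String, List<String>> ontolection = new HashMap<>();
        ontolection.put("alm", Arrays.asList("Application Lifecycle Management"));
        System.out.println(expand("ALM", ontolection)); // ALM OR "Application Lifecycle Management"
    }
}
```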

For example, suppose you have a synonym configured as ALM → Application Lifecycle Management. If a user searches for ALM, the WEX engine will also search for Application Lifecycle Management.

An ontolection can hold more than synonyms: we can also have related terms, rewrites, spellings, etc. I recommend starting with synonyms and then improving your ontolection.

The first step in playing with ontolections is to create a thesaurus file, which will be used to create the ontolection. You can generate a thesaurus in several ways. The most common is to create your own XML file manually, but you can also use a tool called Ontolection Trainer (I'll show how to use it in a later post).

For my example, I have created the following thesaurus file, called practitioner2.xml:

<?xml version="1.0" encoding="utf-8" ?>
<thesaurus name="practitioner1" language="english" domain="general">
  <word name=".NET">
    <synonym>.Net development</synonym>
    <synonym>.Net開発</synonym>
    <synonym>.NET開発</synonym>
  </word>
  <word name="Virtual Private Network">
    <synonym>バーチャル プライベート ネットワーク</synonym>
  </word>
  <word name="DNS">
    <synonym>Domain Name Service</synonym>
    <synonym>ドメインネーム・サービス</synonym>
    <synonym>ドメインネームサービス</synonym>
  </word>
</thesaurus>

Using this as an example, if a user searches for DNS, WEX will also search for Domain Name Service.

After creating your thesaurus file, you need to create a new collection on your WEX server. Select generic-ontolection under "Copy defaults from":

Then add a new seed pointing to the thesaurus file; in my case, I selected FILES and added /opt/IBM/dataexplorer/WEX-11_0_1/Engine/nlq/practitioner2.xml

Go to collection overview → Configuration → Converting, click edit, and set the values as follows:

Finally, go to Overview and click Start under Live Status (you can also test before starting). You will see Crawl and Index running, and documents being added.

That's it, your ontolection is ready to use. You can test it in your application and in the WEX query utility. Here is a simple REST call using my ontolection; note that I'm searching for DNS and WEX will automatically also search for Domain Name Service | ドメインネーム・サービス | ドメインネームサービス.

http://MY_SERVER:9080/vivisimo/cgi-bin/velocity?sources=MY_COLLECTION
  &output-contents=FIELD1 FIELD2
  &output-bold-contents=FIELD1 FIELD2
  &query=dns
  &query-condition-xpath=$FIELD3='XXXXX'
  &query-object=
  &num-per-source=20
  &start=0
  &num=20
  &query-modification-macros=query-modification-expansion
  &extra-xml=
    <declare name="query-expansion.enabled" /><set-var name="query-expansion.enabled">true</set-var>
    <declare name="query-expansion.user-profile" /><set-var name="query-expansion.user-profile">on</set-var>
    <declare name="query-expansion.ontolections" /><set-var name="query-expansion.ontolections">onto_practitioner</set-var>
    <declare name="query-expansion.max-terms-per-type" /><set-var name="query-expansion.max-terms-per-type">3</set-var>
    <declare name="query-expansion.automatic" /><set-var name="query-expansion.automatic">synonym:0.8,alternative:0.8,spelling:0.8,narrower:0.5,translation:0.5,broader:0.5,related:0.5</set-var>
    <declare name="query-expansion.suggestion" /><set-var name="query-expansion.suggestion"></set-var>
    <declare name="query-expansion.query-match-type" /><set-var name="query-expansion.query-match-type">terms</set-var>
    <declare name="query-expansion.conceptual-search-similarity-threshold" /><set-var name="query-expansion.conceptual-search-similarity-threshold">0.1</set-var>
    <declare name="query-expansion.conceptual-search-metric" /><set-var name="query-expansion.conceptual-search-metric">euclidean-dot-product</set-var>
    <declare name="query-expansion.conceptual-search-candidates-max" /><set-var name="query-expansion.conceptual-search-candidates-max">euclidean-dot-product</set-var>
    <declare name="query-expansion.conceptual-search-sources" /><set-var name="query-expansion.conceptual-search-sources">MY_COLLECTION</set-var>
    <declare name="query-expansion.stem-expansions" /><set-var name="query-expansion.stem-expansions">false</set-var>
    <declare name="query-expansion.stemming-dictionary" /><set-var name="query-expansion.stemming-dictionary">english/wildcard.dict</set-var>
    <declare name="reporting.track-spelling" /><set-var name="reporting.track-spelling">false</set-var>
    <declare name="meta.stem-expand-stemmer" /><set-var name="meta.stem-expand-stemmer">delanguage+english+depluralize</set-var>
    <declare name="query-expansion.stemming-weight" /><set-var name="query-expansion.stemming-weight">0.8</set-var>

Note the parameter that turns the ontolection on:


And in &extra-xml I have some specific settings.

Pay special attention to where I use onto_practitioner; use your own ontolection name there.

Also note that if you have more than one server, or shards, the settings can change.

Calling this REST API and analyzing the results, you will see output like:

<op-exp logic="or" middle-string="OR" name="OR" precedence="2">
  <term field="query" input-type="user" processing="strict" str="dns"/>
  <term field="query" relation="synonym" str="Domain Name Service"/>
  <term field="query" relation="synonym" str="ドメインネーム・サービス"/>
  <term field="query" relation="synonym" str="ドメインネームサービス"/>
</op-exp>
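If you need to inspect this expansion programmatically, the synonym terms can be pulled out of such a response fragment with the JDK DOM parser. This is a small sketch using a trimmed copy of the snippet above:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class OpExpParser {

    // Collect the str attribute of every <term> whose relation is "synonym".
    static List<String> synonyms(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            NodeList terms = doc.getElementsByTagName("term");
            List<String> out = new ArrayList<>();
            for (int i = 0; i < terms.getLength(); i++) {
                Element term = (Element) terms.item(i);
                if ("synonym".equals(term.getAttribute("relation"))) {
                    out.add(term.getAttribute("str"));
                }
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<op-exp logic=\"or\" middle-string=\"OR\" name=\"OR\" precedence=\"2\">"
                + "<term field=\"query\" input-type=\"user\" processing=\"strict\" str=\"dns\"/>"
                + "<term field=\"query\" relation=\"synonym\" str=\"Domain Name Service\"/>"
                + "</op-exp>";
        // The user term "dns" carries no relation attribute, so it is skipped.
        System.out.println(synonyms(xml)); // [Domain Name Service]
    }
}
```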

If you want to test in the WEX query utility, edit the query-meta project and set the following flags:

Enable query stopword removal → true

Query expansion match type → Terms

Enable semantic expansion → true

And set the configuration like the following:

That's it. Enjoy!

For more information about ontolections:

Implementing Natural Language Query with IBM Watson Explorer

If you have a Watson Explorer (WEX) collection and want to handle Natural Language Query, you should know that since WEX release 11.0.1 there is a native component for this: the query-modifier service.

Basically, this service parses queries and applies some strategies, transforming each query into keywords that WEX can understand and use. Suppose the user's search is:

“I’m looking for a Java Developer that know Struts and Spring and work from Brazil.”

The service will extract the keywords, based on your configuration, and will search for:

Java Developer + Struts + Spring + Brazil
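To illustrate the idea, here is a toy noise-word removal pass in Java. The stopword list is made up for the example, and this is not the actual query-modifier code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class KeywordSketch {

    // A tiny stopword list for the example; the real service uses configurable dictionaries.
    private static final Set<String> NOISE = new HashSet<>(Arrays.asList(
            "i'm", "i", "looking", "for", "a", "that", "know", "and", "work", "from"));

    // Lowercase, strip punctuation, split on whitespace, and drop noise words.
    static List<String> extract(String query) {
        return Arrays.stream(query.toLowerCase().replaceAll("[^a-z' ]", " ").split("\\s+"))
                .filter(w -> !w.isEmpty() && !NOISE.contains(w))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(extract(
                "I'm looking for a Java Developer that know Struts and Spring and work from Brazil."));
        // [java, developer, struts, spring, brazil]
    }
}
```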

We need to keep in mind that NLQ is different from cognitive. This service will not understand questions; it will just extract terms. If you are looking for cognitive, you are looking for Watson (with Watson we can understand the text and apply filters using location, range, etc.; this can also be done using machine learning models created in Watson Knowledge Studio). But I'll talk about that soon.

Back to the query-modifier: if you look in the nlq folder inside the Engine folder of your WEX installation, you will find the configuration files. The query-modifier works this way:

You make a request to WEX indicating that you will use QM; the request passes through QM, which applies the strategies, then forwards the request to the WEX Engine, which responds to you.

Here is a simple REST call that is using query-modifier:


Note that the following is what makes WEX use the query-modifier:


To configure it, go to <your WEX install folder>/Engine/nlq, in my case /opt/IBM/dataexplorer/WEX-11_0_1/Engine/nlq

Make the setup script executable with "chmod +x"

Then run it with "./" (as root)

You will see this kind of output:

Copying /opt/IBM/dataexplorer/WEX-11_0_1/Engine/examples/nlq/querymodifier/querymodifier-production.yml.defaults to /opt/IBM/dataexplorer/WEX-11_0_1/Engine/nlq/querymodifier-production.yml…

Configuring port to 9080…

Configuring path to vivisimo/cgi-bin/velocity…

Configuring PEARs path to /opt/IBM/dataexplorer/WEX-11_0_1/Engine/data/pears…

Copying querymodifier-2.1.9.jar to /opt/IBM/dataexplorer/WEX-11_0_1/Engine/nlq/querymodifier.jar…

Giving executable permissions to /opt/IBM/dataexplorer/WEX-11_0_1/Engine/nlq/querymodifier.jar…

Removing any existing /etc/init.d/querymodifier…

Linking /etc/init.d/querymodifier to …


It is important to change the owner of the created files to the WEX instance owner, in my case dataexp, so, as root: chown -R dataexp: <your WEX install folder>/Engine/nlq/

The configuration file is called querymodifier-production.yml

In the first part of the file, you will see the WEX server settings, such as IP, port, and user.

After that you can set up the strategies; in my case I have this setup:

#The strategies to apply, by default, to each query. Can also be customized on a per-request basis ("workplan" GET parameter):


default: PhraseWhitelistStrategy POSBasedNoiseWordRemoverStrategy DictionaryBasedNoiseWordRemoverStrategy DisjunctifyStrategy

The first strategy I'll describe is Disjunctify. It converts AND operators into OR operators when the query has more terms than a threshold. For example, with minimumRequiredTerms = 4, a search with four or fewer terms runs as (A AND B AND C AND D), while a search with more than four terms runs as (A OR B OR C OR D OR E OR ...).
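That behavior can be sketched like this (a toy reconstruction of the logic described above, not the shipped strategy code; I am assuming the threshold is inclusive):

```java
import java.util.Arrays;
import java.util.List;

public class DisjunctifySketch {

    // Join terms with AND while at or below the threshold, otherwise with OR.
    static String build(List<String> terms, int minimumRequiredTerms) {
        String op = terms.size() <= minimumRequiredTerms ? " AND " : " OR ";
        return "(" + String.join(op, terms) + ")";
    }

    public static void main(String[] args) {
        System.out.println(build(Arrays.asList("A", "B", "C", "D"), 4));      // (A AND B AND C AND D)
        System.out.println(build(Arrays.asList("A", "B", "C", "D", "E"), 4)); // (A OR B OR C OR D OR E)
    }
}
```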

The Dictionary-Based Noise Word Remover strategy simply removes words from the query. For example, if you add BANANA to the list and a user searches for BANANA, the word will be ignored. We usually put the common STOPWORDS in this section; you can find several lists, and I recommend the Google one. Another good list is here.

The Phrase Whitelist strategy is interesting: you can keep external config files with key phrases. For example, suppose you want "Project Manager" to be searched as the phrase "Project Manager", and not as "Project" and "Manager"; then you need to add this phrase to the config file.

We have a secret here: you need to separate the words with a <TAB> instead of a space, otherwise it will not work.
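Here is a small sketch of reading such a whitelist, assuming one phrase per line with the words separated by TABs (the exact file format is my assumption based on the tip above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PhraseWhitelistSketch {

    // Each non-empty line holds the words of one protected phrase, separated by TABs.
    static List<String> parse(List<String> lines) {
        List<String> phrases = new ArrayList<>();
        for (String line : lines) {
            if (!line.isEmpty()) {
                phrases.add(String.join(" ", line.split("\t")));
            }
        }
        return phrases;
    }

    public static void main(String[] args) {
        System.out.println(parse(Arrays.asList("Project\tManager", "Watson\tExplorer")));
        // [Project Manager, Watson Explorer]
    }
}
```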

After configuring your strategies, you just need to start the service (usually /etc/init.d/querymodifier start) and make REST calls to test. You can follow the log at /var/log/querymodifier.log.

Every time you change these settings, you need to recycle the query-modifier.

Your best friend for development and testing is the API Runner interface of the WEX engine. You can access it at:


See the parameters there and ENJOY!

For more references:

A small example of Threads in Java

Sharing a small solution I use in POCs (proofs of concept) when I need to demonstrate something using threads. Here is a small snippet that may be useful to somebody, and certainly to myself (whoever writes and shares never forgets... or almost).

I created a class to be my thread manager:


import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ThreadExecutorMaganer {

    private ExecutorService executor;
    private long timeout;

    private List<Callable<String>> callables;

    public ThreadExecutorMaganer(int maxThreads, long timeoutInSeconds) {
        this.executor = Executors.newFixedThreadPool(maxThreads);
        this.callables = new ArrayList<Callable<String>>();
        this.timeout = timeoutInSeconds;
    }

    public void add(Callable<String> callable) {
        callables.add(callable);
    }

    public List<Future<String>> start() {
        List<Future<String>> futures = null;
        try {
            // invokeAll blocks until every task completes or the timeout expires;
            // tasks still running at the timeout come back cancelled.
            futures = executor.invokeAll(callables, timeout, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Shut the pool down so the JVM can exit.
        executor.shutdown();
        return futures;
    }
}

This is the thread itself (note there is an IF in there with a sleep, just to provoke an error and test); it is a Callable:


import java.util.concurrent.Callable;

public class CallableTask implements Callable<String> {

    private final String tarefa;

    public CallableTask(String tarefa) {
        this.tarefa = tarefa;
    }

    public String call() throws Exception {
        System.out.println("Inside call-->" + tarefa);
        if (tarefa.equals("C")) {
            // Sleep on purpose to outlive the executor timeout and provoke an error.
            System.out.println("Sleeping 6 seconds");
            Thread.sleep(6000);
        }
        return tarefa;
    }
}

And this is my main class, which kicks off the whole circus:


import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;


public class CallThreadExecutorMaganer {

    public static void main(String[] args) {
        String[] restfulUrls = "A,B,C,D,E".split(",");
        ThreadExecutorMaganer tem = new ThreadExecutorMaganer(100, 5);

        for (String url : restfulUrls) {
            tem.add(new CallableTask(url));
        }

        List<Future<String>> futureResponses = tem.start();

        for (Future<String> futureResponse : futureResponses) {
            try {
                String resp = futureResponse.get();
                System.out.println("Response=" + resp);
            } catch (ExecutionException ex) {
                System.out.println("ExecutionException while getting WEX response=" + ex.getCause().getMessage());
            } catch (Exception e) {
                // A task cancelled by the timeout ends up here (CancellationException).
                System.out.println("Fail to query WEX server:" + e.getMessage());
            }
        }
    }
}

Before the patrol complains: this is a SIMPLE example and should not be used professionally without analysis and adaptation to your case, such as proper typing, error handling, etc.


Categories: JAVA

Reading XML with Java – Quick and simple example

I always need some code to read XML with Java. This post is mostly a placeholder for myself, but maybe it can be useful to other people.

Here is my XML example:

<operator logic="and">
  <operator logic="or">
    <term field="query" input-type="user" processing="strict" str="は" />
    <term field="query" input-type="user" phrase="phrase" processing="strict" str="銀行業務" weight="1" />
    <term field="query" input-type="user" processing="strict" str="持つ" />
    <term field="query" input-type="user" phrase="phrase" processing="strict" str="java開発者" weight="1.69" />
    <term field="query" input-type="user" processing="strict" str="探して" />
  </operator>
</operator>

Here is my Java code:

import java.io.StringReader;

import javax.xml.parsers.*;

import org.w3c.dom.*;
import org.xml.sax.InputSource;

public class ParseXML {

	public static void main(String[] args) {
		String xml = "<operator logic=\"or\"><term field=\"query\" input-type=\"user\" processing=\"strict\" str=\"は\" /><term field=\"query\" input-type=\"user\" phrase=\"phrase\" processing=\"strict\" str=\"銀行業務\" weight=\"1\" /><term field=\"query\" input-type=\"user\" processing=\"strict\" str=\"持つ\" /><term field=\"query\" input-type=\"user\" phrase=\"phrase\" processing=\"strict\" str=\"java開発者\" weight=\"1.69\" /><term field=\"query\" input-type=\"user\" processing=\"strict\" str=\"探して\" /></operator>";
		try {
			Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
			System.out.println("Root element :" + doc.getDocumentElement().getNodeName());
			NodeList nList = doc.getElementsByTagName("term");
			for (int temp = 0; temp < nList.getLength(); temp++) {
				Node nNode = nList.item(temp);
				System.out.println("\nCurrent Element :" + nNode.getNodeName());
				if (nNode.getNodeType() == Node.ELEMENT_NODE) {
					Element eElement = (Element) nNode;
					System.out.println("processing : " + eElement.getAttribute("processing"));
					System.out.println("str : " + eElement.getAttribute("str"));
				}
			}
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}
Categories: JAVA

Monitoring the top 10 CPU-consuming Linux processes

I always need to check which processes are consuming CPU on my machine, and with ps it's easy. With the following command you can write a script and then send an email, take action, etc.

ps aux --sort=-pcpu | head -n 10

If you want to sort by memory:

ps aux --sort=-rss | head -n 10

You can play with top as well, but I prefer ps in this case.

top -b -c -n 1 | head -n 17 | tail -n 10


Categories: AIX, Linux

Getting more out of the TOP command on Linux/Unix/Solaris

I use the TOP command a lot (among others) to gauge the "health" of our servers. Two options that I really like are "1" and "I" (uppercase).

Pressing 1, top shows all the cores of your processor, which helps you see utilization as a whole.


Pressing I (the uppercase letter i) disables "Irix mode"; pressing it again re-enables it. Basically, with Irix mode disabled, CPU utilization is shown against the machine's real total capacity, in %. As an example, in Irix mode, which is the default, you may notice some processes consuming more than 100%. That happens because this mode treats each of your cores as 100%. Disabling it, top divides the process utilization by the total number of CPUs you have, leading to a more realistic number that will not exceed 100%. The images below show top first with the default option (Irix mode) and then with Irix mode disabled; note that the marked processes appear to have had their CPU utilization "reduced", but that is not the case: top is simply showing the utilization relative to the machine as a whole.
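The conversion between the two modes is just a division by the number of cores; as a quick sketch:

```java
public class IrixModeSketch {

    // With Irix mode off, per-process CPU is divided by the number of cores.
    static double solarisMode(double irixPercent, int cores) {
        return irixPercent / cores;
    }

    public static void main(String[] args) {
        // A process pegging two full cores on a 4-core box: 200% in Irix mode, 50% with it disabled.
        System.out.println(solarisMode(200.0, 4)); // 50.0
    }
}
```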