Installing Scotas OLS on 12c/Docker

Scotas OLS is Apache Solr/Lucene + Oracle.

Docker is the new way of handling DevOps operations, especially during the development process.

It is possible to run Scotas OLS with Oracle 12c on Docker by following the post On docker, Ubuntu and Oracle RDBMS.

Assuming that there is an Oracle 12c Docker image ready for use:

# docker images
REPOSITORY TAG
oracle-12102 latest
solr/ols latest
alvinhenrick/hadoop-nn-dn latest
alvinhenrick/hadoop-dn latest
alvinhenrick/hadoop-base latest
alvinhenrick/serf latest
java openjdk-6-jdk
ubuntu latest
ubuntu trusty
oraclelinux 6

Once we have the image ready, the first step is to prepare a shared directory on the host machine to be exposed inside a test container; here are the steps:

# mkdir -p /var/lib/docker/db/test
# chown 54321:54321 /var/lib/docker/db/test
# cp /var/lib/docker/dockerfiles/build-oracle-12102/db_install-simple.dbt /var/lib/docker/db/test/db_install.dbt

db_install.dbt must have at least these two features enabled: JSERVER and XDB_PROTOCOLS. The first is required because Scotas OLS is entirely written in Java (Apache Solr); the second is needed because Scotas OLS storage can be exported through the XML DB repository (WebDAV/FTP).
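As a quick sanity check you can verify both options directly in the copied template; the output below is only what a typical DBCA template looks like and the exact XML layout may differ in your file:

# grep -E 'JSERVER|XDB_PROTOCOLS' /var/lib/docker/db/test/db_install.dbt
<option name="JSERVER" value="true"/>
<option name="XDB_PROTOCOLS" value="true"/>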

The first docker run of the image will create an empty database named test, for example:

# docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/test:/u01/app/oracle/data --name test --hostname test --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

It will take a while depending on your hardware; in the meantime the creation log can be followed with:

# docker logs -f test

Once the Oracle RDBMS is created, a good idea is to stop the container and start it again to check that everything is fine:

# docker stop test
# docker rm test

Note that the docker rm operation will not remove the database you created; it only discards the changes made to the container during the run. You can check that by inspecting the shared directory:

# cd /var/lib/docker/db/test
# ls
DATABASE_IS_SETUP db_install.dbt test

Now, to install Scotas OLS, we have to copy the installation zip file, the license and the patched ODCI.jar, as described in the post Upgrading system libraries…, for example:

# cp /tmp/lucene-ols-bin-2.0.1.zip /var/lib/docker/db/test
# cp /tmp/ODCI.jar /var/lib/docker/db/test
# cp /tmp/valid.ols /var/lib/docker/db/test

These files, copied on the host machine, will be available inside the Docker container during the next run:

# docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/test:/u01/app/oracle/data --name test --hostname test --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
# docker exec -ti test bash
[root@test oracle]# id
uid=0(root) gid=0(root) groups=0(root)

The last step attaches interactively to the running container using a bash shell; with the oracle-12102 template image the shell is connected as root.

The first step is to log in as the oracle owner and check the SQL*Net connectivity and the SYS password; the oracle-12102 template leaves the SYS password defined as oracle:

[root@test oracle]# su -l oracle
[oracle@test ~]$ tnsping test
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 01-SEP-2015 13:49:19
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test)))
OK (0 msec)
[oracle@test ~]$ sqlplus "sys/oracle@test as sysdba"
SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 1 13:49:30 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If everything is fine we can start installing Scotas OLS: first patch ODCI.jar, then update the Scotas OLS license and run the installer:

[oracle@test ~]$ cd /tmp
[oracle@test tmp]$ unzip /u01/app/oracle/data/lucene-ols-bin-2.0.1.zip
Archive: /u01/app/oracle/data/lucene-ols-bin-2.0.1.zip
… lots of messages here …
[oracle@test tmp]$ cp /u01/app/oracle/data/ODCI.jar ols/
[oracle@test tmp]$ cp /u01/app/oracle/data/valid.ols ols/bin/com/scotas/license/valid.ols
[oracle@test tmp]$ cd ols
[oracle@test ols]$ loadjava -f -r -v -s -g public -u sys/oracle@test ODCI.jar
…. lots of messages here ….
[oracle@test ols]$ ant -Ddb.str=test -Ddb.tbl=SYSAUX -Ddba.usr=sys -Ddba.pwd=oracle
…. lots of output here ….

The last step starts the Scotas OLS installer with the sys user, the oracle password and the target database test, using the SYSAUX tablespace (created by default during the database creation).
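A quick sanity check before continuing, assuming the installer creates the LUCENEUSER role that is granted later in this post, is to verify that the role now exists:

SQL> select role from dba_roles where role = 'LUCENEUSER';

If the query returns a row, the installer at least created its database objects.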

At this point Scotas OLS should be installed. You can start using it after a shutdown/startup of your database or, if that is not possible, by manually enabling the OLS parallel shared server processes:

[oracle@test ols]$ ant -Ddb.str=test -Ddba.usr=sys -Ddba.pwd=oracle enable-jobs
… some SQLPlus messages here ….
BUILD SUCCESSFUL
Total time: 1 second

Now we are ready to test Scotas OLS running in a Docker container. There are some tests prepared to run as scott, but the scott user is no longer included by default in the RDBMS installation; we can create it with these SQL commands (logged in as SYS):

SQL> create user scott identified by tiger
default tablespace SYSAUX
temporary tablespace TEMP
quota unlimited on SYSAUX;
SQL> grant connect,resource,luceneuser to scott;

The SYSAUX tablespace is not the right place to hold scott's data, but it is the tablespace available in a default RDBMS installation.
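If you prefer a dedicated tablespace, here is a minimal sketch; the datafile path and sizes are only examples for this container layout and are not part of the original instructions:

SQL> create tablespace ols_data
datafile '/u01/app/oracle/data/test/ols_data01.dbf' size 256M autoextend on next 64M;
SQL> alter user scott default tablespace ols_data quota unlimited on ols_data;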

With this user we can run some tests using the Ant tool, for example the Scotas OLS tutorial, after editing the file db/sqlunit.properties:

[oracle@test ols]$ cat db/sqlunit.properties
# SQLUnit test
sqlunit.driver = oracle.jdbc.driver.OracleDriver
sqlunit.url = jdbc:oracle:oci:@test
sqlunit.user = scott
sqlunit.password = tiger
[oracle@test ols]$ ant test-tutorial

One or more SQLUnit tests may fail (see sqlunit-test-tutorial.txt for details); this is sometimes caused by a different time zone configuration, but most of the tests will pass.
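If the failures look date or time related, one workaround worth trying, and this is only an assumption on my part rather than part of the test instructions, is aligning the client time zone before re-running the suite:

[oracle@test ols]$ export TZ=UTC
[oracle@test ols]$ ant test-tutorial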

Another test is test-parallel-ols. It simulates a heavy multi-tier application that inserts, updates, deletes, searches and counts hits on a simple table; its parameters must be changed directly in the build.xml file, for example:

<target name="test-parallel-ols" description="Run Test on Domain Index parallel functionality OLS">
  <junit printsummary="yes" fork="no" haltonfailure="yes" showoutput="yes">
    <sysproperty key="db.usr" value="scott"/>
    <sysproperty key="db.pwd" value="tiger"/>
    <sysproperty key="db.str" value="test"/>
    <sysproperty key="idx.upd" value="0"/>
    <sysproperty key="idx.sch" value="0,1"/>
    <sysproperty key="db.online-mode" value="true"/>
    <sysproperty key="db.commitOnSync" value="false"/>
    <sysproperty key="db.bufferedQueue" value="true"/>
    <sysproperty key="db.batch-size" value="200"/>
    ...

Sample test output:

[oracle@test ols]$ ant test-parallel-ols
Buildfile: build.xml

test-parallel-ols:
[junit] Running com.scotas.solr.index.TestDBIndexParallel
[junit] Table created: T1
[junit] Index altered: B$SIT1
[junit] OnLine: true CommitOnSync: false
[junit] Avg time on Solr deletes: 2
[junit] Avg time on Solr deletes: 1
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr deletes: 16
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr count hits: 26
[junit] Avg time on Solr search: 33
[junit] Avg time on Solr search: 33
[junit] Avg time on Solr count hits: 27
[junit] Avg time on Solr search: 32
[junit] Avg time on Solr count hits: 27
[junit] Index droped: B$SIT1
[junit] Table droped: T1
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 310.189 sec

BUILD SUCCESSFUL
Total time: 5 minutes 10 seconds

And that's all: Scotas OLS works perfectly using Oracle 12c on Docker.

New Oracle Lucene Domain Index release based on Lucene 3.0.2

Just a few words to announce a new release of Oracle Lucene Domain Index. This zip is valid for the 10g and 11g database versions (10g using classes back-ported from 1.5 to 1.4).
This release is compiled against Lucene 3.0.2 and incorporates a set of new features; here is the list:

  • Added a long-awaited feature: a parallel/shared/slave search process used during start-fetch-close and the CountHits function
  • Added the lfreqterms ancillary operator, returning the frequent-terms array of the rows visited
  • Added the lsimilarity ancillary operator, returning a computed Levenshtein distance for the rows visited
  • Added an ldidyoumean pipelined table function using the DidYouMean.indexDictionary storage
  • Added tests using SQLUnit

The biggest addition is the parallel-shared-slave search process; this architectural change had been on my to-do list for a long time and I finally added it in this release :)
The idea behind this is to have a new Oracle process started by the DBMS_SCHEDULER sub-system during the database startup process and stopped immediately before shutdown.
Now this process is responsible for implementing the ODCI methods start-fetch-close/count-hits on behalf of the client processes (the processes associated with specific user sessions), which connect to the shared-slave process using RMI.
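Just to illustrate the idea, a process like this can be registered as a job that starts with the instance; the job name and the entry point below are hypothetical and are not the actual Lucene Domain Index objects:

SQL> BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name   => 'LUCENE_SLAVE_SEARCHER',                   -- hypothetical job name
job_type   => 'PLSQL_BLOCK',
job_action => 'BEGIN slave_search.start_listener; END;', -- hypothetical entry point that exposes the RMI endpoint
start_date => SYSTIMESTAMP,
enabled    => TRUE,
comments   => 'Shared slave process serving start-fetch-close/count-hits over RMI');
END;
/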
With this new architecture we have two principal benefits:

  • Reduce memory consumption
  • Increase Lucene Cache Hits

Memory consumption is lower because the internal OJVM implementation is attached to a client session, so the Java space used by Lucene structures is isolated and independent from other concurrent sessions; now all the Lucene memory structures used during the index scan process are created in a shared process and therefore not replicated.
Also, if one session submits a Lucene search, that search is cached for subsequent queries; any subsequent query coming from the same client session, or from any other session associated with the same index and using the same query string, results in a cache hit (see the sketch below).
I'll explain this new architecture in more detail in another post, also showing how many parallel processes can work together when using parallel indexing and searching.
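For example, with the lcontains operator (the table and column names here are only illustrative), two sessions running the same query against the same index should now be served from the shared process cache instead of rebuilding the Lucene structures in each session:

SQL> select id from emails where lcontains(body, 'invoice', 1) > 0;

The first execution pays the cost of opening the index; repeated executions of the same query string, from this or any other session on the same index, hit the cache.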
On the other hand, next week I'll be at Oracle OpenWorld 2010 in San Francisco presenting the session:

Schedule: Tuesday: 09:30AM
Session ID: S315660
Title: Database Applications Lifecycle Management
Event: JavaOne and Oracle Develop
Stream(s): ORACLE DEVELOP
Track(s): Database Development
Abstract: Complex applications, such as Java running inside the database, require application lifecycle management to develop and deliver good code. This session will cover some best practices, tools, and experience managing and delivering code that runs inside the database, including tools for debugging, automated testing, packaging, deployment, and release management. Some of the tools presented will include Apache Maven, JUnit, log4j, Oracle JDeveloper, and others integrated into the Oracle Java Virtual Machine (JVM) development environment.

See you there or at any of the planned networking events :)

Dealing with JDK1.5 libraries on Oracle 10g

Modern libraries are compiled with JDK 1.5, and the question is how to deal with these libraries on an Oracle 10g OJVM.
Some examples are the Lucene 3.x branch or Hadoop. The solution that I tested uses the Java Retrotranslator and some complementary libraries.
I have tested this solution successfully on the Lucene Domain Index 3.x branch.
As you can see in the CVS there is a build.xml file which performs all the Retrotranslator steps. Here is a step-by-step explanation of the process:

  1. Load all the required libraries provided by the Retrotranslator project, which implement features not available on the JDK 1.4/1.3 runtime; this is done in the target load-retrotranslator-sys-code. This target loads several libraries into the SYS schema because they are immutable, or have a low probability of change; they will only change if we upgrade the Retrotranslator version. All the libraries are then compiled to assembler using the NCOMP utility, in the target ncomp-runtime-retrotranslator-sys-code.
  2. Then we can convert the libraries compiled with JDK 1.5 (in this build.xml, the Lucene and Lucene Domain Index implementations) to a JDK 1.4 target runtime. This is done in the targets backport-code-lucene and backport-code-odi: the first target converts all Lucene libraries excluding the JUnit and test code, and these libraries require the JUnit and Retrotranslator jars as dependencies; the second target converts the Lucene Domain Index jar, which depends on the Lucene core and Oracle's libs. The back-port operation generates a file named lucene-odi-all-${version}.jar with the Lucene and Lucene Domain Index code ready to run on a JDK 1.4 runtime.
  3. Once we have the code back-ported to a JDK 1.4 runtime we can upload and NCOMP it into Oracle 10g; this is done in the targets load-lucene-odi-backported-code and ncomp-lucene-all (a combined invocation is sketched below).
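Putting it all together, a typical invocation of those targets, assuming the connection properties are already set in build.xml, would look like this:

$ ant load-retrotranslator-sys-code ncomp-runtime-retrotranslator-sys-code
$ ant backport-code-lucene backport-code-odi
$ ant load-lucene-odi-backported-code ncomp-lucene-all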

And that's all! The code works fine on my Oracle 10.2 database on Linux :) Finally, users of 11g and 10g databases can deploy the Lucene Domain Index implementation using a single distribution file.