Scotas OLS official Docker image build scripts

We have uploaded to GitHub two official Docker image build scripts, for Scotas OLS 2.0.1 (RDBMS 12c R1) and OLS 2.0.2 (RDBMS 12c R2).

Both scripts are based on Oracle's official RDBMS 12c Docker build scripts, and they make the installation of Oracle plus Scotas OLS very easy.

They basically start from your own Oracle 12c Docker image and install OLS's binaries and license on top of it. Follow these steps to get your OLS up and running on Linux/Mac/Windows:

First check that you have the 12c image ready, for example:

[mochoa@localhost ols-12cr2]$ docker image ls|grep oracle
….
oracle/database                12.2.0.1-ee                 1a870e15682d                6 weeks ago            14.8 GB
….

Then check out our scripts from GitHub using:

[mochoa@localhost tmp]$ git clone https://github.com/scotas/docker-images.git
Cloning into ‘docker-images’…
remote: Counting objects: 39, done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 39 (delta 14), reused 32 (delta 10), pack-reused 0
Unpacking objects: 100% (39/39), done.
Checking connectivity… done.
[mochoa@localhost tmp]$ cd docker-images/ols-12cr2/
[mochoa@localhost ols-12cr2]$ ./buildDockerImage.sh

.. Docker build steps here (requires an internet connection) ..

Once you have the Scotas OLS image ready in your local Docker image repository, you can install OLS at the first startup of your Docker container using:

docker run --name <container name> \
-p <host port>:1521 -p <host port>:5500 \
-e ORACLE_SID=<your SID> \
-e ORACLE_PDB=<your PDB name> \
-e ORACLE_PWD=<your database passwords> \
-e ORACLE_CHARACTERSET=<your character set> \
-v [<host mount point>:]/opt/oracle/oradata \
ols-official:2.0.2

Important: unlike Oracle's official Docker image script, the first run requires the ORACLE_PWD argument; it does not work with a randomly generated password. During this first startup the RDBMS will create all the datafiles in the <host mount point> directory and unpack and install Scotas OLS; you will see something like this:

ODCI patched
Installing OLS…
OLS installed see /home/oracle/install-OLS.log and /home/oracle/install-OLS.err files for details
OLS Installed OK….
#########################
DATABASE IS READY TO USE!
#########################

After a successful database creation you can start/stop the container, or rm it and run it again without the ORACLE_PWD argument.

Note: the CDB and PDB$SEED are patched with an updated version of the ODCI.jar library; the library included with RDBMS 12c R1 and R2 is outdated and does not have correct wrapper files for the latest ODCI library. See this post for more details.

Note: during first startup you will see an ora_m00n process consuming a lot of CPU. This is because once you start using Java inside the database, a JIT compilation phase starts converting Java code to native code; this covers the complete Java/Oracle stack and the Scotas OLS code. After about a week of OLS usage all the Java code will have been compiled and this process will consume only a little CPU; obviously, once all the code is compiled, your indexing and query times will be faster.
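If you want to confirm that the JIT is active, here is a quick check (a sketch, assuming the standard java_jit_enabled init parameter available since 11g; output abbreviated):

SQL> show parameter java_jit_enabled

NAME                 TYPE     VALUE
-------------------- -------- -----
java_jit_enabled     boolean  TRUE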

Installing Scotas OLS on 12c/Docker

Scotas OLS is Solr Lucene + Oracle

Docker is the new way to do DevOps operations, especially during the development process.

You can run Scotas OLS with Oracle 12c on Docker by following the post On docker, Ubuntu and Oracle RDBMS.

Assuming that there is a Docker Oracle 12c template image ready for use:

# docker images
REPOSITORY TAG
oracle-12102 latest
solr/ols latest
alvinhenrick/hadoop-nn-dn latest
alvinhenrick/hadoop-dn latest
alvinhenrick/hadoop-base latest
alvinhenrick/serf latest
java openjdk-6-jdk
ubuntu latest
ubuntu trusty
oraclelinux 6

Once we have the image ready, the first step is to prepare a shared directory on the host machine to be exposed to a test container; here are the steps:

# mkdir -p /var/lib/docker/db/test
# chown 54321:54321 /var/lib/docker/db/test
# cp /var/lib/docker/dockerfiles/build-oracle-12102/db_install-simple.dbt /var/lib/docker/db/test/db_install.dbt

db_install.dbt must have at least these two features enabled: JSERVER and XDB_PROTOCOLS. The first is required because Scotas OLS is entirely written in Java (Apache Solr); the second is needed because the Scotas OLS storage can be exported through the XMLDB repository (WebDAV/FTP).
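Once the database is created (see below), you can verify that both features are present; a quick sanity check (a sketch using the standard dba_registry dictionary view, component names may vary slightly by version):

SQL> SELECT comp_id, comp_name, status FROM dba_registry
     WHERE comp_id IN ('JAVAVM','XDB');

Both components (JAVAVM is the JServer JVM, XDB is Oracle XML Database) should report VALID.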

The first docker run of the image will create an empty database named test, for example:

# docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/test:/u01/app/oracle/data --name test --hostname test --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

This will take a while depending on your hardware; in the meantime the operation log can be checked using:

# docker logs -f test

Once the Oracle RDBMS is created, it is a good idea to stop the container and start it again to see if everything is fine:

# docker stop test
# docker rm test

Note that the docker rm operation will not remove your created DB; it only removes the changes made to the Docker image during the operation. You can check that by inspecting the shared directory:

# cd /var/lib/docker/db/test
# ls
DATABASE_IS_SETUP db_install.dbt test

Now, to install Scotas OLS, we have to copy the install zip file, the license, and the patched ODCI.jar, as described in the post Upgrading system libraries…, for example:

# cp /tmp/lucene-ols-bin-2.0.1.zip /var/lib/docker/db/test
# cp /tmp/ODCI.jar /var/lib/docker/db/test
# cp /tmp/valid.ols /var/lib/docker/db/test

These files, copied on the host machine, will be available inside the Docker container during the next run:

# docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/test:/u01/app/oracle/data --name test --hostname test --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
# docker exec -ti test bash
[root@test oracle]# id
uid=0(root) gid=0(root) groups=0(root)

The last step attaches us interactively to the running container using a bash shell; with the oracle-12102 template image the shell is connected as root.

The first step is to log in as the oracle owner and check SQL*Net connectivity and the SYS password; the oracle-12102 template leaves the SYS password defined as oracle:

[root@test oracle]# su -l oracle
[oracle@test ~]$ tnsping test
TNS Ping Utility for Linux: Version 12.1.0.2.0 – Production on 01-SEP-2015 13:49:19
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test)))
OK (0 msec)
[oracle@test ~]$ sqlplus "sys/oracle@test as sysdba"
SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 1 13:49:30 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If everything is fine we can start installing Scotas OLS: first patch ODCI.jar, then update the Scotas OLS license, and run the installer:

[oracle@test ~]$ cd /tmp
[oracle@test tmp]$ unzip /u01/app/oracle/data/lucene-ols-bin-2.0.1.zip
Archive: /u01/app/oracle/data/lucene-ols-bin-2.0.1.zip
… lots of messages here …
[oracle@test tmp]$ cp /u01/app/oracle/data/ODCI.jar ols/
[oracle@test tmp]$ cp /u01/app/oracle/data/valid.ols ols/bin/com/scotas/license/valid.ols
[oracle@test tmp]$ cd ols
[oracle@test ols]$ loadjava -f -r -v -s -g public -u sys/oracle@test ODCI.jar
…. lots of messages here ….
[oracle@test ols]$ ant -Ddb.str=test -Ddb.tbl=SYSAUX -Ddba.usr=sys -Ddba.pwd=oracle
…. lots of output here ….

The last step starts the Scotas OLS installer using the sys user, the oracle password, and the target database test, with the SYSAUX tablespace (created by default during installation).

At this point Scotas OLS should be installed. You can start using it after a shutdown/startup of your database or, if that is not possible, by manually enabling the OLS parallel shared server processes:

[oracle@test ols]$ ant -Ddb.str=test -Ddba.usr=sys -Ddba.pwd=oracle enable-jobs
… some SQLPlus messages here ….
BUILD SUCCESSFUL
Total time: 1 second

Now we are ready to test Scotas OLS running in a Docker container. There are some tests prepared to run as SCOTT, but the SCOTT user is no longer included by default in the RDBMS installation; we can create it with these SQL commands (logged in as SYS):

SQL> create user scott identified by tiger
default tablespace SYSAUX
temporary tablespace TEMP
quota unlimited on SYSAUX;
SQL> grant connect,resource,luceneuser to scott;

The SYSAUX tablespace is not the right place to hold SCOTT's data, but it is the tablespace available in a default RDBMS installation.
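If you prefer a dedicated tablespace for the tests, here is a minimal sketch (the datafile path is hypothetical and will depend on your install):

SQL> CREATE TABLESPACE scott_data
     DATAFILE '/u01/app/oracle/data/test/scott_data01.dbf' SIZE 100M
     AUTOEXTEND ON NEXT 10M;
SQL> ALTER USER scott DEFAULT TABLESPACE scott_data QUOTA UNLIMITED ON scott_data;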

With this user we can run some tests using the Ant tool, for example the Scotas OLS tutorial, by editing the file db/sqlunit.properties:

[oracle@test ols]$ cat db/sqlunit.properties
# SQLUnit test
sqlunit.driver = oracle.jdbc.driver.OracleDriver
sqlunit.url = jdbc:oracle:oci:@test
sqlunit.user = scott
sqlunit.password = tiger
[oracle@test ols]$ ant test-tutorial

One or more SQLUnit tests may fail (see sqlunit-test-tutorial.txt for details); this is sometimes caused by a different timezone configuration, but most of the tests will pass.

Another test is test-parallel-ols. This test simulates a heavy multi-tier application which inserts, updates, deletes, searches, and counts hits on a simple table; its parameters should be changed directly in the build.xml file, for example:

<target name="test-parallel-ols" description="Run Test on Domain Index parallel functionality OLS">
<junit printsummary="yes" fork="no" haltonfailure="yes" showoutput="yes">
<sysproperty key="db.usr" value="scott"/>
<sysproperty key="db.pwd" value="tiger"/>
<sysproperty key="db.str" value="test"/>
<sysproperty key="idx.upd" value="0"/>
<sysproperty key="idx.sch" value="0,1"/>
<sysproperty key="db.online-mode" value="true"/>
<sysproperty key="db.commitOnSync" value="false"/>
<sysproperty key="db.bufferedQueue" value="true"/>
<sysproperty key="db.batch-size" value="200"/>

Sample test output:

[oracle@test ols]$ ant test-parallel-ols
Buildfile: build.xml

test-parallel-ols:
[junit] Running com.scotas.solr.index.TestDBIndexParallel
[junit] Table created: T1
[junit] Index altered: B$SIT1
[junit] OnLine: true CommitOnSync: false
[junit] Avg time on Solr deletes: 2
[junit] Avg time on Solr deletes: 1
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr updates: 2
[junit] Avg time on Solr deletes: 16
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr inserts: 0
[junit] Avg time on Solr count hits: 26
[junit] Avg time on Solr search: 33
[junit] Avg time on Solr search: 33
[junit] Avg time on Solr count hits: 27
[junit] Avg time on Solr search: 32
[junit] Avg time on Solr count hits: 27
[junit] Index droped: B$SIT1
[junit] Table droped: T1
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 310.189 sec

BUILD SUCCESSFUL
Total time: 5 minutes 10 seconds

And that's all: Scotas OLS works perfectly using Oracle 12c on Docker.

Apache Solr tutorial in a relational way

A common obstacle when starting with Scotas products is the lack of knowledge on how to implement typical Solr operations in a SQL way, or vice versa.

The idea of this tutorial is to do the same things explained in the Apache Solr tutorial, but with relational concepts, using Scotas OLS.
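For example, a Solr tutorial query such as q=video AND price:[* TO 400] becomes a plain SQL predicate with Scotas OLS; this statement is taken from the 5 minutes tutorial later in this post, assuming its OLS_TUTORIAL table and TUTORIAL_SIDX index are in place:

SQL> SELECT id,price FROM OLS_TUTORIAL WHERE SCONTAINS(ID,'video AND price:[* TO 400]')>0;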

Test Transparent Data Encryption on Scotas OLS

This example shows how Scotas OLS helps you protect your data. Following this guide, sensitive data is never exposed in Solr/Lucene, and your application remains compliant with the Health Insurance Portability and Accountability Act (HIPAA) and Payment Card Industry (PCI) data security standards.

– First, enable TDE on the Oracle database (the wallet configuration must be done first), connected as SYS:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE IDENTIFIED BY test;
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY test;

- Check the datafile locations in order to create a new small one:

SQL> SELECT RPAD(tablespace_name,18)||' '||RTRIM(file_name) Tablespace FROM dba_data_files ORDER BY tablespace_name;

TABLESPACE
——————————————————————————–
EXAMPLE /u01/app/oracle/oradata/orcl/example01.dbf
SYSAUX /u01/app/oracle/oradata/orcl/sysaux01.dbf
SYSTEM /u01/app/oracle/oradata/orcl/system01.dbf
UNDOTBS1 /u01/app/oracle/oradata/orcl/undotbs01.dbf
USERS /u01/app/oracle/oradata/orcl/users01.dbf


SQL> CREATE TABLESPACE TEST
               DATAFILE '/u01/app/oracle/oradata/orcl/test01.dbf' SIZE 128K
               AUTOEXTEND ON NEXT 64K;


SQL> CREATE USER test identified by test
               temporary tablespace temp
               default tablespace test
               quota unlimited on test;

– Grant privileges, connect, and test non-secure storage by checking that some text is stored unencrypted in the datafile:

SQL> GRANT connect,resource,luceneuser TO test;

SQL> CONN test/test@orcl

SQL> CREATE TABLE tde_test (
               id NUMBER(10),
               data VARCHAR2(50)
               )
              TABLESPACE test;

SQL> INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');

SQL> commit;

SQL> CREATE index tde_idx on tde_test(id)
               indextype is Lucene.SolrIndex
               parameters('LogLevel:INFO;IncludeMasterColumn:false;ExtraCols:data;HighlightColumn:DATA');

SQL> SELECT count(*) from tde_test where scontains(id,'secret')>0;
             COUNT(*)
             ————
                     1

– The table and index were created; now we can connect as SYS, flush memory structures to disk, and verify the string using the grep utility:

SQL> conn / as sysdba
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
              System altered.
SQL> !grep "This is a secret" /u01/app/oracle/oradata/orcl/test01.dbf
             Binary file /u01/app/oracle/oradata/orcl/test01.dbf matches

- Drop the user and datafile and start the test again:


SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE IDENTIFIED BY test;

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY test;

SQL> drop user test cascade;

SQL> DROP TABLESPACE test INCLUDING CONTENTS AND DATAFILES;

------------------ recreate the tablespace test and the user here ------------------

– Connect again using the new user, but now using TDE on the column and the index:

SQL> CONN test/test@orcl

SQL> CREATE TABLE tde_test (
               id NUMBER(10),
               data VARCHAR2(50) ENCRYPT
               )
              TABLESPACE test;

SQL> INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');

SQL> COMMIT;

SQL> SELECT * FROM tde_test;


ID DATA
———- ————————————————–
1 This is a secret!


SQL> create index tde_idx on tde_test(id)
               indextype is Lucene.SolrIndex
               parameters('PopulateIndex:false;IncludeMasterColumn:false;ExtraCols:data;HighlightColumn:DATA');

SQL> select count(*) from tde_test where scontains(id,'secret')>0;

COUNT(*)
———-
0

– PopulateIndex:false was used so as not to fill the Solr/Lucene structures until encryption is enabled on the LOB column that stores the Lucene files:

SQL> alter table tde_idx$t modify lob(data) (encrypt using '3des168');

SQL> alter index tde_idx rebuild parameters('LogLevel:INFO');

SQL> select count(*) from tde_test where scontains(id,'secret')>0;

COUNT(*)
———-
1


SQL> select sscore(1),shighlight(1) from tde_test where scontains(id,'secret','score desc',1)>0;

SSCORE(1)  SHIGHLIGHT(1)
———-        —————————-—————————————————-
.306852818 This is a <em>secret</em>!

– Now connect again and verify that the sensitive text is not visible in the table data or the index structures:

SQL> conn / as sysdba

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

System altered.

– Now the grep utility cannot find any matches for the complete text or parts of it (keywords):

SQL> !grep "This is a secret" /u01/app/oracle/oradata/orcl/test01.dbf

SQL> !grep "secret" /u01/app/oracle/oradata/orcl/test01.dbf
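As an additional check, you can ask the data dictionary which columns are declared encrypted (a sketch using the standard dba_encrypted_columns view; for the example above TDE_TEST.DATA should be listed):

SQL> SELECT table_name, column_name, encryption_alg FROM dba_encrypted_columns;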

Conclusion: sensitive data is never exposed in Solr/Lucene, and your application remains compliant with the Health Insurance Portability and Accountability Act (HIPAA) and Payment Card Industry (PCI) data security standards.

How to…? – Scotas and Apex

This example shows how easily you can add typical free-text search features such as faceting, more-like-this, and auto-complete, among others, using the Scotas Push Connector.

Assumptions:

  • You have already installed Scotas PC.
  • You have created one user, in our case "scott".
  • You have created a table FILM_ADMIN_TEST (Download here).
  • A Solr instance is running on localhost, port 8983.

Step one – Indexing

  • As sysdba, grant the PCUSER role to the user scott:

                      SQL> grant PCUSER to scott;

  • As the scott user, create a PC index on the FILM_ADMIN_TEST table:

SQL> CREATE INDEX FILM_ADMIN_TEST_SIDX ON SCOTT.FILM_ADMIN_TEST(TEXT) INDEXTYPE IS PC.SOLR PARAMETERS('{Updater: "localhost@8983", Searcher: "localhost@8983", CommitOnSync:true, SyncMode:OnLine, LockMasterTable:false, IncludeMasterColumn:false, HighlightColumn:" \"text_t\"", DefaultColumn:"text", ExtraCols:"TEXT \"text_t\",GENRE \"genre_s\",COUNTRY \"country_s\",LANGUAGE \"language_s\",DIRECTED_BY \"directed_by_s\""}');

Note: this statement creates an index over the column TEXT; we also included other fields to get facets.

Step two – Apex. The demo application should be ready to use on the Oracle XE installation.

  • Edit the Apex application: log in at

http://127.0.0.1:8080/apex/

  • Select Application Builder -> Create
  • Enter a name for the application.


  • Create a new Page

Select a Report page type and use a SQL query as the page source. Insert the SQL query with the Scotas facets.

Query:

SELECT QRYTEXT AS LANGUAGE, HITS AS HITS FROM TABLE( facet_to_tbl( json_list( json( SOLRPUSHCONNECTOR.FACET('FILM_ADMIN_TEST_SIDX', '*', 'facet.field=language_s').FIELDS).get(1))))

Note: the Push Connector returns facets in JSON format. To use facets as a table, we provide the function facet_to_tbl.
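The same pattern works for any of the faceted fields defined at index creation time; for example, a hypothetical query for the genre facet:

SELECT QRYTEXT AS GENRE, HITS AS HITS FROM TABLE( facet_to_tbl( json_list( json( SOLRPUSHCONNECTOR.FACET('FILM_ADMIN_TEST_SIDX', '*', 'facet.field=genre_s').FIELDS).get(1))))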

  • Select add page

Select the "Add Page" button to create this page. After that, you should see the page in the list.

  • Create tabs
  • Shared components
  • Application Attributes
  • User interface
  • Create Application

Step three – Run application

Run the application using your username/password, in our case scott/tiger.

The page will look like:

Here you can extend the page, for example by making the results clickable to select a language, or by filtering by range.

Conclusion: this is only a simple example of using Scotas features with Apex. From here on you can make use of the best of both technologies.

Scotas OLS and Oracle BigData architecture

Following the Oracle Information Architecture: An Architect's Guide to Big Data, I modified figure 3 to show how the Scotas OLS product fits into this reference architecture.

As you can see, Scotas OLS search integration sits at the decision layer, providing:

  • Rich free text searching a la Google
  • Faceted Search and Filtering
  • Scalable index maintenance, no index fragmentation, NRT data changes
  • Advanced, Configurable Text Analysis
  • A Real Data Schema, with Numeric Types, Dynamic Fields, Unique Keys
  • Powerful Extensions to the Lucene Query Language
  • Geospatial Search

With these additions, decision makers have at hand a natively SQL-integrated Solr solution that can be used with any Oracle analytics tool.
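For example, any SQL-capable analytics tool can consume Solr facets through a plain query; here is a minimal sketch reusing the SFACETS table function shown in the 5 minutes tutorial with Scotas OLS later in this post:

SELECT FIELD,SJOIN(T.FACETS) F FROM TABLE(SFACETS(USER||'.TUTORIAL_SIDX','*:*','facet.field=cat')) T;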

5 minutes tutorial with ElasticSearch Push Connector

Continuing with the 5 minutes tutorial started in a previous post, this one is about how to start working with the Scotas ElasticSearch Push Connector technology.

After you install the Push Connector and your ElasticSearch (ES) server, you can do a simple test using the examples included in the db/tutorial/esTutorial.sql script located in your distribution directory.

Assuming that the ES server is installed with defaults, we can start it using the -f flag to see the log output on the console, for example:

[mochoa@localhost elasticsearch-0.19.4]$ export JAVA_HOME=/usr/local/jdk1.6
[mochoa@localhost elasticsearch-0.19.4]$ export PATH=$JAVA_HOME/bin:$PATH
[mochoa@localhost elasticsearch-0.19.4]$ bin/elasticsearch -f
[2012-06-22 11:00:40,089][INFO ][node ] [Grim Hunter] {0.19.4}[4061]: initializing …

[2012-06-22 11:00:46,899][INFO ][http ] [Grim Hunter] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.100.55:9200]}
[2012-06-22 11:00:46,900][INFO ][node ] [Grim Hunter] {0.19.4}[4061]: started
[2012-06-22 11:00:46,907][INFO ][gateway ] [Grim Hunter] recovered [0] indices into cluster_state

As you can see above, the ES server is listening on TCP/IP port 9200, which is the default value used by the Push Connector (PC).

First, on another console, we need to connect as a user with the DBA role to grant the PCUSER role to scott. Then we are able to start working with the connector examples using the traditional scott user; let's see:

-bash-4.2$ sqlplus “/ as sysdba”

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 22 11:01:10 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> grant PCUSER TO scott;

Grant succeeded.

Now we are ready to create a table to store a simple tweet structure, as shown in many ES examples:

CREATE TABLE ES_TUTORIAL (
USER_ID VARCHAR2(30) PRIMARY KEY,
MESSAGE VARCHAR2(140),
POST_DATE TIMESTAMP);

and populate it with some rows:

insert into ES_TUTORIAL values (
'kimchy',
'trying out Elastic Search',
TO_TIMESTAMP('2009-11-15T14:12:12','YYYY-MM-DD"T"HH24:MI:SS'));

insert into ES_TUTORIAL (
select 'kimchy-'||rownum,'trying out Elastic Search '||rownum,TO_TIMESTAMP(sysdate+rownum,'YYYY-MM-DD"T"HH24:MI:SS') from dual connect by rownum <= 100);

insert into ES_TUTORIAL (
select 'kimchy-'||(rownum+100),'trying out Elastic Search '||rownum,TO_TIMESTAMP(sysdate+rownum,'YYYY-MM-DD"T"HH24:MI:SS') from dual connect by rownum <= 100);

insert into ES_TUTORIAL (
select 'kimchy-'||(rownum+200),'trying out Elastic Search '||rownum,TO_TIMESTAMP(sysdate+rownum,'YYYY-MM-DD"T"HH24:MI:SS') from dual connect by rownum <= 100);

commit; 

Note that we are using the TO_TIMESTAMP SQL function to convert a Java-compatible timestamp string into an Oracle timestamp value.
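The inverse direction, rendering an Oracle date for ES at indexing time, uses a TO_CHAR mask (the same mask appears in the ExtraCols parameter of the CREATE INDEX below); a quick sketch, output will vary:

SQL> SELECT TO_CHAR(SYSTIMESTAMP,'YYYY-MM-DD"T"HH24:MI:SS') FROM DUAL;
2012-06-22T11:30:15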

Now we are ready to index the above table:

[mochoa@localhost pc]$ sqlplus scott/tiger@test

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 22 11:38:21 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set define off timing on serverout on
SQL> CREATE INDEX TWEET_ESIDX ON ES_TUTORIAL(MESSAGE) INDEXTYPE IS PC.ES
PARAMETERS('{Settings:{"number_of_shards" : 1},SyncMode:OnLine,LockMasterTable:false,IncludeMasterColumn:false,ExtraCols:"user_id \"user\",to_char(post_date,"YYYY-MM-DD\"T\"HH24:MI:SS") \"post_date\",message \"message\""}');

Index created.

Elapsed: 00:00:00.74

We can see in the output on the ES console that a new index has been created:

[2012-06-22 11:38:47,837][INFO ][cluster.metadata ] [Grim Hunter] [scott.tweet_esidx] creating index, cause [api], shards [1]/[1], mappings []
[2012-06-22 11:38:48,404][INFO ][cluster.metadata ] [Grim Hunter] [scott.tweet_esidx] update_mapping [row] (dynamic)

Note that the index was created on the ES server with the syntax [schema].[index_name], in lower case.

Here is an explanation of the parameters used at index creation time:

  • Settings: any ES settings parameter; in the example, the sharding configuration.
  • SyncMode: OnLine means Near Real Time (NRT) mode for inserts/updates and Real Time (RT) mode for deletes; if you need more information about this configuration please read the Scotas OLS White Paper, section Working modes, replacing Solr by ES.
  • LockMasterTable: false means that during the table scan for selecting rows there is no row-level locking; when this parameter is true, rows are selected in "for update" mode.
  • IncludeMasterColumn: false means that the values of the master column (the one named in the create index syntax, MESSAGE) will not be included in the index structure; please see the next parameter.
  • ExtraCols: the list of columns, formats, and field names used in the ES index structure. The above example shows that the ES index will include three fields: user, post_date and message. Note that field names are case sensitive and here we are lower-casing them.

Now the index is ready to use and has been populated with the 301 rows in the table. To check that all rows were properly indexed we can query for the rowid field; this field is always indexed and works as the id in the ES index, let's see:

SQL> select ElasticSearch.countHits('TWEET_ESIDX','rowid:*') from dual;
301
Elapsed: 00:00:00.04

We can also select rows that match a specific string, for example the string 53:

SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:53',1)>0;
2.327857                      trying out Elastic Search 53
2.327857                      trying out Elastic Search 53
2.327857                      trying out Elastic Search 53
Elapsed: 00:00:00.14

A few comments about the above syntax for querying ES:

  • econtains, the operator used to query ES through the Push Connector:
    the first argument is the master column of the index, the one included in the create index syntax;
    the second argument is the query string, using the ES q parameter of the _search? URL;
    the third argument is the ancillary operator identifier (used by escore(), see the next item);
    the return value is >0 when the row matches the input query.
  • escore, used to get the score computed by the econtains() operator for the specific row; note that its input parameter must be correlated to the last argument of econtains().

Now let's explain what NRT means with a few queries; let's see what happens during an update:

SQL> update ES_TUTORIAL set message=message||'-mod' where rownum<=10;
10 rows updated.

The RDBMS tells us that 10 rows were updated, but NRT defers this update to the ES server until we commit the transaction; we can easily check, using the above select, whether it returns mod as a positive hit:

SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
no rows selected

Now, what happens when we commit the pending changes of the current transaction:

SQL> commit;
Commit complete.
SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
no rows selected
SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;

1.6282115     trying out Elastic Search 76-mod
1.6282115     trying out Elastic Search 77-mod
1.6282115     trying out Elastic Search 78-mod
1.6282115     trying out Elastic Search 79-mod
1.6282115     trying out Elastic Search 80-mod
1.6282115     trying out Elastic Search 81-mod
1.6282115     trying out Elastic Search 82-mod
1.6282115     trying out Elastic Search 83-mod
1.6282115     trying out Elastic Search 84-mod
1.6282115     trying out Elastic Search 85-mod
10 rows selected.

After a few milliseconds the ES layer receives the update and the rows are returned as positive hits.

Now we can test RT deletes, for example:

SQL> delete from es_tutorial where rowid in (select rowid from ES_TUTORIAL where econtains(message,'message:mod',1)>0);
10 rows deleted.
SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
no rows selected

As you can see, after the 10 rows were deleted they immediately stopped being returned as positive hits (Real Time); for the current transaction these 10 rows are not available unless we do a rollback.

To check what happens on the ES side, we can open another concurrent transaction in parallel and check whether these rows are visible to other users (remember that the transaction is not committed yet); let's see a second connection:

SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
1.6282115     trying out Elastic Search 76-mod
…..
1.6282115     trying out Elastic Search 85-mod
10 rows selected.

Now we commit the deleted rows in the first connection:

SQL> commit;
Commit complete.
SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
no rows selected

And see what happens in the second connection:

SQL> select escore(1),message from ES_TUTORIAL where econtains(message,'message:mod',1)>0;
no rows selected

The second connection gets no hits because the rows were definitively deleted and the ES server received the changes.

Finally, by dropping the index, the ES server drops the index structure too; here is the SQL command:

SQL> drop index TWEET_ESIDX;
Index dropped.

and the ES server console:

[2012-06-22 14:52:59,460][INFO ][cluster.metadata ] [Grim Hunter] [scott.tweet_esidx] deleting index

And that's all. As you can see, the ElasticSearch Push Connector works like all Scotas searching products. Functionality not covered in this blog post includes faceting, sorting by score or other fields, detecting changes on multiple columns, etc., but it is all supported by the connector.

5 minutes tutorial with Scotas OLS

If you already have Scotas OLS installed, this is a 5-minute guide to using the Oracle-Solr integration.

The Scotas OLS binary distribution has sample configuration files located in the db/tutorial directory, which are used to reproduce all the operations described in the Solr Tutorial, but using Scotas OLS.

Here is the complete list of steps:

1) Grant the LUCENEUSER role to SCOTT (this step can be skipped if it was already done):

-bash-4.2$ sqlplus “/ as sysdba”
SQL*Plus: Release 11.2.0.3.0 Production on Thu May 10 08:07:51 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> grant LUCENEUSER to SCOTT;
Grant succeeded.

2) Upload the Solr-related files to the XMLDB repository (assuming that we will create an index named SCOTT.TUTORIAL_SIDX):

# cd db/tutorial/LUCENE.TUTORIAL_SIDX/conf
# ftp localhost 2100
Connected to localhost.localdomain.
220- localhost.localdomain
Unauthorised use of this FTP server is prohibited and may be subject to civil and criminal prosecution.
220 localhost.localdomain FTP Server (Oracle XML DB/Oracle Database) ready.
Name (localhost:mochoa): scott
331 pass required for SCOTT
Password: *****
230 SCOTT logged in
Remote system type is Unix.
ftp> cd public/solr
250 CWD Command successful
ftp> mkdir SCOTT.TUTORIAL_SIDX
250 CWD Command successful
ftp> cd SCOTT.TUTORIAL_SIDX
250 CWD Command successful
ftp> mkdir conf
250 CWD Command successful
ftp> cd conf
250 CWD Command successful
ftp> mput *
mput elevate.xml [anpqy?]? a
………
ftp> dir
227 Entering Passive Mode (127,0,0,1,249,27)
150 ASCII Data Connection
-rw-r--r-- 1 SCOTT oracle 1274 MAY 10 11:15 elevate.xml
-rw-r--r-- 1 SCOTT oracle 78514 MAY 10 11:15 mapping-FoldToASCII.txt
-rw-r--r-- 1 SCOTT oracle 2868 MAY 10 11:15 mapping-ISOLatin1Accent.txt
-rw-r--r-- 1 SCOTT oracle 367 MAY 10 11:15 mb_attributes.txt
-rw-r--r-- 1 SCOTT oracle 121 MAY 10 11:15 mb_letterBuckets.txt
-rw-r--r-- 1 SCOTT oracle 873 MAY 10 11:15 protwords.txt
-rw-r--r-- 1 SCOTT oracle 30717 MAY 10 11:15 schema.xml
-rw-r--r-- 1 SCOTT oracle 921 APR 13 13:46 scripts.conf
-rw-r--r-- 1 SCOTT oracle 13267 MAY 10 11:15 solrconfig.xml
-rw-r--r-- 1 SCOTT oracle 13 MAY 10 11:15 spellings.txt
-rw-r--r-- 1 SCOTT oracle 1171 MAY 10 11:15 stopwords.txt
-rw-r--r-- 1 SCOTT oracle 1132 MAY 10 11:15 synonyms.txt
226 ASCII Transfer Complete
ftp> pwd
257 "/public/solr/SCOTT.TUTORIAL_SIDX/conf" is current directory.
ftp> quit

3) Create and populate a sample table. This table holds a relational version of the Solr tutorial documents (these SQL statements are in the db/tutorial/schemaTutorial.sql file).

[mochoa@localhost tutorial]$ sqlplus scott/tiger@orcl
SQL*Plus: Release 11.2.0.3.0 Production on Thu May 10 08:22:44 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> set define off
SQL> CREATE TABLE OLS_TUTORIAL ( ID VARCHAR2(30) PRIMARY KEY,
NAME VARCHAR2(400),
MANU VARCHAR2(4000),
CAT VARCHAR2(400),
FEATURES CLOB,
INCLUDES VARCHAR2(4000),
WEIGHT NUMBER,
PRICE NUMBER,
POPULARITY NUMBER,
INSTOCK CHAR(5), -- true or false
MANUFACTUREDATE_DT TIMESTAMP,
PAYLOADS VARCHAR2(4000),
STORE VARCHAR2(200)) ;
Table created.
SQL> insert into ols_tutorial values ( 'SP2514N', 'Samsung SpinPoint P120 SP2514N - hard drive - 250 GB - ATA-133', 'Samsung Electronics Co. Ltd.', 'electronics hard-drive', '7200RPM, 8MB cache, IDE Ultra ATA-133 NoiseGuard, SilentSeek technology, Fluid Dynamic Bearing (FDB) motor', NULL, NULL, 92, 6, 'true', TO_TIMESTAMP('2006-02-13T15:26:37Z','YYYY-MM-DD"T"HH24:MI:SS"Z"'), NULL, '35.0752,-97.032') ;
…….
1 row created.
SQL> commit;
Commit complete.
SQL> select count(*) from OLS_TUTORIAL;
COUNT(*)
———-
16

4) Indexing using Scotas OLS.

SQL> CREATE INDEX TUTORIAL_SIDX ON OLS_TUTORIAL(ID) INDEXTYPE IS LUCENE.SOLRINDEX
parameters('Updater:localhost@9099;Searcher:localhost@9099;SyncMode:OnLine;LockMasterTable:false;IncludeMasterColumn:false;LogLevel:ALL;MltColumn:title;HighlightColumn:name,features;DefaultColumn:text;ExtraCols:id "id",cat "cat",name "name",features "features",manu "manu",includes "includes",price "price",popularity "popularity",inStock "inStock",to_char(manufacturedate_dt,"YYYY-MM-DD"T"HH24:MI:SS"Z"") "manufacturedate_dt"');
Index created.

5) Do some queries (equivalent to the "…and now you can search…" section of the Solr Tutorial; there are more query examples in the db/tutorial/testTutorial.xml file in SQLUnit syntax):

SQL> SELECT /*+ DOMAIN_INDEX_SORT */ id FROM OLS_TUTORIAL T where scontains(id,'video')>0;
ID
——————————
MA147LL/A
100-435805
EN7800GTX/2DHTV/256M

SQL> SELECT id FROM OLS_TUTORIAL T WHERE SCONTAINS(ID,'name:video')>0;
ID
——————————
MA147LL/A
100-435805

SQL> SELECT id,price FROM OLS_TUTORIAL WHERE SCONTAINS(ID,'video AND price:[* TO 400]')>0;
ID PRICE
—————————— ———-
MA147LL/A                           399

6) Other complex queries (Faceted Search example):

SQL> SELECT FIELD,SJOIN(T.FACETS) F FROM TABLE(SFACETS(USER||'.TUTORIAL_SIDX','*:*','facet.field=cat&facet.field=inStock')) T;
cat
electronics(14),memory(3),connector(2),graphics-card(2),hard-drive(2),monitor(2),search(2),software(2),camera(1),copier(1),multifunction-printer(1),music(1),printer(1),scanner(1)
inStock
true(12),false(4)

7) More queries (Text Analysis)

SQL> SELECT /*+ DOMAIN_INDEX_SORT */ SSCORE(1),name FROM OLS_TUTORIAL T WHERE SCONTAINS(ID,'power-shot',1)>0;
.683216989
Canon PowerShot SD500

SQL> SELECT /*+ DOMAIN_INDEX_SORT */ SSCORE(1),NAME,features FROM OLS_TUTORIAL T WHERE SCONTAINS(ID,'features:recharging',1)>0;
.336813927
Apple 60 GB iPod with Video Playback Black iTunes, Podcasts, Audiobooks Stores up to 15,000 songs, 25,000 photos, or 150 hours of video 2.5-inch, 320x240 color TFT LCD display with LED backlight Up to 20 hours of battery life Plays AAC, MP3, WAV, AIFF, Audible, Apple Lossless, H.264 video Notes, Calendar, Phone book, Hold button, Date display, Photo wallet, Built-in games, JPEG photo playback, Upgradeable firmware, USB 2.0 compatibility, Playback speed control, Rechargeable capability, Battery level indication

And that's all. Here are some notes on the index parameters defined above:

  • IncludeMasterColumn:false defines that the column ID, the master column of the index (the one named in the SQL DDL CREATE INDEX TUTORIAL_SIDX ON OLS_TUTORIAL(ID)…), is not included as a Solr field during index construction.
  • MltColumn:title defines which column is used by the More Like This ancillary operator mlt(), not shown in the examples.
  • HighlightColumn:name,features defines the columns used for highlighting with the shighlight() ancillary operator, not shown in the examples above (see the sketch after this list).
  • DefaultColumn:text defines which column is used by default for searching with Solr query syntax in the scontains() operator; note that text is not a table column, it is a virtual column defined in schema.xml with the Solr copyField functionality and works as a bag including all the Solr fields (cat, name, manu, features, includes).
  • ExtraCols:id "id",cat "cat",name "name",features "features",manu "manu",includes "includes",price "price",popularity "popularity",inStock "inStock",to_char(manufacturedate_dt,"YYYY-MM-DD"T"HH24:MI:SS"Z"") "manufacturedate_dt" lists the table columns defined as Solr fields; note that we use double quotes (") to define case-sensitive Solr field names, and a special to_char mask to format the dates in the manufacturedate_dt column in Solr date format.
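Although not shown in the tutorial queries above, here is a minimal sketch of the highlighting ancillary operator, following the same scontains()/shighlight() correlation used in the TDE post earlier (output will vary):

SQL> SELECT SSCORE(1),SHIGHLIGHT(1) FROM OLS_TUTORIAL WHERE SCONTAINS(ID,'name:video','score desc',1)>0;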

 

Scaling OLS (Oracle/Solr Integration)

OLS has different ways to deal with large data sets, using Oracle built-in features available in RAC or multi-core environments.

A difference from Solr distributed search/sharding is that OLS uses a transactional, shared Lucene storage, which means that if one process commits a change it is immediately visible to all other processes; no copy post-process is necessary. You can think of this as Solr over a SAN, but with SQL transactional behavior added.

To illustrate this functionality we can use the index optimize task. This task takes a long time to execute and can hit Out Of Memory exceptions if some Solr parameters are not fine-tuned; let's see the example.

On console 1, a running test suite simulates multiple users requesting searches on a table which has around 600K rows:

[mochoa@localhost ols]$ ant test-queryhits-solr
Buildfile: build.xml
[echo] Building ols…
test-queryhits-solr:
[junit] Running com.scotas.solr.indexer.TestQueryHits
[junit] Hits: 36707

[junit] Elapsed time: 335
….
[junit] Query time: 59
[junit] Elapsed time: 12983
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 13.323 sec
BUILD SUCCESSFUL
Total time: 14 seconds

On a parallel console 2, an optimize task is started:

[mochoa@localhost Download]$ sqlplus scott/tiger@orcl
SQL*Plus: Release 11.2.0.3.0 Production on Tue Apr 24 08:46:26 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> exec SolrDomainIndex.optimize('SOURCE_BIG_SIDX');

We can see that the CPU usage of processor 4 is at the top (attached to the optimize task, PID 10567), while the other Oracle processes at the top of the process list are serving search requests from the client side (the Java test suite, oracleorcl[LOCAL=NO]) and the shared searcher process (ora_j010_orcl).

Note that the other processes are not affected by the optimize task running as PID 10567. To show what happens in the worst-case scenario, we will kill that process to simulate a catastrophic problem, such as an OOM exception or a RAC node failure:

[mochoa@localhost ~]$ su
Password:
[root@localhost mochoa]# kill -9 10567
———————————————————————————
SQL> exec SolrDomainIndex.optimize('SOURCE_BIG_SIDX');
BEGIN SolrDomainIndex.optimize('SOURCE_BIG_SIDX'); END;
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 10567
Session ID: 77 Serial number: 109
Elapsed: 00:01:05.96

When starting the test suite again, the Oracle RDBMS automatically rolled back any uncommitted changes to the Lucene storage, so the optimize task did not break the storage and can be started again without any problem; let's see:

[mochoa@localhost ols]$ ant test-queryhits-solr
Buildfile: build.xml
[echo] Building ols…
test-queryhits-solr:

[junit] Running com.scotas.solr.indexer.TestQueryHits
[junit] Hits: 36707
[junit] Elapsed time: 275
….
[junit] Query time: 20
[junit] Elapsed time: 11293
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 11.571 sec
BUILD SUCCESSFUL
Total time: 12 seconds

SQL> set timing on
SQL> exec SolrDomainIndex.optimize('SOURCE_BIG_SIDX');
PL/SQL procedure successfully completed.
Elapsed: 00:04:34.63
SQL> commit;
Commit complete.

Another feature of OLS is parallel execution, which can be used to speed up heavy indexes by configuring multiple processes listening on different ports and altering the index to use them. For example, here we have 3 OLS parallel/shared server processes running:

-bash-4.2$ netstat -anp|grep ora_j0
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 :::9099 :::* LISTEN 4479/ora_j010_orcl
tcp 0 0 :::9199 :::* LISTEN 4483/ora_j012_orcl
tcp 0 0 :::9299 :::* LISTEN 4485/ora_j013_orcl

Now, by executing an alter command, the above index will be configured to use two processes for reading and one process for updating the index:

SQL> alter index SOURCE_BIG_SIDX parameters('Searcher:localhost@9099,localhost@9199;
Updater:localhost@9299');
Index altered.
Elapsed: 00:00:00.28

Any successful write operation committed by the writer process (PID 4485) will cause the other two readers (PID 4479/4483) to hot-reload their index structures in RAM and see the changes transparently.

The above examples show how easily OLS can scale in multi-core environments.

Another important piece of functionality is known as partitioning/sharding; OLS automatically inherits partitioning from tables on Oracle Enterprise Edition. For example, here we have a table with 24 partitions created as:

CREATE TABLE "SCOTT"."MB_SALES"
( "ID" VARCHAR2(20 BYTE),
"S_DATE" DATE,
…..
"T_TRM_LOOKUPS" NUMBER(10,0)
) COMPRESS
PARTITION BY RANGE (s_date)
(
PARTITION SALES_2010_01 VALUES LESS THAN (TO_DATE('01-FEB-2010','DD-MON-YYYY')),
….
PARTITION SALES_2011_12 VALUES LESS THAN (TO_DATE('01-JAN-2012','DD-MON-YYYY')),
PARTITION SALES_2012 VALUES LESS THAN (MAXVALUE)
) NOLOGGING;

By creating an OLS index using:

CREATE INDEX SIDX ON MB_SALES (T_NAME)
INDEXTYPE IS LUCENE.SOLRINDEX
PARAMETERS ('PopulateIndex:false;DefaultColumn:t_name;IncludeMasterColumn:false;
BatchCount:4000;LockMasterTable:false;LogLevel:ALL;SyncMode:OnLine;
HighlightColumn:t_name;ExtraCols:t_name "t_name",t_duration "t_duration",
t_a_name "t_a_name",t_r_name "t_r_name",t_r_attributes "r_type",t_r_tracks "t_r_tracks",
t_trm_lookups "t_trm_lookups"')
LOCAL parallel 2;

and setting PopulateIndex:false, we can later define specific parameters for each partition:

alter index SIDX REBUILD PARTITION SALES_2010_01 parameters ('Searcher:localhost@9099;Updater:localhost@9099');
alter index SIDX REBUILD PARTITION SALES_2010_02 parameters ('Searcher:localhost@9199;Updater:localhost@9199');

When populating the index, port 9099 is used for serving requests on partition SALES_2010_01, 9199 for SALES_2010_02, and so on.

Finally, Oracle automatically controls which partitions of the shard are visited when doing a query, for example:

SQL> explain plan for
SELECT SHIGHLIGHT(1),SSCORE(1) FROM MB_SALES WHERE S_DATE BETWEEN TO_DATE('01-JAN-2010','DD-MON-YYYY') AND TO_DATE('31-MAY-2010','DD-MON-YYYY') AND SCONTAINS(T_NAME,'rownum:[1 TO 10] AND love',1)>0
ORDER BY SSCORE(1) DESC;

……..

——————————————————–
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
——————————————————–
| 0 | SELECT STATEMENT | | 26026 | 1092K| | 2841 (1)| 00:00:35 | | |
| 1 | SORT ORDER BY | | 26026 | 1092K| 1448K| 2841 (1)| 00:00:35 | | |
| 2 | PARTITION RANGE ITERATOR | | 26026 | 1092K| | 2553 (1)| 00:00:31 | 1 | 5 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| MB_SALES | 26026 | 1092K| | 2553 (1)| 00:00:31 | 1 | 5 |
|* 4 | DOMAIN INDEX | SIDX | | | | | | | |
——————————————————–

Predicate Information (identified by operation id):
——————————————————–

3 - filter("S_DATE"<=TO_DATE(' 2010-05-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "S_DATE">=TO_DATE('
2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
4 - access("LUCENE"."SCONTAINS"("T_NAME",'rownum:[1 TO 10] AND love',1)>0)

18 rows selected.

Elapsed: 00:00:01.80

Note that the Pstart and Pstop columns show that the RDBMS optimizer knows that only partitions 1 to 5 of the index need to be visited to satisfy the query with the condition between 01-JAN-2010 and 31-MAY-2010; this functionality is known as partition pruning and is automatically available to OLS.

The advantages of partitioning/sharding are not only the ability to transparently distribute searches across multiple processes; each index structure is also smaller, and as a consequence less RAM is used during index operations and less time is spent on maintenance operations such as optimization or export/import.

For example, the MB_SALES table has 167M rows, but each partition has only 6.9M rows, and the index structure uses only 1.5 GB of space on each partition.

Note: using inline OLS pagination with partitioned tables causes the query to return more rows than expected; for example, with rownum:[1 TO 10], instead of the 10 rows expected the above query returns 50 rows (10 from each of the 5 partitions visited), and the sscore(1) sort has to be re-executed at the SQL level. If we need only 10 rows, the solution is to use the Top-N syntax or to limit the cursor result at the client side, as sketched below.
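A minimal sketch of the Top-N workaround (standard Oracle ROWNUM pagination wrapped around the partitioned query shown above; on 12c the OFFSET/FETCH row-limiting syntax works too):

SELECT * FROM (
  SELECT SHIGHLIGHT(1) HL,SSCORE(1) SC FROM MB_SALES
   WHERE S_DATE BETWEEN TO_DATE('01-JAN-2010','DD-MON-YYYY') AND TO_DATE('31-MAY-2010','DD-MON-YYYY')
     AND SCONTAINS(T_NAME,'rownum:[1 TO 10] AND love',1)>0
   ORDER BY SSCORE(1) DESC
) WHERE ROWNUM <= 10;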

NoSQL databases: what you should know

The relational database model has been co-existing with us for around a quarter of a century (so much time, right?), but a new class of database has emerged in the enterprise. I'm talking about NoSQL.

 

What is NoSQL?

NoSQL, also known as "non-relational" or "cloud", is a broad class of database management systems with significant differences from the classic relational database management system (RDBMS). The stored data do not require fixed table schemas, join operations are usually avoided, and the systems typically scale horizontally.

 

Architecture with NoSQL

A NoSQL database is characterized by a move away from the complexity of SQL-based servers. The logic of validation, access control, mapping queryable indexed data, correlating related data, conflict resolution, maintaining integrity constraints, and triggered procedures is moved out of the database layer. This enables NoSQL database engines to focus on exceptional performance and scalability.
A key concept of NoSQL systems is to have the DB focus on the task of high-performance, scalable data storage, and to provide low-level access to the data management layer.
Pros & Cons
Pros
  • Improved performance – Performance metrics have shown significant improvements vs relational access. For example, this metric compares MySQL vs Cassandra:
    Facebook Search
    MySQL > 50 GB Data
    – Writes Average: ~300 ms
    – Reads Average: ~350 ms
    Rewritten with Cassandra > 50 GB Data
    – Writes Average: 0.12 ms
    – Reads Average: 15 ms
  • Scaling – NoSQL databases are designed to expand transparently and they’re usually designed with low-cost commodity hardware in mind.
  • Big data handling – Over the last decade the volume of data has increased massively. NoSQL systems can handle big data in the same way as the biggest RDBMSs.
  • Less DBA time – NoSQL databases are generally designed to require less management: automatic repair, data distribution, and simpler data models.
  • Reduced costs – RDBMSs use expensive proprietary servers and storage systems, while NoSQL databases use clusters of cheap commodity servers. So the cost per gigabyte, or per transaction/second, for NoSQL can be many times less.
  • Flexible data models – NoSQL key-value stores and document databases allow the application to store virtually any structure it wants in a data element.

Cons

  • Maturity – RDBMS systems are stable and richly functional. But most NoSQL alternatives are in pre-production versions with many key features yet to be implemented.
  • Support – Most NoSQL systems are open source projects; there are only a couple of small companies offering support for each NoSQL database.
  • Analytics & BI – NoSQL databases do not offer facilities for ad-hoc query and analysis. Commonly used business intelligence tools do not provide connectivity to NoSQL systems.
  • Administration – Although the design goal of NoSQL systems is to provide a zero-admin solution, today they require a lot of skill to install and a lot of effort to maintain.
  • Expertise – As NoSQL is a new paradigm, all developers are in learning mode.

Use a RDBMS or a NoSQL Database?

It depends mainly on what you're trying to achieve. NoSQL is certainly mature enough to use, but few applications really need to scale that massively; for most, a traditional RDBMS is sufficient. However, with internet usage becoming more ubiquitous all the time, it's quite likely that applications that do need such scale will become more common (though probably not dominant).

NoSQL Implementations

There are currently more than 122 NoSQL databases. They can be categorized by their manner of implementation:
  • Wide column store / column families – Cassandra, Hadoop, Hypertable, Cloudata, Cloudera, Amazon SimpleDB, SciDB
  • Document store – MongoDB, CouchDB, Terrastore, ThruDB, OrientDB, RavenDB, Citrusleaf, SisoDB
  • Key Value / Tuple store – Azure Table Storage, MEMBASE, Riak, Redis, Chordless, GenieDB, Scalaris, BerkeleyDB, MemcacheDB.
  • Eventually consistent Key Value store – Amazon Dynamo, Voldemort, Dynomite, KAI
  • Graph databases – Neo4J, Infinite Graph, Sones, InfoGrid, HyperGraphDB, Trinity, etc
  • And others, and others…

Early Adopters of NoSQL

Social media corporations are the primary trailblazers of NoSQL implementations. The list includes:
  • Facebook
  • Twitter
  • MySpace
  • Google (Hadoop, Google App Engine)
  • Amazon (Dynamo)

Recommended Books & Papers

  • Professional NoSQL (Wiley/Wrox. 2011)
  • NoSQL Database Technology (CouchBase. 2011)
  • NoSQL Handbook (Mathias Meyer)
  • No Relation: The Mixed Blessings of Non-Relational Databases (Ian Thomas Varley. 2009)
  • Cassandra: The Definitive Guide (Eben Hewitt. 2010)
  • CouchDB: The Definitive Guide: Time to Relax (J. Chris Anderson, Jan Lehnardt, Noah Slater. 2010)
  • Hadoop in Action (Chuck Lam. 2010)
  • HBase: The Definitive Guide (Lars George. 2011)
  • MongoDB in Action (Kyle Banker. 2010)
  • Beginning SimpleDB (Kevin Marshall, Tyler Freeling. 2009)

Conclusions

NoSQL databases solve problems born of global digital data growth, problems that DBAs have been struggling to handle with the well-known RDBMS.
Outside of scalability, NoSQL databases do not really seem to have a killer feature.
Still, I think this is a new opportunity to become a professional in this paradigm.