OLS ready to install on 19c using Docker + source code at GitHub

Some time ago we released three scripts for installing Scotas OLS using the Oracle Docker images for the 19c release; here are step-by-step instructions for installing OLS on Oracle 19c.

The procedure is similar to the one for Oracle XE. First, check if you have a Docker image for 19c:

$ docker image ls|grep oracle
oracle/database                                   19.3.0-ee           a8bf34972c5c        5 months ago        6.5GB

If not, check the Oracle README at the GitHub repo for instructions on building the image above.

Once you have a Docker image for the 19.3.0-ee release, create three Docker Swarm configs that will drive the OLS installation during the database's first startup.

$ curl -s https://raw.githubusercontent.com/scotas/docker-images/master/ols-scripts-r5/00-unzip-ols.sh | docker config create 00-unzip-ols.sh -
$ curl -s https://raw.githubusercontent.com/scotas/docker-images/master/ols-scripts-r5/01-ols-ins.sh | docker config create 01-ols-ins.sh -
$ curl -s https://raw.githubusercontent.com/scotas/docker-images/master/ols-scripts-r5/02-clean-up-ols-files.sh | docker config create 02-clean-up-ols-files.sh -
$ docker config ls
ID                          NAME                       CREATED          UPDATED
k8ntw4p1w06wr5bifbx6w3ztz   00-unzip-ols.sh            53 minutes ago   53 minutes ago
zj9link1pqi04ddgxtnwu81j4   01-ols-ins.sh              51 minutes ago   51 minutes ago
pnf61s3l0vxmhqoj5twfrw688   02-clean-up-ols-files.sh   51 minutes ago   51 minutes ago

Finally, create the docker-compose.yml file for your Oracle OLS RDBMS deployment; the service definition for the OLS DB looks like this:
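A minimal sketch of that service definition (the password and host volume path are placeholders of mine; the real file lives in the scotas/docker-images repo):

version: '3.3'
services:
  db:
    image: oracle/database:19.3.0-ee
    ports:
      - "1521:1521"
    environment:
      # see the note at the end of this section about ORACLE_PWD
      - ORACLE_PWD=Oracle_2019
    volumes:
      # the host directory must be owned by UID/GID 54321:54321, see below
      - /home/data/db:/opt/oracle/oradata
    configs:
      # the three Swarm configs created above, mounted where runOracle.sh
      # looks for user-defined setup scripts
      - source: 00-unzip-ols.sh
        target: /opt/oracle/scripts/setup/00-unzip-ols.sh
      - source: 01-ols-ins.sh
        target: /opt/oracle/scripts/setup/01-ols-ins.sh
      - source: 02-clean-up-ols-files.sh
        target: /opt/oracle/scripts/setup/02-clean-up-ols-files.sh
configs:
  00-unzip-ols.sh:
    external: true
  01-ols-ins.sh:
    external: true
  02-clean-up-ols-files.sh:
    external: true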

The other services in the stack definition are there for development purposes; remember that Scotas OLS is at the GitHub repo and free as in beer.

Before creating your RDBMS with OLS installed, check that your persistent Oracle database data directory is owned by UID/GID 54321:54321 (a fix is shown after the deploy command below); if it already is, just create your Docker Swarm stack using:

$ docker stack deploy -c docker-compose.yml ols
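
If the data directory is not owned correctly yet, fix it first with something like this (assuming /home/data/db is the host path used in the sketch above):

$ mkdir -p /home/data/db
$ sudo chown 54321:54321 /home/data/db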

Once the DB is created and OLS is installed, you will see log output like:

$ docker service logs -f ols_db
……..
ols_db.1.a37moycgrhtv@pocho | #########################
ols_db.1.a37moycgrhtv@pocho | DATABASE IS READY TO USE!
ols_db.1.a37moycgrhtv@pocho | #########################

Simple enough, no? Have fun using Scotas OLS on 19c!

Note: the ORACLE_PWD environment variable is only required during the first startup (when OLS is installed) because the randomly generated password defined by the Oracle install script is not exported to child scripts.

Push Connector 2.0 for Oracle XE 18c

At the end of 2018 Oracle finally released the long-awaited 18c XE for Linux, and a few days ago the Windows version.

This new release includes native support for JSON inside the Oracle Database, which is good news because:

  • it standardizes the API (no more dependency on the PLJSON library or APEX JSON); a quick example follows this list
  • performance: the support is implemented entirely in C
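
A quick taste of the native API, runnable on any 18c instance (plain SQL, nothing product-specific):

SQL> select json_value('{"dept":{"name":"IT"}}', '$.dept.name') as dname from dual;

DNAME
-----
IT

SQL> select json_object('id' value 1, 'name' value 'test') as doc from dual;

DOC
-----------------------
{"id":1,"name":"test"}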

Using this new feature we completely rewrote the Scotas Push Connector for Solr on top of the new API, and since XE is free, the Scotas Push Connector is also delivered as an open source version with the source at GitHub!

Deploying this new version is as simple as deploying a new Docker stack on your Swarm cluster or on a single server; a sample stack is in the docker-compose.yml file, and its two parts are sketched below.

The first part of the stack starts a new Solr instance that backs the Push Connector's search facilities; remember that the Push Connector implements its searching functionality using an outside instance of Apache Solr.
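
Sketched, that first service looks roughly like this (the Solr image tag is my assumption; the authoritative definition is in the repo's docker-compose-pc.yml):

services:
  solr:
    image: solr:7.7.1          # assumed tag; any recent Solr should work
    ports:
      - "8983:8983"
    volumes:
      # optional: persist the Solr cores outside the container
      - /home/data/solr:/opt/solr/server/solr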


The second service is an Oracle RDBMS instance with two important parts: external storage for the database data, as the Oracle Docker image suggests (you can do the same with the persistent storage of the Apache Solr image), and environment variables defining the initial SYS password for the XE instance (mandatory during the PC installation) and the hostname and port of the Solr instance.

Finally, we provide in the stack the startup scripts for installing Scotas Push Connector 2.0.0; these scripts are called automatically just after the Oracle XE instance is created.
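
A rough sketch of this second service, continuing the services: section from the sketch above (the SOLR_HOST/SOLR_PORT variable names are illustrative; check docker-compose-pc.yml for the real ones):

  db:
    image: oracle/database:18.4.0-xe
    ports:
      - "1521:1521"
    environment:
      - ORACLE_PWD=Oracle_2018   # initial SYS/SYSTEM password, used during the PC install
      - SOLR_HOST=solr           # illustrative names; these values end up replacing
      - SOLR_PORT=8983           # &newhost/&newport in the install scripts
    volumes:
      - /home/data/xe:/opt/oracle/oradata
    configs:
      - source: 00-unzip-pc
        target: /opt/oracle/scripts/setup/00-unzip-pc.sh
      - source: 01-pc-ins
        target: /opt/oracle/scripts/setup/01-pc-ins.sh
      - source: 02-clean-up-files
        target: /opt/oracle/scripts/setup/02-clean-up-files.sh
configs:
  00-unzip-pc:
    external: true
  01-pc-ins:
    external: true
  02-clean-up-files:
    external: true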


To deploy this stack, either on a Swarm cluster or on a single node with Docker installed, just proceed with these steps:

  • Check out the GitHub scripts

$ git clone https://github.com/scotas/docker-images.git
$ cd docker-images/pc-18cXE

  • Create configs for Oracle XE startup scripts

$ curl https://raw.githubusercontent.com/scotas/docker-images/master/pc-18cXE/00-unzip-pc.sh|docker config create 00-unzip-pc -
$ curl https://raw.githubusercontent.com/scotas/docker-images/master/pc-18cXE/01-pc-ins.sh|docker config create 01-pc-ins -
$ curl https://raw.githubusercontent.com/scotas/docker-images/master/pc-18cXE/02-clean-up-files.sh|docker config create 02-clean-up-files -

  • Deploy sample stack

$ cd ../sample-stacks
$ docker stack deploy pc -c docker-compose-pc.yml
$ docker service logs -f pc_db
pc_db.1.y7826xns4ig6@pocho | ORACLE PASSWORD FOR SYS AND SYSTEM: Oracle_2018
….
pc_db.1.y7826xns4ig6@pocho | old 33: insert into PC.bg_process (bg_process_name,host_name,port) values ('Default &connector Server','&newhost',&newport);
pc_db.1.y7826xns4ig6@pocho | new 33: insert into PC.bg_process (bg_process_name,host_name,port) values ('Default solr Server','solr',8983);
pc_db.1.y7826xns4ig6@pocho |
pc_db.1.y7826xns4ig6@pocho | PL/SQL procedure successfully completed.
pc_db.1.y7826xns4ig6@pocho |
pc_db.1.y7826xns4ig6@pocho | Elapsed: 00:00:10.63
pc_db.1.y7826xns4ig6@pocho | Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
pc_db.1.y7826xns4ig6@pocho | Version 18.4.0.0.0
pc_db.1.y7826xns4ig6@pocho |
pc_db.1.y7826xns4ig6@pocho | /opt/oracle/runOracle.sh: running /opt/oracle/scripts/setup/02-clean-up-files.sh
pc_db.1.y7826xns4ig6@pocho |
pc_db.1.y7826xns4ig6@pocho | DONE: Executing user defined scripts
pc_db.1.y7826xns4ig6@pocho |
pc_db.1.y7826xns4ig6@pocho | The Oracle base remains unchanged with value /opt/oracle
pc_db.1.y7826xns4ig6@pocho | #########################
pc_db.1.y7826xns4ig6@pocho | DATABASE IS READY TO USE!
pc_db.1.y7826xns4ig6@pocho | #########################


Testing your Solr Push connector

Once you have the stack deployed and your XE instance is ready to use, a simple test can be run by connecting to Oracle and creating a scott account, for example:

$ /opt/sqlcl/bin/sql
Username? (''?) sys/Oracle_2018@localhost:1521/XEPDB1 as sysdba
Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
SQL> create user scott identified by tiger
2 default tablespace users
3 temporary tablespace temp
4* quota unlimited on users;
User SCOTT created.

SQL> grant connect,resource,pcuser to scott;

Grant succeeded.


Preparing Solr to index a sample table

Unlike Scotas OLS, which runs Apache Solr inside the RDBMS, the Push Connector technology relies on an external Solr instance. To prepare the Solr mappings for indexing a sample table, just run these steps in a shell inside the Solr instance:

$ docker exec -ti pc_solr.1.tn78o19x9qov3635yhkyyea6r /bin/bash
solr@solr:/opt/solr$ bin/solr create_core -c source_big_pidx
….
Created new core 'source_big_pidx'
solr@solr:/opt/solr$ curl -X POST -H 'Content-type:application/json' --data-binary '{
> "add-field":{
> "name":"rowid",
> "type":"string",
> "indexed":true,
> "stored":true,
> "required":true,
> "multiValued":false
> },
> "add-field":{
> "name":"solridx",
> "type":"string",
> "indexed":true,
> "stored":true,
> "required":true,
> "multiValued":false
> }
> }' http://localhost:8983/solr/source_big_pidx/schema
{
"responseHeader":{
"status":0,
"QTime":336}}
solr@solr:/opt/solr$ sed -i 's/<uniqueKey>id/<uniqueKey>rowid/g' /opt/solr/server/solr/source_big_pidx/conf/managed-schema
solr@solr:/opt/solr$ curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=source_big_pidx"
{
"responseHeader":{
"status":0,
"QTime":1253}}
solr@solr:/opt/solr$ curl -X POST -H 'Content-type:application/json' --data-binary '{
> "add-field":{
> "name":"title",
> "type":"text_general",
> "indexed":true,
> "stored":true,
> "multiValued":true
> },
> "add-copy-field":{
> "source":"*",
> "dest":"_text_"
> }
> }' http://localhost:8983/solr/source_big_pidx/schema
{
"responseHeader":{
"status":0,
"QTime":188}}
solr@solr:/opt/solr$ curl http://localhost:8983/solr/source_big_pidx/schema/uniquekey?wt=json
{
"responseHeader":{
"status":0,
"QTime":0},
"uniqueKey":"rowid"}


Index your first table

Once Apache Solr is ready to receive the Push Connector indexing information, we can do the following:

SQL> conn scott/tiger@localhost:1521/XEPDB1
Connected.
SQL> set long 10000 lines 140 pages 50 timing on echo on
SQL> set serveroutput on size 1000000
SQL> create table test_source_big as (
2 select owner,name,type,line,text from (select rownum as ntop_pos,q.* from
3 (select * from all_source) q)
4 where ntop_pos>=0 and ntop_pos<=5000
5 );

Table TEST_SOURCE_BIG created.

Elapsed: 00:00:05.460
SQL> CREATE INDEX SOURCE_BIG_PIDX ON TEST_SOURCE_BIG(TEXT)
2 INDEXTYPE IS PC.SOLR
3 parameters('{LogLevel:"INFO",
4 Updater:"solr@8983",
5 Searcher:"solr@8983",
6 SolrBase:"solr/source_big_pidx",
7 SyncMode:"OnLine",
8 BatchCount:5000,
9 CommitOnSync:true,
10 LockMasterTable:false,
11 IncludeMasterColumn:false,
12 DefaultColumn:"text",
13 HighlightColumn:"title",
14 ExtraCols:""text" value text,"title" value substr(text,1,256),"line_i" value line,"type_s" value type"}');

Index SOURCE_BIG_PIDX created.

Elapsed: 00:00:05.923

Note that the ExtraCols parameter uses the latest JSON_OBJECT column expression syntax.
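
You can preview those KEY VALUE expressions with a plain query against the master table (my ad-hoc query, not part of the connector):

SQL> select json_object('text' value text, 'line_i' value line, 'type_s' value type)
  2  from test_source_big where rownum = 1;

This returns roughly the shape of the JSON document the connector pushes to Solr for each row.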


Doing some queries

If you look at the logs of the Solr instance during indexing, you will see something like:

$ docker service logs -f pc_solr

pc_solr.1.tn78o19x9qov@pocho | 2019-01-28 13:00:50.281 INFO (qtp735937428-21) [ x:source_big_pidx] o.a.s.u.DirectUpdateHandler2 end_commit_flush
pc_solr.1.tn78o19x9qov@pocho | 2019-01-28 13:00:50.285 INFO (qtp735937428-21) [ x:source_big_pidx] o.a.s.u.p.LogUpdateProcessorFactory [source_big_pidx] webapp=/solr path=/update/json params={waitSearcher=true&ident=on&commit=true&softCommit=false&wt=json&expungeDeletes=false}{add=[AAAR79AAMAAAADjAAA (1623909145638862848), AAAR79AAMAAAADjAAB (1623909145757351936), AAAR79AAMAAAADjAAC (1623909145761546240), AAAR79AAMAAAADjAAD (1623909145765740544), AAAR79AAMAAAADjAAE (1623909145769934848), AAAR79AAMAAAADjAAF (1623909145772032000), AAAR79AAMAAAADjAAG (1623909145775177728), AAAR79AAMAAAADjAAH (1623909145778323456), AAAR79AAMAAAADjAAI (1623909145781469184), AAAR79AAMAAAADjAAJ (1623909145785663488), … (5000 adds)],commit=} 0 6174

This tells you that Apache Solr is receiving batches of 5000 rows for indexing from the XE instance.

After that, here are some sample outputs of querying Oracle through the Scotas Push Connector domain index functionality. Counting hits:

SQL> select SolrPushConnector.countHits('SOURCE_BIG_PIDX','*:*') from dual;

SOLRPUSHCONNECTOR.COUNTHITS('SOURCE_BIG_PIDX','*:*')
----------------------------------------------------
5000

Elapsed: 00:00:00.030
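
The same number can be cross-checked directly against Solr with the standard select handler (rows=0 returns only the count):

$ curl "http://localhost:8983/solr/source_big_pidx/select?q=*:*&rows=0"
{
"responseHeader":{...},
"response":{"numFound":5000,"start":0,"docs":[]}}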

Query with highlighting:

SQL> select sscore(1),shighlight(1) from test_source_big where scontains(text,'"procedure java"~10',1)>0 order by sscore(1) desc;

SSCORE(1)
----------
SHIGHLIGHT(1)
--------------------------------------------------------------------------------------------------------------------------------------------
1.3910314
{"title":[" -- <em>procedure</em> executing the recursive statement via DBMS_SQL, a <em>Java</em> stored\n"]}
Elapsed: 00:00:00.347

Domain index pagination:

SQL> select /*+ DOMAIN_INDEX_SORT */ sscore(1) sc,text from test_source_big where scontains(text,'rownum:[1 TO 10] AND function',1)>0 order by sscore(1) asc;
1.3612658
member function getBlobVal(csid IN number, pflag IN number, indent IN number) return BLOB deterministic parallel_enable
…………
1.5529456
If it is based on a user defined type, this function will return
10 rows selected.

Elapsed: 00:00:00.112

Faceting and pivots:

SQL> declare
2 obj f_info;
3 BEGIN
4 obj := SolrPushConnector.facet('SOURCE_BIG_PIDX',null,'facet.field=type_s&facet.limit=5&facet.pivot=type_s,line_i');
5 dbms_output.put_line(obj.fields.to_string);
6 dbms_output.put_line(obj.pivots.to_string);
7 END;
8 /
{"type_s":["PACKAGE",13493,"TYPE",6438,"FUNCTION",64,"PROCEDURE",5]}
{"type_s,line_i":[
 {"field":"type_s","value":"PACKAGE","count":13493,"pivot":[{"field":"line_i","value":1,"count":20},{"field":"line_i","value":2,"count":19},{"field":"line_i","value":3,"count":19},{"field":"line_i","value":4,"count":19},{"field":"line_i","value":5,"count":19}]},
 {"field":"type_s","value":"TYPE","count":6438,"pivot":[{"field":"line_i","value":1,"count":456},{"field":"line_i","value":2,"count":429},{"field":"line_i","value":3,"count":346},{"field":"line_i","value":4,"count":295},{"field":"line_i","value":5,"count":283}]},
 {"field":"type_s","value":"FUNCTION","count":64,"pivot":[{"field":"line_i","value":1,"count":14},{"field":"line_i","value":2,"count":14},{"field":"line_i","value":3,"count":10},{"field":"line_i","value":4,"count":6},{"field":"line_i","value":5,"count":5}]},
 {"field":"type_s","value":"PROCEDURE","count":5,"pivot":[{"field":"line_i","value":1,"count":1},{"field":"line_i","value":2,"count":1},{"field":"line_i","value":3,"count":1},{"field":"line_i","value":4,"count":1},{"field":"line_i","value":5,"count":1}]}]}
PL/SQL procedure successfully completed.

Elapsed: 00:00:00.080

More samples are available in the source, in the testSourceBig.sql script and the tutorial directory.

Apache Solr tutorial in a relational way

Many of the questions that come up when starting with Scotas products stem from a lack of knowledge about how to implement typical Solr operations in a SQL way, or vice versa.

The idea of this tutorial is to do the same things explained in the Apache Solr tutorial, but using relational concepts through Scotas OLS.
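
To give the flavor: where the Solr tutorial issues an HTTP query such as q=name:foo against a core, the equivalent search becomes plain SQL over the indexed table (a sketch, assuming a docs table whose text column already has a domain index, using the same operator style shown in the Push Connector examples above):

SQL> select * from docs where scontains(text,'name:foo',1)>0;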