Compare commits


11 Commits

Author SHA1 Message Date
Karl Southern 3dc7627782 v0.3.0 2016-07-24 12:14:31 +01:00
Karl Southern 9235c48c88 Fix travis for v2.x 2016-07-13 17:46:42 +01:00
Karl Southern 53e665bbb6 0.3.0 uses jar-dependencies 2016-07-13 17:41:32 +01:00
Karl Southern fa2d226fbf 0.3.0.pre - Preparing for threadsafety 2016-07-13 17:40:35 +01:00
Karl Southern da5a3d8be3 0.2.10 2016-07-07 11:03:14 +01:00
Karl Southern b10462dacd Preparing for 0.2.10 2016-07-07 10:09:31 +01:00
Karl Southern 61c7a1307e Provisionally address issue 46 2016-07-07 08:50:58 +01:00
Karl Southern b5419813ba 0.2.9 2016-06-29 13:42:09 +01:00
Karl Southern ded1106b13 Address issue 44. 2016-06-28 22:38:36 +01:00
Karl Southern 2b27f39088 0.2.7 2016-05-29 13:45:26 +01:00
Karl Southern 7b337a8b91 Backport functionality from v5 branch. 2016-05-29 13:40:47 +01:00
25 changed files with 97 additions and 495 deletions


@@ -1,26 +0,0 @@
<!--
Trouble installing the plugin under Logstash 2.4.0 with the message "duplicate gems"? See https://github.com/elastic/logstash/issues/5852
Please remember:
- I have not used every database engine in the world
- I have not got access to every database engine in the world
- Any support I provide is done in my own personal time which is limited
- Understand that I won't always have the answer immediately
Please provide as much information as possible.
-->
<!--- Provide a general summary of the issue in the Title above -->
## Expected & Actual Behavior
<!--- If you're describing a bug, tell us what should happen, and what is actually happening, and if necessary how to reproduce it -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version of plugin used:
* Version of Logstash used:
* Database engine & version you're connecting to:
* Have you checked you've met the Logstash requirements for Java versions?:


@@ -1,25 +0,0 @@
# I don't care for underscores in numbers.
Style/NumericLiterals:
Enabled: false
Style/ClassAndModuleChildren:
Enabled: false
Metrics/AbcSize:
Enabled: false
Metrics/CyclomaticComplexity:
Max: 9
Metrics/PerceivedComplexity:
Max: 10
Metrics/LineLength:
Enabled: false
Metrics/MethodLength:
Max: 50
Style/FileName:
Exclude:
- 'lib/logstash-output-jdbc_jars.rb'


@@ -2,9 +2,7 @@ sudo: required
 language: ruby
 cache: bundler
 rvm:
-  - jruby-1.7.25
-jdk:
-  - oraclejdk8
+  - jruby
 before_script:
   - bundle exec rake vendor
   - bundle exec rake install_jars


@@ -1,28 +1,6 @@
 # Change Log
 All notable changes to this project will be documented in this file, from 0.2.0.
-## [5.3.0] - 2017-11-08
-- Adds configuration options `enable_event_as_json_keyword` and `event_as_json_keyword`
-- Adds BigDecimal support
-- Adds additional logging for debugging purposes (with thanks to @mlkmhd's work)
-## [5.2.1] - 2017-04-09
-- Adds Array and Hash to_json support for non-sprintf syntax
-## [5.2.0] - 2017-04-01
-- Upgrades HikariCP to latest
-- Fixes HikariCP logging integration issues
-## [5.1.0] - 2016-12-17
-- phoenix-thin fixes for issue #60
-## [5.0.0] - 2016-11-03
-- logstash v5 support
-## [0.3.1] - 2016-08-28
-- Adds connection_test configuration option, to prevent the connection test from occuring, allowing the error to be suppressed.
-Useful for cockroachdb deployments. https://github.com/theangryangel/logstash-output-jdbc/issues/53
 ## [0.3.0] - 2016-07-24
 - Brings tests from v5 branch, providing greater coverage
 - Removes bulk update support, due to inconsistent behaviour


@@ -21,7 +21,7 @@ See CHANGELOG.md
 Released versions are available via rubygems, and typically tagged.
 For development:
-- See master branch for logstash v5
+- See master branch for logstash v5 (currently **development only**)
 - See v2.x branch for logstash v2
 - See v1.5 branch for logstash v1.5
 - See v1.4 branch for logstash 1.4
@@ -37,27 +37,23 @@ For development:
 ## Configuration options
 | Option | Type | Description | Required? | Default |
 | ------ | ---- | ----------- | --------- | ------- |
 | driver_class | String | Specify a driver class if autoloading fails | No | |
 | driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
 | driver_jar_path | String | File path to jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
 | connection_string | String | JDBC connection URL | Yes | |
-| connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to supress an error. Cockroach with the postgres JDBC driver is such an example. | No | Yes |
-| connection_test_query | String | Connection test and init query string, required for some JDBC drivers that don't support isValid(). Typically you'd set to this "SELECT 1" | No | |
 | username | String | JDBC username - this is optional as it may be included in the connection string, for many drivers | No | |
 | password | String | JDBC password - this is optional as it may be included in the connection string, for many drivers | No | |
 | statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared, all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (i.e. "@timestamp", or "host") or a formatted string (i.e. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
-| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
+| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Please be aware that there is also a potential performance penalty as each event must be evaluated and inserted into SQL one at a time, where as when this is false multiple events are inserted at once. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
-| max_pool_size | Number | Maximum number of connections to open to the SQL server at any 1 time | No | 5 |
+| max_pool_size | Number | Maximum number of connections to open to the SQL server at any 1 time. Default set to same as Logstash default number of workers | No | 24 |
 | connection_timeout | Number | Number of seconds before a SQL connection is closed | No | 2800 |
 | flush_size | Number | Maximum number of entries to buffer before sending to SQL - if this is reached before idle_flush_time | No | 1000 |
 | max_flush_exceptions | Number | Number of sequential flushes which cause an exception, before the set of events are discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval, if your SQL server is not highly available | No | 10 |
 | retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
 | retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
 | retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retry-able, but non-standard SQL state codes in it's exceptions. | No | [] |
-| event_as_json_keyword | String | The magic key word that the plugin looks for to convert the entire event into a JSON object. As Logstash does not support this out of the box with it's `sprintf` implementation, you can use whatever this field is set to in the statement parameters | No | @event |
-| enable_event_as_json_keyword | Boolean | Enables the magic keyword set in the configuration option `event_as_json_keyword`. Without this enabled the plugin will not convert the `event_as_json_keyword` into JSON encoding of the entire event. | No | False |
 ## Example configurations
 Example logstash configurations, can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.


@@ -1,18 +0,0 @@
logstash-output-jdbc is a project originally created by Karl Southern
(the_angry_angel), but there are a number of people that have contributed
or implemented key features over time. We do our best to keep this list
up-to-date, but you can also have a look at the nice contributor graphs
produced by GitHub: https://github.com/theangryangel/logstash-output-jdbc/graphs/contributors
* [hordijk](https://github.com/hordijk)
* [dmitryakadiamond](https://github.com/dmitryakadiamond)
* [MassimoSporchia](https://github.com/MassimoSporchia)
* [ebuildy](https://github.com/ebuildy)
* [kushtrimjunuzi](https://github.com/kushtrimjunuzi)
* [josemazo](https://github.com/josemazo)
* [aceoliver](https://github.com/aceoliver)
* [roflmao](https://github.com/roflmao)
* [onesuper](https://github.com/onesuper)
* [phr0gz](https://github.com/phr0gz)
* [jMonsinjon](https://github.com/jMonsinjon)
* [mlkmhd](https://github.com/mlkmhd)

Vagrantfile

@@ -1,35 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
JRUBY_VERSION = "jruby-1.7"
Vagrant.configure(2) do |config|
config.vm.define "debian" do |deb|
deb.vm.box = 'debian/stretch64'
deb.vm.synced_folder '.', '/vagrant', type: :virtualbox
deb.vm.provision 'shell', inline: <<-EOP
apt-get update
apt-get install openjdk-8-jre ca-certificates-java git curl -y -q
curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
usermod -a -G rvm vagrant
EOP
end
config.vm.define "centos" do |centos|
centos.vm.box = 'centos/7'
centos.ssh.insert_key = false # https://github.com/mitchellh/vagrant/issues/7610
centos.vm.synced_folder '.', '/vagrant', type: :virtualbox
centos.vm.provision 'shell', inline: <<-EOP
yum update
yum install java-1.7.0-openjdk
curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
usermod -a -G rvm vagrant
EOP
end
end


@@ -1,7 +1,6 @@
 # Example: Apache Phoenix (HBase SQL)
 * Tested with Ubuntu 14.04.03 / Logstash 2.1 / Apache Phoenix 4.6
 * <!> HBase and Zookeeper must be both accessible from logstash machine <!>
-* Please see apache-phoenix-thin-hbase-sql for phoenix-thin. The examples are different.
 ```
 input
 {


@@ -1,28 +0,0 @@
# Example: Apache Phoenix-Thin (HBase SQL)
**There are special instructions for phoenix-thin. Please read carefully!**
* Tested with Logstash 5.1.1 / Apache Phoenix 4.9
* HBase and Zookeeper must be both accessible from logstash machine
* At time of writing phoenix-client does not include all the required jars (see https://issues.apache.org/jira/browse/PHOENIX-3476), therefore you must *not* use the driver_jar_path configuration option and instead:
- `mkdir -p vendor/jar/jdbc` in your logstash installation path
- copy `phoenix-queryserver-client-4.9.0-HBase-1.2.jar` from the phoenix distribution into this folder
- download the calcite jar from https://mvnrepository.com/artifact/org.apache.calcite/calcite-avatica/1.6.0 and place it into your `vendor/jar/jdbc` directory
* Use the following configuration as a base. The connection_test => false and connection_test_query are very important and should not be omitted. Phoenix-thin does not appear to support isValid and these are necessary for the connection to be added to the pool and be available.
```
input
{
stdin { }
}
output {
jdbc {
connection_test => false
connection_test_query => "select 1"
driver_class => "org.apache.phoenix.queryserver.client.Driver"
connection_string => "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF"
statement => [ "UPSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```


@@ -1,18 +0,0 @@
# Example: CockroachDB
- Tested using postgresql-9.4.1209.jre6.jar
- **Warning** cockroach is known to throw a warning on connection test (at time of writing), thus the connection test is explicitly disabled.
```
input
{
stdin { }
}
output {
jdbc {
driver_jar_path => '/opt/postgresql-9.4.1209.jre6.jar'
connection_test => false
connection_string => 'jdbc:postgresql://127.0.0.1:26257/test?user=root'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```


@@ -9,9 +9,8 @@ input
 }
 output {
   jdbc {
-    driver_class => "com.mysql.jdbc.Driver"
     connection_string => "jdbc:mysql://HOSTNAME/DATABASE?user=USER&password=PASSWORD"
-    statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST(? AS timestamp), ?)", "host", "@timestamp", "message" ]
+    statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
   }
 }
 ```


@@ -1,6 +1,5 @@
 # Example: SQL Server
 * Tested using http://msdn.microsoft.com/en-gb/sqlserver/aa937724.aspx
-* Known to be working with Microsoft SQL Server Always-On Cluster (see https://github.com/theangryangel/logstash-output-jdbc/issues/37). With thanks to [@phr0gz](https://github.com/phr0gz)
 ```
 input
 {
@@ -8,25 +7,8 @@ input
 }
 output {
   jdbc {
-    driver_jar_path => '/opt/sqljdbc42.jar'
-    connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
+    connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password;autoReconnect=true;"
     statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
   }
-}
-```
-Another example, with mixed static strings and parameters, with thanks to [@MassimoSporchia](https://github.com/MassimoSporchia)
-```
-input
-{
-  stdin { }
-}
-output {
-  jdbc {
-    driver_jar_path => '/opt/sqljdbc42.jar'
-    connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
-    statement => [ "INSERT INTO log (host, timestamp, message, comment) VALUES(?, ?, ?, 'static string')", "host", "@timestamp", "message" ]
-  }
 }
 ```


@@ -10,7 +10,6 @@ output {
   stdout { }
   jdbc {
-    driver_class => "org.sqlite.JDBC"
     connection_string => 'jdbc:sqlite:test.db'
     statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
   }


@@ -1,5 +1,5 @@
 # encoding: utf-8
 require 'logstash/environment'
-root_dir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
-LogStash::Environment.load_runtime_jars! File.join(root_dir, 'vendor')
+root_dir = File.expand_path(File.join(File.dirname(__FILE__), ".."))
+LogStash::Environment.load_runtime_jars! File.join(root_dir, "vendor")


@@ -5,8 +5,6 @@ require 'concurrent'
 require 'stud/interval'
 require 'java'
 require 'logstash-output-jdbc_jars'
-require 'json'
-require 'bigdecimal'
 # Write events to a SQL engine, using JDBC.
 #
@@ -14,7 +12,7 @@ require 'bigdecimal'
 # includes correctly crafting the SQL statement, and matching the number of
 # parameters correctly.
 class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
-concurrency :shared
+declare_threadsafe! if self.respond_to?(:declare_threadsafe!)
 STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze
@@ -65,7 +63,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 config :unsafe_statement, validate: :boolean, default: false
 # Number of connections in the pool to maintain
-config :max_pool_size, validate: :number, default: 5
+config :max_pool_size, validate: :number, default: 24
 # Connection timeout
 config :connection_timeout, validate: :number, default: 10000
@@ -85,13 +83,6 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 # Suitable for configuring retryable custom JDBC SQL state codes.
 config :retry_sql_states, validate: :array, default: []
-# Run a connection test on start.
-config :connection_test, validate: :boolean, default: true
-# Connection test and init string, required for some JDBC endpoints
-# notable phoenix-thin - see logstash-output-jdbc issue #60
-config :connection_test_query, validate: :string, required: false
 # Maximum number of sequential failed attempts, before we stop retrying.
 # If set to < 1, then it will infinitely retry.
 # At the default values this is a little over 10 minutes
@@ -100,16 +91,11 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 config :max_repeat_exceptions, obsolete: 'This has been replaced by max_flush_exceptions - which behaves slightly differently. Please check the documentation.'
 config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
 config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'
-# Allows the whole event to be converted to JSON
-config :enable_event_as_json_keyword, validate: :boolean, default: false
-# The magic key used to convert the whole event to JSON. If you need this, and you have the default in your events, you can use this to change your magic keyword.
-config :event_as_json_keyword, validate: :string, default: '@event'
 def register
 @logger.info('JDBC - Starting up')
+LogStash::Logger.setup_log4j(@logger)
 load_jar_files!
 @stopping = Concurrent::AtomicBoolean.new(false)
@@ -133,6 +119,10 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 end
 end
+def receive(event)
+retrying_submit([event])
+end
 def close
 @stopping.make_true
 @pool.close
@@ -158,17 +148,10 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 validate_connection_timeout = (@connection_timeout / 1000) / 2
-if !@connection_test_query.nil? and @connection_test_query.length > 1
-@pool.setConnectionTestQuery(@connection_test_query)
-@pool.setConnectionInitSql(@connection_test_query)
-end
-return unless @connection_test
 # Test connection
 test_connection = @pool.getConnection
 unless test_connection.isValid(validate_connection_timeout)
-@logger.warn('JDBC - Connection is not reporting as validate. Either connection is invalid, or driver is not getting the appropriate response.')
+@logger.error('JDBC - Connection is not valid. Please check connection string or that your JDBC endpoint is available.')
 end
 test_connection.close
 end
@@ -189,13 +172,13 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 File.join(File.dirname(__FILE__), '../../../vendor/jar/jdbc/*.jar')
 end
-@logger.trace('JDBC - jarpath', path: jarpath)
+@logger.debug('JDBC - jarpath', path: jarpath)
 jars = Dir[jarpath]
 raise LogStash::ConfigurationError, 'JDBC - No jars found. Have you read the README?' if jars.empty?
 jars.each do |jar|
-@logger.trace('JDBC - Loaded jar', jar: jar)
+@logger.debug('JDBC - Loaded jar', jar: jar)
 require jar
 end
 end
@@ -208,7 +191,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 begin
 connection = @pool.getConnection
 rescue => e
-log_jdbc_exception(e, true, nil)
+log_jdbc_exception(e, true)
 # If a connection is not available, then the server has gone away
 # We're not counting that towards our retry count.
 return events, false
@@ -222,7 +205,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 statement = add_statement_event_params(statement, event) if @statement.length > 1
 statement.execute
 rescue => e
-if retry_exception?(e, event.to_json())
+if retry_exception?(e)
 events_to_retry.push(event)
 end
 ensure
@@ -269,17 +252,15 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 def add_statement_event_params(statement, event)
 @statement[1..-1].each_with_index do |i, idx|
-if @enable_event_as_json_keyword == true and i.is_a? String and i == @event_as_json_keyword
-value = event.to_json
-elsif i.is_a? String
-value = event.get(i)
+if i.is_a? String
+value = event[i]
 if value.nil? and i =~ /%\{/
 value = event.sprintf(i)
 end
 else
 value = i
 end
 case value
 when Time
 # See LogStash::Timestamp, below, for the why behind strftime.
@@ -294,20 +275,11 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 # strftime appears to be the most reliable across drivers.
 statement.setString(idx + 1, value.time.strftime(STRFTIME_FMT))
 when Fixnum, Integer
-if value > 2147483647 or value < -2147483648
-statement.setLong(idx + 1, value)
-else
-statement.setInt(idx + 1, value)
-end
-when BigDecimal
-# TODO: There has to be a better way than this. Find it.
-statement.setBigDecimal(idx + 1, java.math.BigDecimal.new(value.to_s))
+statement.setInt(idx + 1, value)
 when Float
 statement.setFloat(idx + 1, value)
 when String
 statement.setString(idx + 1, value)
-when Array, Hash
-statement.setString(idx + 1, value.to_json)
 when true, false
 statement.setBoolean(idx + 1, value)
 else
@@ -318,23 +290,20 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
 statement
 end
-def retry_exception?(exception, event)
+def retry_exception?(exception)
 retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0,2]) or @retry_sql_states.include?(exception.getSQLState)))
-log_jdbc_exception(exception, retrying, event)
+log_jdbc_exception(exception, retrying)
 retrying
 end
-def log_jdbc_exception(exception, retrying, event)
+def log_jdbc_exception(exception, retrying)
 current_exception = exception
-log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying')
+log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying') + '.'
 log_method = (retrying ? 'warn' : 'error')
 loop do
-# TODO reformat event output so that it only shows the fields necessary.
-@logger.send(log_method, log_text, :exception => current_exception, :statement => @statement[0], :event => event)
+@logger.send(log_method, log_text, :exception => current_exception, :backtrace => current_exception.backtrace)
 if current_exception.respond_to? 'getNextException'
 current_exception = current_exception.getNextException()
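The type dispatch changed in `add_statement_event_params` above can be exercised outside JRuby. The following is an illustrative sketch only: `RecordingStatement` and `bind_param` are hypothetical names, and the stub merely records which typed setter the real code would invoke on a `java.sql.PreparedStatement` (`setInt`, `setLong`, `setString`, and so on). The 32-bit boundary check mirrors the 5.x side of the diff; the `else` fallback here simply stringifies, a sketch choice since that branch is elided in the hunk.

```ruby
# Hypothetical stand-in for a JDBC PreparedStatement: records which
# typed setter was chosen for each parameter index.
class RecordingStatement
  attr_reader :calls

  def initialize
    @calls = []
  end

  def set(kind, idx, value)
    @calls << [kind, idx, value]
  end
end

# Same format string the plugin uses for timestamps.
STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze

# Mirrors the case/when dispatch in add_statement_event_params (sketch;
# Fixnum is folded into Integer on modern Ruby).
def bind_param(statement, idx, value)
  case value
  when Time
    # strftime is used because it is the most portable across drivers.
    statement.set(:string, idx, value.strftime(STRFTIME_FMT))
  when Integer
    # The 5.x branch routes values outside the signed 32-bit range to setLong.
    if value > 2_147_483_647 || value < -2_147_483_648
      statement.set(:long, idx, value)
    else
      statement.set(:int, idx, value)
    end
  when Float
    statement.set(:float, idx, value)
  when String
    statement.set(:string, idx, value)
  when true, false
    statement.set(:boolean, idx, value)
  else
    statement.set(:string, idx, value.to_s) # fallback: stringify (sketch choice)
  end
end
```

Feeding mixed values through `bind_param` shows why the 32-bit check matters: an epoch-millisecond timestamp such as 3_000_000_000 would overflow `setInt` and must go through the long setter.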


@@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Appenders>
<File name="file" fileName="log4j2.log">
<PatternLayout pattern="%d{yyyy-mm-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</File>
</Appenders>
<Loggers>
<!-- If we need to figure out whats happening for development purposes, disable this -->
<Logger name="com.zaxxer.hikari" level="off" />
<Root level="debug">
<AppenderRef ref="file"/>
</Root>
</Loggers>
</Configuration>


@@ -1,13 +1,13 @@
 Gem::Specification.new do |s|
 s.name = 'logstash-output-jdbc'
-s.version = '5.3.0'
-s.licenses = ['Apache License (2.0)']
-s.summary = 'This plugin allows you to output to SQL, via JDBC'
-s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-jdbc'. This gem is not a stand-alone program"
-s.authors = ['the_angry_angel']
-s.email = 'karl+github@theangryangel.co.uk'
-s.homepage = 'https://github.com/theangryangel/logstash-output-jdbc'
-s.require_paths = ['lib']
+s.version = "0.3.0"
+s.licenses = [ "Apache License (2.0)" ]
+s.summary = "This plugin allows you to output to SQL, via JDBC"
+s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program"
+s.authors = ["the_angry_angel"]
+s.email = "karl+github@theangryangel.co.uk"
+s.homepage = "https://github.com/theangryangel/logstash-output-jdbc"
+s.require_paths = [ "lib" ]
 # Java only
 s.platform = 'java'
@@ -15,24 +15,24 @@ Gem::Specification.new do |s|
 # Files
 s.files = Dir.glob('{lib,spec}/**/*.rb') + Dir.glob('vendor/**/*') + %w(LICENSE.txt README.md)
 # Tests
 s.test_files = s.files.grep(%r{^(test|spec|features)/})
 # Special flag to let us know this is actually a logstash plugin
-s.metadata = { 'logstash_plugin' => 'true', 'logstash_group' => 'output' }
+s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
 # Gem dependencies
-s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2'
+s.add_runtime_dependency 'logstash-core-plugin-api', '~> 1.0'
 s.add_runtime_dependency 'stud'
 s.add_runtime_dependency 'logstash-codec-plain'
-s.requirements << "jar 'com.zaxxer:HikariCP', '2.7.2'"
-s.requirements << "jar 'org.apache.logging.log4j:log4j-slf4j-impl', '2.6.2'"
+s.requirements << "jar 'com.zaxxer:HikariCP', '2.4.2'"
+s.requirements << "jar 'org.slf4j:slf4j-log4j12', '1.7.21'"
 s.add_development_dependency 'jar-dependencies'
 s.add_development_dependency 'ruby-maven', '~> 3.3'
-s.add_development_dependency "logstash-devutils", "~> 1.3", ">= 1.3.1"
-s.add_development_dependency 'rubocop', '0.41.2'
+s.add_development_dependency 'logstash-devutils'
+s.add_development_dependency 'rubocop'
 end


@@ -1,19 +0,0 @@
-#!/usr/bin/env ruby -w
-
-seconds_to_reach = 10 * 60
-retry_max_interval = 128
-
-current_interval = 2
-total_interval = 0
-exceptions_count = 1
-
-loop do
-  break if total_interval > seconds_to_reach
-
-  exceptions_count += 1
-  current_interval = current_interval * 2 > retry_max_interval ? retry_max_interval : current_interval * 2
-  total_interval += current_interval
-end
-
-puts exceptions_count
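The helper script removed above answers one question: how many consecutive flush exceptions fit into ten minutes when the retry interval doubles from 2 seconds up to a 128-second cap — presumably to help pick a sensible `max_flush_exceptions` value. A self-contained sketch of the same arithmetic (the method name is illustrative, not from the plugin):

```ruby
# Count how many exceptions occur before the doubling backoff
# (capped at retry_max_interval seconds) accumulates seconds_to_reach.
# Defaults mirror the deleted helper script above.
def retries_to_reach(seconds_to_reach: 10 * 60, retry_max_interval: 128)
  current_interval = 2
  total_interval = 0
  exceptions_count = 1
  loop do
    break if total_interval > seconds_to_reach
    exceptions_count += 1
    current_interval = [current_interval * 2, retry_max_interval].min
    total_interval += current_interval
  end
  exceptions_count
end

puts retries_to_reach # => 10
```

With the defaults the backoff sums 4 + 12 + 28 + 60 + 124 + 252 + 380 + 508 + 636 seconds across attempts, crossing the ten-minute mark on the tenth exception.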


@@ -1,10 +1,8 @@
 #!/bin/bash
 wget http://search.maven.org/remotecontent?filepath=org/apache/derby/derby/10.12.1.1/derby-10.12.1.1.jar -O /tmp/derby.jar
-sudo apt-get install mysql-server postgresql-client postgresql -qq -y
-echo "create database logstash; grant all privileges on logstash.* to 'logstash'@'localhost' identified by 'logstash'; flush privileges;" | sudo -u root mysql
-echo "create user logstash PASSWORD 'logstash'; create database logstash; grant all privileges on database logstash to logstash;" | sudo -u postgres psql
+sudo apt-get install mysql-server -qq -y
+echo "create database logstash_output_jdbc_test;" | mysql -u root
 wget http://search.maven.org/remotecontent?filepath=mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar -O /tmp/mysql.jar
 wget http://search.maven.org/remotecontent?filepath=org/xerial/sqlite-jdbc/3.8.11.2/sqlite-jdbc-3.8.11.2.jar -O /tmp/sqlite.jar
-wget http://central.maven.org/maven2/org/postgresql/postgresql/42.1.4/postgresql-42.1.4.jar -O /tmp/postgres.jar


@ -1,5 +1,3 @@
export JDBC_DERBY_JAR=/tmp/derby.jar export JDBC_DERBY_JAR=/tmp/derby.jar
export JDBC_MYSQL_JAR=/tmp/mysql.jar export JDBC_MYSQL_JAR=/tmp/mysql.jar
export JDBC_SQLITE_JAR=/tmp/sqlite.jar export JDBC_SQLITE_JAR=/tmp/sqlite.jar
export JDBC_POSTGRES_JAR=/tmp/postgres.jar


@@ -4,34 +4,6 @@ require 'stud/temporary'
 require 'java'
 require 'securerandom'
-RSpec::Support::ObjectFormatter.default_instance.max_formatted_output_length = 80000
-
-RSpec.configure do |c|
-  def start_service(name)
-    cmd = "sudo /etc/init.d/#{name}* start"
-    `which systemctl`
-    if $?.success?
-      cmd = "sudo systemctl start #{name}"
-    end
-    `#{cmd}`
-  end
-
-  def stop_service(name)
-    cmd = "sudo /etc/init.d/#{name}* stop"
-    `which systemctl`
-    if $?.success?
-      cmd = "sudo systemctl stop #{name}"
-    end
-    `#{cmd}`
-  end
-end
 RSpec.shared_context 'rspec setup' do
   it 'ensure jar is available' do
     expect(ENV[jdbc_jar_env]).not_to be_nil, "#{jdbc_jar_env} not defined, required to run tests"
@@ -48,9 +20,7 @@ RSpec.shared_context 'when initializing' do
 end
 RSpec.shared_context 'when outputting messages' do
-  let(:logger) {
-    double("logger")
-  }
+  let(:logger) { double("logger") }
   let(:jdbc_test_table) do
     'logstash_output_jdbc_test'
@@ -60,76 +30,32 @@ RSpec.shared_context 'when outputting messages' do
     "DROP TABLE #{jdbc_test_table}"
   end
-  let(:jdbc_statement_fields) do
-    [
-      {db_field: "created_at", db_type: "datetime", db_value: '?', event_field: '@timestamp'},
-      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
-      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
-      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
-      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
-      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
-      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
-      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
-    ]
-  end
   let(:jdbc_create_table) do
-    fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]} #{entry[:db_type]} not null" }.join(", ")
-    "CREATE table #{jdbc_test_table} (#{fields})"
-  end
-  let(:jdbc_drop_table) do
-    "DROP table #{jdbc_test_table}"
+    "CREATE table #{jdbc_test_table} (created_at datetime not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit bit not null)"
   end
   let(:jdbc_statement) do
-    fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]}" }.join(", ")
-    values = jdbc_statement_fields.collect { |entry| "#{entry[:db_value]}" }.join(", ")
-    statement = jdbc_statement_fields.collect { |entry| entry[:event_field] }
-    statement.insert(0, "insert into #{jdbc_test_table} (#{fields}) values(#{values})")
+    ["insert into #{jdbc_test_table} (created_at, message, message_sprintf, static_int, static_bit) values(?, ?, ?, ?, ?)", '@timestamp', 'message', 'sprintf-%{message}', 1, true]
   end
   let(:systemd_database_service) do
     nil
   end
-  let(:event) do
-    # TODO: Auto generate fields from jdbc_statement_fields
-    LogStash::Event.new({
-      message: "test-message #{SecureRandom.uuid}",
-      float: 12.1,
-      bigint: 4000881632477184,
-      bool: true,
-      int: 1,
-      bigdec: BigDecimal.new("123.123")
-    })
+  let(:event_fields) do
+    { 'message' => "test-message #{SecureRandom.uuid}" }
   end
+  let(:event) { LogStash::Event.new(event_fields) }
   let(:plugin) do
-    # Setup logger
-    allow(LogStash::Outputs::Jdbc).to receive(:logger).and_return(logger)
-    # XXX: Suppress reflection logging. There has to be a better way around this.
-    allow(logger).to receive(:debug).with(/config LogStash::/)
-    # Suppress beta warnings.
-    allow(logger).to receive(:info).with(/Please let us know if you find bugs or have suggestions on how to improve this plugin./)
-    # Suppress start up messages.
-    expect(logger).to receive(:info).once.with(/JDBC - Starting up/)
     # Setup plugin
     output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
     output.register
-    output.logger = logger
-    output
-  end
-  before :each do
     # Setup table
-    c = plugin.instance_variable_get(:@pool).getConnection
+    c = output.instance_variable_get(:@pool).getConnection
     # Derby doesn't support IF EXISTS.
     # Seems like the quickest solution. Bleurgh.
@@ -146,16 +72,8 @@ RSpec.shared_context 'when outputting messages' do
     stmt.close
     c.close
   end
-  end
-
-  # Delete table after each output
-  after :each do
-    c = plugin.instance_variable_get(:@pool).getConnection
-    stmt = c.createStatement
-    stmt.executeUpdate(jdbc_drop_table)
-    stmt.close
-    c.close
-  end
   it 'should save a event' do
@@ -163,11 +81,8 @@ RSpec.shared_context 'when outputting messages' do
     # Verify the number of items in the output table
     c = plugin.instance_variable_get(:@pool).getConnection
-    # TODO replace this simple count with a check of the actual contents
     stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
-    stmt.setString(1, event.get('message'))
+    stmt.setString(1, event['message'])
     rs = stmt.executeQuery
     count = 0
     count = rs.getInt('total') while rs.next
@@ -178,39 +93,43 @@ RSpec.shared_context 'when outputting messages' do
   end
   it 'should not save event, and log an unretryable exception' do
-    e = event
-    original_event = e.get('message')
-    e.set('message', nil)
+    e = LogStash::Event.new({})
     expect(logger).to receive(:error).once.with(/JDBC - Exception. Not retrying/, Hash)
-    expect { plugin.multi_receive([event]) }.to_not raise_error
-    e.set('message', original_event)
+    expect { plugin.multi_receive([e]) }.to_not raise_error
   end
   it 'it should retry after a connection loss, and log a warning' do
-    skip "does not run as a service, or known issue with test" if systemd_database_service.nil?
+    skip "does not run as a service" if systemd_database_service.nil?
     p = plugin
     # Check that everything is fine right now
     expect { p.multi_receive([event]) }.not_to raise_error
-    stop_service(systemd_database_service)
-    # Start a thread to restart the service after the fact.
+    # Start a thread to stop and restart the service.
     t = Thread.new(systemd_database_service) { |systemd_database_service|
-      sleep 20
-      start_service(systemd_database_service)
+      start_stop_cmd = 'sudo /etc/init.d/%<service>s* %<action>s'
+      `which systemctl`
+      if $?.success?
+        start_stop_cmd = 'sudo systemctl %<action>s %<service>s'
+      end
+      cmd = start_stop_cmd % { action: 'stop', service: systemd_database_service }
+      `#{cmd}`
+      sleep 10
+      cmd = start_stop_cmd % { action: 'start', service: systemd_database_service }
+      `#{cmd}`
     }
-    t.run
+    # Wait a few seconds to the service to stop
+    sleep 5
     expect(logger).to receive(:warn).at_least(:once).with(/JDBC - Exception. Retrying/, Hash)
     expect { p.multi_receive([event]) }.to_not raise_error
+    # Wait for the thread to finish
     t.join
   end
 end


@@ -2,25 +2,17 @@ require_relative '../jdbc_spec_helper'
 describe 'logstash-output-jdbc: derby', if: ENV['JDBC_DERBY_JAR'] do
   include_context 'rspec setup'
+  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do
     'JDBC_DERBY_JAR'
   end

-  let(:jdbc_statement_fields) do
-    [
-      {db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
-      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
-      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
-      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
-      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
-      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
-      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
-      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
-    ]
+  let(:jdbc_create_table) do
+    "CREATE table #{jdbc_test_table} (created_at timestamp not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit boolean not null)"
   end

   let(:jdbc_settings) do
     {
       'driver_class' => 'org.apache.derby.jdbc.EmbeddedDriver',


@@ -2,6 +2,7 @@ require_relative '../jdbc_spec_helper'
 describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
   include_context 'rspec setup'
+  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do
@@ -15,7 +16,7 @@ describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
   let(:jdbc_settings) do
     {
       'driver_class' => 'com.mysql.jdbc.Driver',
-      'connection_string' => 'jdbc:mysql://localhost/logstash?user=logstash&password=logstash',
+      'connection_string' => 'jdbc:mysql://localhost/logstash_output_jdbc_test?user=root',
       'driver_jar_path' => ENV[jdbc_jar_env],
       'statement' => jdbc_statement,
       'max_flush_exceptions' => 1
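For context, the settings exercised by this spec map directly onto a Logstash pipeline configuration. A sketch only — the jar path and root credentials come from the test environment above, the insert statement is trimmed to two columns, and nothing here is production-ready:

```
output {
  jdbc {
    driver_jar_path => "/tmp/mysql.jar"
    driver_class => "com.mysql.jdbc.Driver"
    connection_string => "jdbc:mysql://localhost/logstash_output_jdbc_test?user=root"
    statement => [ "insert into logstash_output_jdbc_test (created_at, message) values(?, ?)", "@timestamp", "message" ]
    max_flush_exceptions => 1
  }
}
```

The array form of `statement` is the same shape the shared `jdbc_statement` helper builds: the SQL with `?` placeholders first, followed by the event fields bound to each placeholder in order.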


@@ -1,41 +0,0 @@
-require_relative '../jdbc_spec_helper'
-
-describe 'logstash-output-jdbc: postgres', if: ENV['JDBC_POSTGRES_JAR'] do
-  include_context 'rspec setup'
-  include_context 'when outputting messages'
-
-  let(:jdbc_jar_env) do
-    'JDBC_POSTGRES_JAR'
-  end
-
-  # TODO: Postgres doesnt kill connections fast enough for the test to pass
-  # Investigate options.
-  #let(:systemd_database_service) do
-  #  'postgresql'
-  #end
-
-  let(:jdbc_statement_fields) do
-    [
-      {db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
-      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
-      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
-      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
-      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
-      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
-      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
-      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
-    ]
-  end
-
-  let(:jdbc_settings) do
-    {
-      'driver_class' => 'org.postgresql.Driver',
-      'connection_string' => 'jdbc:postgresql://localhost/logstash?user=logstash&password=logstash',
-      'driver_jar_path' => ENV[jdbc_jar_env],
-      'statement' => jdbc_statement,
-      'max_flush_exceptions' => 1
-    }
-  end
-end


@@ -8,6 +8,7 @@ describe 'logstash-output-jdbc: sqlite', if: ENV['JDBC_SQLITE_JAR'] do
   end
   include_context 'rspec setup'
+  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do