Compare commits


89 Commits

Author SHA1 Message Date
Karl Southern
eacd2a2c38 temporarily revert rubocop version. Unsupported under v5's java env. 2017-11-26 13:11:55 +00:00
Karl Southern
309768c893 Update README indicating logstash v6 support. Add some additional configuration options to the README. Upgrade Rubocop. 2017-11-26 13:02:52 +00:00
Karl
aba4e08bf5
Update jdbc_spec_helper.rb
Apparently MySQL on Travis doesn't have the BigDecimal type?
2017-11-08 17:31:59 +00:00
Karl Southern
e91db61e8c First pass at making it a bit quicker to add more tests. 2017-11-08 17:25:39 +00:00
Karl Southern
5f2a99c4a6 CHANGELOG 2017-11-08 14:14:19 +00:00
Karl Southern
8e73958359 Release 5.3.0 2017-11-08 14:09:09 +00:00
Karl Southern
f48c12a8da Bumps Vagrant env. Bumps jar deps. Bumps version number, ready for tagging after tests 2017-11-08 13:45:29 +00:00
Karl Southern
7a2996e985 Fix the broken test 2017-11-08 13:24:23 +00:00
Karl
4c04b8c24d
Update jdbc_derby_spec.rb 2017-11-08 13:17:55 +00:00
Karl
fe982c95aa
Update jdbc_spec_helper.rb 2017-11-08 13:17:36 +00:00
Karl
d1a733d195
Update jdbc.rb
Provisionally adds whole event encoding. Attempts to give an escape plan for users who may have the same key in their event data (to preserve backwards compatibility this is disabled by default). Address 

Also adds provisional support for BigDecimal. This is untested.
2017-11-08 13:06:02 +00:00
Karl
21217f7b03
Update THANKS.md
Adds mlkmhd
2017-11-08 12:49:11 +00:00
Karl
2bdb75f1b7
Merge pull request from mlkmhd/master
log event instead of insert query
2017-11-08 12:48:45 +00:00
Karl
6b5398b152
Update jdbc.rb
Update plugin so that the max_pool_size matches documentation.
2017-11-08 12:47:55 +00:00
Your Name
daebe44f32 Add the event to the JDBC exception log instead of the statement query, because some JDBC drivers (like Oracle's) do not implement toString() on PreparedStatement and print the Java object hashcode instead of the actual query; other implementations, such as PostgreSQL and MySQL, do not have this problem 2017-10-23 11:24:28 +03:30
Karl
079c3a6c78 Merge pull request from mlkmhd/master
Adding more sql exception detail in log
2017-10-19 09:56:21 +01:00
Your Name
3804eb59d2 adding more sql exception detail in log 2017-10-18 12:41:14 +03:30
Karl
ef6ed66cdd Update THANKS.md 2017-04-12 12:44:08 +01:00
Karl
147cd3d67b Create THANKS.md 2017-04-12 12:43:08 +01:00
Karl
6affac0a0c Update sql-server.md
Add a 'with thanks' credit.
2017-04-12 12:35:56 +01:00
Karl
508c769650 Merge pull request from MassimoSporchia/master
Added new example and driver_jar_path parameter
2017-04-12 12:35:00 +01:00
Massimo Sporchia
d36d659e16 Added new example and driver_jar_path parameter
Maybe it's just me, but I kept wondering how to add static strings.
Added the driver_jar_path, just in case
2017-04-12 12:59:15 +02:00
Karl Southern
51a04faca3 Fix JDK issue with backports. Bump version 2017-04-09 09:40:49 +01:00
Karl Southern
cdd88fe322 Adds JSON support with non-sprintf syntax 2017-04-09 09:17:05 +01:00
Karl
e74d67b477 Update CHANGELOG.md 2017-04-01 12:17:42 +01:00
Karl Southern
710791c3aa Initial commit for 5.2.0 - addresses HikariCP logging issues 2017-04-01 12:01:27 +01:00
Karl
ccb30c7edd Fix 2017-01-25 19:34:56 +00:00
Karl Southern
6bae1d81e3 Apache phoenix-thin support 2016-12-17 13:07:48 +00:00
Karl Southern
667e066d74 Release v5 2016-11-03 16:25:19 +00:00
Karl Southern
8e15bc5f45 Prepare for v5 2016-11-03 16:00:41 +00:00
Karl
43142287de Update ISSUE_TEMPLATE.md 2016-10-20 09:20:00 +01:00
Karl
2a6f048fa0 Update README.md
Removes incorrect information on unsafe_statement
2016-10-05 11:47:33 +01:00
Karl
318c1bd86a Update jdbc.rb
Logstash v5 nomenclature for threadsafety/concurrency change
2016-10-05 11:44:03 +01:00
Karl
3085606eb7 Update ISSUE_TEMPLATE.md 2016-09-27 17:59:32 +01:00
Karl Southern
f0d88a237f Start. Stop. Probably stop trying to do this before bed. 2016-09-15 22:00:34 +01:00
Karl Southern
ca1c71ea68 Travis 2016-09-15 21:32:34 +01:00
Karl Southern
cb4cefdfad More duh. 2016-09-15 21:02:05 +01:00
Karl Southern
44e1947f31 Duh. 2016-09-15 20:43:48 +01:00
Karl Southern
64a6bcfd55 Forward port from v2.x branch. Try and address 2016-09-15 20:33:40 +01:00
Karl Southern
238ef153e4 Start of setup for Logstash's log4j2 integration 2016-09-02 19:01:28 +01:00
Karl Southern
3bdd8ef3a8 Wut? 2016-09-02 15:18:15 +01:00
Karl Southern
9164605aae Merge branch 'master' of github.com:theangryangel/logstash-output-jdbc 2016-09-02 14:53:11 +01:00
Karl Southern
d2f99b05d2 Remove log4j setup. 2016-09-02 14:48:01 +01:00
Karl
c110bdd551 Update README.md 2016-08-28 23:20:17 +01:00
Karl Southern
b3a6de6340 Rubocop no longer supports 1.9 with newer releases 2016-08-28 22:57:47 +01:00
Karl Southern
37631a62b7 Forward port connection_test configuration option from v2.x 2016-08-28 22:24:26 +01:00
Karl Southern
76e0f439a0 Bring across settings from v2.x 2016-07-13 17:44:47 +01:00
Karl Southern
43eb5d969d Multiple types in statement now supported 2016-07-07 11:00:33 +01:00
Karl Southern
0f37792177 Passing current tests for issue 46 2016-07-07 09:32:48 +01:00
Karl Southern
0e2e883cd1 Different api for v5. 2016-07-07 09:04:06 +01:00
Karl Southern
34708157f4 Provisionally address issue 46 for v5 2016-07-07 08:53:49 +01:00
Karl Southern
6c852d21dc Rollback from trusty. Not adding anything at this point. 2016-06-29 21:25:05 +01:00
Karl Southern
53eaee001d Detect if systemd is available for specs. Fallback to sysvinit. 2016-06-29 21:06:56 +01:00
Karl Southern
fe131f750e TravisCI trusty workaround 2016-06-29 20:44:58 +01:00
Karl Southern
f04e00019b Yeah. I'm an idiot 2016-06-29 20:40:11 +01:00
Karl Southern
867fd37805 Travis? Plz. 2016-06-29 20:38:47 +01:00
Karl Southern
b3e8d1a0f8 Fix missing bundler in travis-ci trusty 2016-06-29 20:35:29 +01:00
Karl Southern
25d14f2624 TravisCI sudo 2016-06-29 20:28:33 +01:00
Karl Southern
ab566ee969 Adds tests for connection loss exception handling, and unretryable SQL exceptions 2016-06-29 18:48:12 +01:00
Karl Southern
7d699e400c Bring fix from v2.x branch for exception retry handling NameError exception 2016-06-29 13:52:29 +01:00
Karl
542003e4e5 Update ISSUE_TEMPLATE.md 2016-06-22 09:20:01 +01:00
Karl
290aa63d2d Update ISSUE_TEMPLATE.md 2016-06-22 09:17:55 +01:00
Karl
a61dd21046 Create ISSUE_TEMPLATE.md 2016-06-22 09:09:35 +01:00
Karl
4a573cb599 Update logstash-output-jdbc.gemspec
Update jar deps. Since Logstash v5 requires Java 8 we don't need to worry about  in this branch anymore.
2016-06-16 17:09:57 +01:00
Karl
80560f6692 Update .travis.yml
Match elastic/logstash travis file.
2016-06-16 16:57:39 +01:00
Karl
2c00c5d016 Update CHANGELOG.md
Merge changes from v2 branch CHANGELOG.md
2016-06-16 16:33:46 +01:00
Karl Southern
a26b6106d4 Trying out some new stuff. 2016-05-27 12:09:48 +01:00
Karl Southern
d6869f594c Trying something new. 2016-05-20 16:30:48 +01:00
Karl Southern
5ec985b0df Fancy 2016-05-17 18:25:35 +01:00
Karl Southern
5a1fdb7c7f Files to ignore 2016-05-17 17:25:37 +01:00
Karl Southern
b14d61ccf0 Pre-release checks test. 2016-05-17 17:24:25 +01:00
Karl Southern
baaeba3c07 Adds more specific error exception checks to tests 2016-05-17 16:31:35 +01:00
Karl Southern
d362e791e5 Rubocop, in preparation for pre-release rake task that sanity checks for release 2016-05-17 16:21:37 +01:00
Karl Southern
5f0f897114 Fix rakefile 2016-05-17 12:53:13 +01:00
Karl Southern
721e128f29 Add success state. Just in case. 2016-05-17 12:05:02 +01:00
Karl Southern
fe2e23ac27 More v5 adjustments 2016-05-17 11:29:49 +01:00
Karl Southern
85b3f31051 Update README 2016-05-14 22:29:32 +01:00
Karl Southern
e32b6e9bbd Switches to what I believe is the preferred method for retrying in logstash v5 2016-05-14 22:17:34 +01:00
Karl Southern
f1202f6454 Stop breaking the bundler cache 2016-05-13 22:38:30 +01:00
Karl Southern
26a32079f1 Adds Microsoft Always-On note and credit 2016-05-13 22:32:31 +01:00
Karl Southern
8d27e0f90d Changelog 2016-05-13 22:14:47 +01:00
Karl Southern
df811f3d29 Switches from slf4j-nop to log4j. Uses built-in logstash log4j setup. Switches to jar-dependencies (vendor'ed) instead of version-controlled jars. Update logstash-api to v2. Does not yet support multi_receive 2016-05-13 22:12:57 +01:00
Karl Southern
d056093ab8 Update changelog 2016-05-03 17:42:47 +01:00
Karl Southern
e83af287f0 Fix examples 2016-05-03 17:11:28 +01:00
Karl Southern
0ff6f16ec7 Quick and dirty setup so it's a bit quicker to use vagrant. I'll clean this up more later. 2016-05-03 17:09:21 +01:00
Karl Southern
e6e9ac3b04 Addressing tests. 2016-05-03 15:55:36 +01:00
Karl Southern
707c005979 Tests. 2016-05-03 15:28:01 +01:00
Karl Southern
8f5ceb451a Merge v2.x fixes 2016-05-02 18:14:12 +01:00
Karl Southern
e6537d053f Switch master to logstash v5 development 2016-04-16 14:49:03 +01:00
25 changed files with 486 additions and 98 deletions

26
.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,26 @@
<!--
Trouble installing the plugin under Logstash 2.4.0 with the message "duplicate gems"? See https://github.com/elastic/logstash/issues/5852
Please remember:
- I have not used every database engine in the world
- I have not got access to every database engine in the world
- Any support I provide is done in my own personal time which is limited
- Understand that I won't always have the answer immediately
Please provide as much information as possible.
-->
<!--- Provide a general summary of the issue in the Title above -->
## Expected & Actual Behavior
<!--- If you're describing a bug, tell us what should happen, and what is actually happening, and if necessary how to reproduce it -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version of plugin used:
* Version of Logstash used:
* Database engine & version you're connecting to:
* Have you checked you've met the Logstash requirements for Java versions?:

25
.rubocop.yml Normal file

@@ -0,0 +1,25 @@
# I don't care for underscores in numbers.
Style/NumericLiterals:
Enabled: false
Style/ClassAndModuleChildren:
Enabled: false
Metrics/AbcSize:
Enabled: false
Metrics/CyclomaticComplexity:
Max: 9
Metrics/PerceivedComplexity:
Max: 10
Metrics/LineLength:
Enabled: false
Metrics/MethodLength:
Max: 50
Style/FileName:
Exclude:
- 'lib/logstash-output-jdbc_jars.rb'

@@ -2,7 +2,9 @@ sudo: required
language: ruby
cache: bundler
rvm:
- jruby
- jruby-1.7.25
jdk:
- oraclejdk8
before_script:
- bundle exec rake vendor
- bundle exec rake install_jars

@@ -1,7 +1,25 @@
# Change Log
All notable changes to this project will be documented in this file, from 0.2.0.
## [0.3.1] = 2016-08-28
## [5.3.0] - 2017-11-08
- Adds configuration options `enable_event_as_json_keyword` and `event_as_json_keyword`
- Adds BigDecimal support
- Adds additional logging for debugging purposes (with thanks to @mlkmhd's work)
## [5.2.1] - 2017-04-09
- Adds Array and Hash to_json support for non-sprintf syntax
## [5.2.0] - 2017-04-01
- Upgrades HikariCP to latest
- Fixes HikariCP logging integration issues
## [5.1.0] - 2016-12-17
- phoenix-thin fixes for issue #60
## [5.0.0] - 2016-11-03
- logstash v5 support
## [0.3.1] - 2016-08-28
- Adds connection_test configuration option, to prevent the connection test from occurring, allowing the error to be suppressed.
Useful for cockroachdb deployments. https://github.com/theangryangel/logstash-output-jdbc/issues/53

@@ -21,7 +21,7 @@ See CHANGELOG.md
Released versions are available via rubygems, and typically tagged.
For development:
- See master branch for logstash v5 (currently **development only**)
- See master branch for logstash v5
- See v2.x branch for logstash v2
- See v1.5 branch for logstash v1.5
- See v1.4 branch for logstash 1.4
@@ -37,24 +37,27 @@ For development:
## Configuration options
| Option | Type | Description | Required? | Default |
| ------ | ---- | ----------- | --------- | ------- |
| driver_class | String | Specify a driver class if autoloading fails | No | |
| driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
| driver_jar_path | String | File path to jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
| connection_string | String | JDBC connection URL | Yes | |
| connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to suppress an error. Cockroach with the postgres JDBC driver is such an example. | No | Yes |
| username | String | JDBC username - this is optional as it may be included in the connection string, for many drivers | No | |
| password | String | JDBC password - this is optional as it may be included in the connection string, for many drivers | No | |
| statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared, all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (i.e. "@timestamp", or "host") or a formatted string (i.e. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Please be aware that there is also a potential performance penalty as each event must be evaluated and inserted into SQL one at a time, whereas when this is false multiple events are inserted at once. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
| max_pool_size | Number | Maximum number of connections to open to the SQL server at any 1 time. Default set to same as Logstash default number of workers | No | 24 |
| connection_timeout | Number | Number of seconds before a SQL connection is closed | No | 2800 |
| flush_size | Number | Maximum number of entries to buffer before sending to SQL - if this is reached before idle_flush_time | No | 1000 |
| max_flush_exceptions | Number | Number of sequential flushes which cause an exception, before the set of events are discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval, if your SQL server is not highly available | No | 10 |
| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retry-able, but non-standard SQL state codes in its exceptions. | No | [] |
| Option | Type | Description | Required? | Default |
| ------ | ---- | ----------- | --------- | ------- |
| driver_class | String | Specify a driver class if autoloading fails | No | |
| driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
| driver_jar_path | String | File path to jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
| connection_string | String | JDBC connection URL | Yes | |
| connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to suppress an error. Cockroach with the postgres JDBC driver is such an example. | No | Yes |
| connection_test_query | String | Connection test and init query string, required for some JDBC drivers that don't support isValid(). Typically you'd set this to "SELECT 1" | No | |
| username | String | JDBC username - this is optional as it may be included in the connection string, for many drivers | No | |
| password | String | JDBC password - this is optional as it may be included in the connection string, for many drivers | No | |
| statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared, all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (i.e. "@timestamp", or "host") or a formatted string (i.e. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
| max_pool_size | Number | Maximum number of connections to open to the SQL server at any 1 time | No | 5 |
| connection_timeout | Number | Number of seconds before a SQL connection is closed | No | 2800 |
| flush_size | Number | Maximum number of entries to buffer before sending to SQL - if this is reached before idle_flush_time | No | 1000 |
| max_flush_exceptions | Number | Number of sequential flushes which cause an exception, before the set of events are discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval, if your SQL server is not highly available | No | 10 |
| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retry-able, but non-standard SQL state codes in its exceptions. | No | [] |
| event_as_json_keyword | String | The magic keyword that the plugin looks for to convert the entire event into a JSON object. As Logstash does not support this out of the box with its `sprintf` implementation, you can use whatever this field is set to in the statement parameters | No | @event |
| enable_event_as_json_keyword | Boolean | Enables the magic keyword set in the configuration option `event_as_json_keyword`. Without this enabled the plugin will not convert the `event_as_json_keyword` into a JSON encoding of the entire event. | No | False |
## Example configurations
Example logstash configurations can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.
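As an illustration of the whole-event JSON options added in 5.3.0, here is a minimal sketch; the driver jar path, connection string, and table layout are placeholders rather than a tested example:
```
input
{
  stdin { }
}
output {
  jdbc {
    driver_jar_path => '/opt/postgresql-42.1.4.jar'
    connection_string => 'jdbc:postgresql://localhost/logstash?user=logstash&password=logstash'
    # With the keyword enabled, the "@event" parameter below is replaced by the whole event encoded as JSON
    enable_event_as_json_keyword => true
    event_as_json_keyword => "@event"
    statement => [ "INSERT INTO log (host, message, payload) VALUES(?, ?, ?)", "host", "message", "@event" ]
  }
}
```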

18
THANKS.md Normal file

@@ -0,0 +1,18 @@
logstash-output-jdbc is a project originally created by Karl Southern
(the_angry_angel), but there are a number of people that have contributed
or implemented key features over time. We do our best to keep this list
up-to-date, but you can also have a look at the nice contributor graphs
produced by GitHub: https://github.com/theangryangel/logstash-output-jdbc/graphs/contributors
* [hordijk](https://github.com/hordijk)
* [dmitryakadiamond](https://github.com/dmitryakadiamond)
* [MassimoSporchia](https://github.com/MassimoSporchia)
* [ebuildy](https://github.com/ebuildy)
* [kushtrimjunuzi](https://github.com/kushtrimjunuzi)
* [josemazo](https://github.com/josemazo)
* [aceoliver](https://github.com/aceoliver)
* [roflmao](https://github.com/roflmao)
* [onesuper](https://github.com/onesuper)
* [phr0gz](https://github.com/phr0gz)
* [jMonsinjon](https://github.com/jMonsinjon)
* [mlkmhd](https://github.com/mlkmhd)

35
Vagrantfile vendored Normal file

@@ -0,0 +1,35 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
JRUBY_VERSION = "jruby-1.7"
Vagrant.configure(2) do |config|
config.vm.define "debian" do |deb|
deb.vm.box = 'debian/stretch64'
deb.vm.synced_folder '.', '/vagrant', type: :virtualbox
deb.vm.provision 'shell', inline: <<-EOP
apt-get update
apt-get install openjdk-8-jre ca-certificates-java git curl -y -q
curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
usermod -a -G rvm vagrant
EOP
end
config.vm.define "centos" do |centos|
centos.vm.box = 'centos/7'
centos.ssh.insert_key = false # https://github.com/mitchellh/vagrant/issues/7610
centos.vm.synced_folder '.', '/vagrant', type: :virtualbox
centos.vm.provision 'shell', inline: <<-EOP
yum update
yum install java-1.7.0-openjdk
curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
usermod -a -G rvm vagrant
EOP
end
end

@@ -1,6 +1,7 @@
# Example: Apache Phoenix (HBase SQL)
* Tested with Ubuntu 14.04.03 / Logstash 2.1 / Apache Phoenix 4.6
* <!> HBase and Zookeeper must be both accessible from logstash machine <!>
* Please see apache-phoenix-thin-hbase-sql for phoenix-thin. The examples are different.
```
input
{

@@ -0,0 +1,28 @@
# Example: Apache Phoenix-Thin (HBase SQL)
**There are special instructions for phoenix-thin. Please read carefully!**
* Tested with Logstash 5.1.1 / Apache Phoenix 4.9
* HBase and Zookeeper must be both accessible from logstash machine
* At time of writing phoenix-client does not include all the required jars (see https://issues.apache.org/jira/browse/PHOENIX-3476), therefore you must *not* use the driver_jar_path configuration option and instead:
- `mkdir -p vendor/jar/jdbc` in your logstash installation path
- copy `phoenix-queryserver-client-4.9.0-HBase-1.2.jar` from the phoenix distribution into this folder
- download the calcite jar from https://mvnrepository.com/artifact/org.apache.calcite/calcite-avatica/1.6.0 and place it into your `vendor/jar/jdbc` directory
* Use the following configuration as a base. The connection_test => false and connection_test_query are very important and should not be omitted. Phoenix-thin does not appear to support isValid and these are necessary for the connection to be added to the pool and be available.
```
input
{
stdin { }
}
output {
jdbc {
connection_test => false
connection_test_query => "select 1"
driver_class => "org.apache.phoenix.queryserver.client.Driver"
connection_string => "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF"
statement => [ "UPSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```

18
examples/cockroachdb.md Normal file

@@ -0,0 +1,18 @@
# Example: CockroachDB
- Tested using postgresql-9.4.1209.jre6.jar
- **Warning** cockroach is known to throw a warning on connection test (at time of writing), thus the connection test is explicitly disabled.
```
input
{
stdin { }
}
output {
jdbc {
driver_jar_path => '/opt/postgresql-9.4.1209.jre6.jar'
connection_test => false
connection_string => 'jdbc:postgresql://127.0.0.1:26257/test?user=root'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

@@ -9,8 +9,9 @@ input
}
output {
jdbc {
driver_class => "com.mysql.jdbc.Driver"
connection_string => "jdbc:mysql://HOSTNAME/DATABASE?user=USER&password=PASSWORD"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST(? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

@@ -1,5 +1,6 @@
# Example: SQL Server
* Tested using http://msdn.microsoft.com/en-gb/sqlserver/aa937724.aspx
* Known to be working with Microsoft SQL Server Always-On Cluster (see https://github.com/theangryangel/logstash-output-jdbc/issues/37). With thanks to [@phr0gz](https://github.com/phr0gz)
```
input
{
@@ -7,8 +8,25 @@ input
}
output {
jdbc {
connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password;autoReconnect=true;"
driver_jar_path => '/opt/sqljdbc42.jar'
connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```
Another example, with mixed static strings and parameters, with thanks to [@MassimoSporchia](https://github.com/MassimoSporchia)
```
input
{
stdin { }
}
output {
jdbc {
driver_jar_path => '/opt/sqljdbc42.jar'
connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
statement => [ "INSERT INTO log (host, timestamp, message, comment) VALUES(?, ?, ?, 'static string')", "host", "@timestamp", "message" ]
}
}
```

@@ -10,6 +10,7 @@ output {
stdout { }
jdbc {
driver_class => "org.sqlite.JDBC"
connection_string => 'jdbc:sqlite:test.db'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}

@@ -1,5 +1,5 @@
# encoding: utf-8
require 'logstash/environment'
root_dir = File.expand_path(File.join(File.dirname(__FILE__), ".."))
LogStash::Environment.load_runtime_jars! File.join(root_dir, "vendor")
root_dir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
LogStash::Environment.load_runtime_jars! File.join(root_dir, 'vendor')

@@ -5,6 +5,8 @@ require 'concurrent'
require 'stud/interval'
require 'java'
require 'logstash-output-jdbc_jars'
require 'json'
require 'bigdecimal'
# Write events to a SQL engine, using JDBC.
#
@@ -12,7 +14,7 @@ require 'logstash-output-jdbc_jars'
# includes correctly crafting the SQL statement, and matching the number of
# parameters correctly.
class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
declare_threadsafe! if self.respond_to?(:declare_threadsafe!)
concurrency :shared
STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze
@@ -63,7 +65,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
config :unsafe_statement, validate: :boolean, default: false
# Number of connections in the pool to maintain
config :max_pool_size, validate: :number, default: 24
config :max_pool_size, validate: :number, default: 5
# Connection timeout
config :connection_timeout, validate: :number, default: 10000
@@ -86,6 +88,10 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
# Run a connection test on start.
config :connection_test, validate: :boolean, default: true
# Connection test and init string, required for some JDBC endpoints
# notably phoenix-thin - see logstash-output-jdbc issue #60
config :connection_test_query, validate: :string, required: false
# Maximum number of sequential failed attempts, before we stop retrying.
# If set to < 1, then it will infinitely retry.
# At the default values this is a little over 10 minutes
@@ -94,11 +100,16 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
config :max_repeat_exceptions, obsolete: 'This has been replaced by max_flush_exceptions - which behaves slightly differently. Please check the documentation.'
config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'
# Allows the whole event to be converted to JSON
config :enable_event_as_json_keyword, validate: :boolean, default: false
# The magic key used to convert the whole event to JSON. If you need this, and you have the default in your events, you can use this to change your magic keyword.
config :event_as_json_keyword, validate: :string, default: '@event'
def register
@logger.info('JDBC - Starting up')
LogStash::Logger.setup_log4j(@logger)
load_jar_files!
@stopping = Concurrent::AtomicBoolean.new(false)
@@ -122,10 +133,6 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
end
end
def receive(event)
retrying_submit([event])
end
def close
@stopping.make_true
@pool.close
@@ -151,12 +158,17 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
validate_connection_timeout = (@connection_timeout / 1000) / 2
if !@connection_test_query.nil? and @connection_test_query.length > 1
@pool.setConnectionTestQuery(@connection_test_query)
@pool.setConnectionInitSql(@connection_test_query)
end
return unless @connection_test
# Test connection
test_connection = @pool.getConnection
unless test_connection.isValid(validate_connection_timeout)
@logger.error('JDBC - Connection is not reporting as validate. Either connection is invalid, or driver is not getting the appropriate response.')
@logger.warn('JDBC - Connection is not reporting as validate. Either connection is invalid, or driver is not getting the appropriate response.')
end
test_connection.close
end
@@ -177,13 +189,13 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
File.join(File.dirname(__FILE__), '../../../vendor/jar/jdbc/*.jar')
end
@logger.debug('JDBC - jarpath', path: jarpath)
@logger.trace('JDBC - jarpath', path: jarpath)
jars = Dir[jarpath]
raise LogStash::ConfigurationError, 'JDBC - No jars found. Have you read the README?' if jars.empty?
jars.each do |jar|
@logger.debug('JDBC - Loaded jar', jar: jar)
@logger.trace('JDBC - Loaded jar', jar: jar)
require jar
end
end
@@ -196,7 +208,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
begin
connection = @pool.getConnection
rescue => e
log_jdbc_exception(e, true)
log_jdbc_exception(e, true, nil)
# If a connection is not available, then the server has gone away
# We're not counting that towards our retry count.
return events, false
@@ -210,7 +222,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
statement = add_statement_event_params(statement, event) if @statement.length > 1
statement.execute
rescue => e
if retry_exception?(e)
if retry_exception?(e, event.to_json())
events_to_retry.push(event)
end
ensure
@@ -257,15 +269,17 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
def add_statement_event_params(statement, event)
@statement[1..-1].each_with_index do |i, idx|
if i.is_a? String
value = event[i]
if @enable_event_as_json_keyword == true and i.is_a? String and i == @event_as_json_keyword
value = event.to_json
elsif i.is_a? String
value = event.get(i)
if value.nil? and i =~ /%\{/
value = event.sprintf(i)
end
else
value = i
end
case value
when Time
# See LogStash::Timestamp, below, for the why behind strftime.
@@ -280,11 +294,20 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
# strftime appears to be the most reliable across drivers.
statement.setString(idx + 1, value.time.strftime(STRFTIME_FMT))
when Fixnum, Integer
statement.setInt(idx + 1, value)
if value > 2147483647 or value < -2147483648
statement.setLong(idx + 1, value)
else
statement.setInt(idx + 1, value)
end
when BigDecimal
# TODO: There has to be a better way than this. Find it.
statement.setBigDecimal(idx + 1, java.math.BigDecimal.new(value.to_s))
when Float
statement.setFloat(idx + 1, value)
when String
statement.setString(idx + 1, value)
when Array, Hash
statement.setString(idx + 1, value.to_json)
when true, false
statement.setBoolean(idx + 1, value)
else
@@ -295,20 +318,23 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
statement
end
def retry_exception?(exception)
def retry_exception?(exception, event)
retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0,2]) or @retry_sql_states.include?(exception.getSQLState)))
log_jdbc_exception(exception, retrying)
log_jdbc_exception(exception, retrying, event)
retrying
end
def log_jdbc_exception(exception, retrying)
def log_jdbc_exception(exception, retrying, event)
current_exception = exception
log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying') + '.'
log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying')
log_method = (retrying ? 'warn' : 'error')
loop do
@logger.send(log_method, log_text, :exception => current_exception, :backtrace => current_exception.backtrace)
# TODO reformat event output so that it only shows the fields necessary.
@logger.send(log_method, log_text, :exception => current_exception, :statement => @statement[0], :event => event)
if current_exception.respond_to? 'getNextException'
current_exception = current_exception.getNextException()

17
log4j2.xml Normal file

@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Appenders>
<File name="file" fileName="log4j2.log">
<PatternLayout pattern="%d{yyyy-mm-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</File>
</Appenders>
<Loggers>
<!-- If we need to figure out what's happening for development purposes, disable this -->
<Logger name="com.zaxxer.hikari" level="off" />
<Root level="debug">
<AppenderRef ref="file"/>
</Root>
</Loggers>
</Configuration>

@@ -1,13 +1,13 @@
Gem::Specification.new do |s|
s.name = 'logstash-output-jdbc'
s.version = "0.3.1"
s.licenses = [ "Apache License (2.0)" ]
s.summary = "This plugin allows you to output to SQL, via JDBC"
s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/plugin install gemname. This gem is not a stand-alone program"
s.authors = ["the_angry_angel"]
s.email = "karl+github@theangryangel.co.uk"
s.homepage = "https://github.com/theangryangel/logstash-output-jdbc"
s.require_paths = [ "lib" ]
s.version = '5.3.0'
s.licenses = ['Apache License (2.0)']
s.summary = 'This plugin allows you to output to SQL, via JDBC'
s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-jdbc'. This gem is not a stand-alone program"
s.authors = ['the_angry_angel']
s.email = 'karl+github@theangryangel.co.uk'
s.homepage = 'https://github.com/theangryangel/logstash-output-jdbc'
s.require_paths = ['lib']
# Java only
s.platform = 'java'
@@ -15,24 +15,24 @@ Gem::Specification.new do |s|
# Files
s.files = Dir.glob('{lib,spec}/**/*.rb') + Dir.glob('vendor/**/*') + %w(LICENSE.txt README.md)
# Tests
# Tests
s.test_files = s.files.grep(%r{^(test|spec|features)/})
# Special flag to let us know this is actually a logstash plugin
s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }
s.metadata = { 'logstash_plugin' => 'true', 'logstash_group' => 'output' }
# Gem dependencies
s.add_runtime_dependency 'logstash-core-plugin-api', '~> 1.0'
s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2'
s.add_runtime_dependency 'stud'
s.add_runtime_dependency 'logstash-codec-plain'
s.requirements << "jar 'com.zaxxer:HikariCP', '2.4.2'"
s.requirements << "jar 'org.slf4j:slf4j-log4j12', '1.7.21'"
s.requirements << "jar 'com.zaxxer:HikariCP', '2.7.2'"
s.requirements << "jar 'org.apache.logging.log4j:log4j-slf4j-impl', '2.6.2'"
s.add_development_dependency 'jar-dependencies'
s.add_development_dependency 'ruby-maven', '~> 3.3'
s.add_development_dependency 'logstash-devutils'
s.add_development_dependency "logstash-devutils", "~> 1.3", ">= 1.3.1"
s.add_development_dependency 'rubocop', '0.41.2'
end

19
scripts/minutes_to_retries.rb Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env ruby -w
seconds_to_reach = 10 * 60
retry_max_interval = 128
current_interval = 2
total_interval = 0
exceptions_count = 1
loop do
break if total_interval > seconds_to_reach
exceptions_count += 1
current_interval = current_interval * 2 > retry_max_interval ? retry_max_interval : current_interval * 2
total_interval += current_interval
end
puts exceptions_count
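The helper above sums the doubling retry intervals (4 + 8 + 16 + 32 + 64 seconds, then capped at the 128-second `retry_max_interval`) until the ten-minute target is passed, and prints how many sequential exceptions that represents. At the hard-coded defaults it prints 10, which lines up with the default `max_flush_exceptions` of 10 and the "a little over 10 minutes" comment in jdbc.rb.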

@@ -1,8 +1,10 @@
#!/bin/bash
wget http://search.maven.org/remotecontent?filepath=org/apache/derby/derby/10.12.1.1/derby-10.12.1.1.jar -O /tmp/derby.jar
sudo apt-get install mysql-server -qq -y
echo "create database logstash_output_jdbc_test;" | mysql -u root
sudo apt-get install mysql-server postgresql-client postgresql -qq -y
echo "create database logstash; grant all privileges on logstash.* to 'logstash'@'localhost' identified by 'logstash'; flush privileges;" | sudo -u root mysql
echo "create user logstash PASSWORD 'logstash'; create database logstash; grant all privileges on database logstash to logstash;" | sudo -u postgres psql
wget http://search.maven.org/remotecontent?filepath=mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar -O /tmp/mysql.jar
wget http://search.maven.org/remotecontent?filepath=org/xerial/sqlite-jdbc/3.8.11.2/sqlite-jdbc-3.8.11.2.jar -O /tmp/sqlite.jar
wget http://central.maven.org/maven2/org/postgresql/postgresql/42.1.4/postgresql-42.1.4.jar -O /tmp/postgres.jar

@@ -1,3 +1,5 @@
export JDBC_DERBY_JAR=/tmp/derby.jar
export JDBC_MYSQL_JAR=/tmp/mysql.jar
export JDBC_SQLITE_JAR=/tmp/sqlite.jar
export JDBC_POSTGRES_JAR=/tmp/postgres.jar

@@ -4,6 +4,34 @@ require 'stud/temporary'
require 'java'
require 'securerandom'
RSpec::Support::ObjectFormatter.default_instance.max_formatted_output_length = 80000
RSpec.configure do |c|
def start_service(name)
cmd = "sudo /etc/init.d/#{name}* start"
`which systemctl`
if $?.success?
cmd = "sudo systemctl start #{name}"
end
`#{cmd}`
end
def stop_service(name)
cmd = "sudo /etc/init.d/#{name}* stop"
`which systemctl`
if $?.success?
cmd = "sudo systemctl stop #{name}"
end
`#{cmd}`
end
end
RSpec.shared_context 'rspec setup' do
it 'ensure jar is available' do
expect(ENV[jdbc_jar_env]).not_to be_nil, "#{jdbc_jar_env} not defined, required to run tests"
@@ -20,7 +48,9 @@ RSpec.shared_context 'when initializing' do
end
RSpec.shared_context 'when outputting messages' do
let(:logger) { double("logger") }
let(:logger) {
double("logger")
}
let(:jdbc_test_table) do
'logstash_output_jdbc_test'
@@ -30,32 +60,76 @@ RSpec.shared_context 'when outputting messages' do
"DROP TABLE #{jdbc_test_table}"
end
let(:jdbc_statement_fields) do
[
{db_field: "created_at", db_type: "datetime", db_value: '?', event_field: '@timestamp'},
{db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
{db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
{db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
{db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
{db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
{db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
{db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
]
end
let(:jdbc_create_table) do
"CREATE table #{jdbc_test_table} (created_at datetime not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit bit not null)"
fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]} #{entry[:db_type]} not null" }.join(", ")
"CREATE table #{jdbc_test_table} (#{fields})"
end
let(:jdbc_drop_table) do
"DROP table #{jdbc_test_table}"
end
let(:jdbc_statement) do
["insert into #{jdbc_test_table} (created_at, message, message_sprintf, static_int, static_bit) values(?, ?, ?, ?, ?)", '@timestamp', 'message', 'sprintf-%{message}', 1, true]
fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]}" }.join(", ")
values = jdbc_statement_fields.collect { |entry| "#{entry[:db_value]}" }.join(", ")
statement = jdbc_statement_fields.collect { |entry| entry[:event_field] }
statement.insert(0, "insert into #{jdbc_test_table} (#{fields}) values(#{values})")
end
let(:systemd_database_service) do
nil
end
let(:event_fields) do
{ 'message' => "test-message #{SecureRandom.uuid}" }
let(:event) do
# TODO: Auto generate fields from jdbc_statement_fields
LogStash::Event.new({
message: "test-message #{SecureRandom.uuid}",
float: 12.1,
bigint: 4000881632477184,
bool: true,
int: 1,
bigdec: BigDecimal.new("123.123")
})
end
let(:event) { LogStash::Event.new(event_fields) }
let(:plugin) do
# Setup logger
allow(LogStash::Outputs::Jdbc).to receive(:logger).and_return(logger)
# XXX: Suppress reflection logging. There has to be a better way around this.
allow(logger).to receive(:debug).with(/config LogStash::/)
# Suppress beta warnings.
allow(logger).to receive(:info).with(/Please let us know if you find bugs or have suggestions on how to improve this plugin./)
# Suppress start up messages.
expect(logger).to receive(:info).once.with(/JDBC - Starting up/)
# Setup plugin
output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
output.register
output.logger = logger
output
end
before :each do
# Setup table
c = output.instance_variable_get(:@pool).getConnection
c = plugin.instance_variable_get(:@pool).getConnection
# Derby doesn't support IF EXISTS.
# Seems like the quickest solution. Bleurgh.
@@ -72,8 +146,16 @@ RSpec.shared_context 'when outputting messages' do
stmt.close
c.close
end
end
output
# Delete table after each
after :each do
c = plugin.instance_variable_get(:@pool).getConnection
stmt = c.createStatement
stmt.executeUpdate(jdbc_drop_table)
stmt.close
c.close
end
it 'should save a event' do
@@ -81,8 +163,11 @@ RSpec.shared_context 'when outputting messages' do
# Verify the number of items in the output table
c = plugin.instance_variable_get(:@pool).getConnection
# TODO replace this simple count with a check of the actual contents
stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
stmt.setString(1, event['message'])
stmt.setString(1, event.get('message'))
rs = stmt.executeQuery
count = 0
count = rs.getInt('total') while rs.next
@@ -93,43 +178,39 @@ RSpec.shared_context 'when outputting messages' do
end
it 'should not save event, and log an unretryable exception' do
e = LogStash::Event.new({})
e = event
original_event = e.get('message')
e.set('message', nil)
expect(logger).to receive(:error).once.with(/JDBC - Exception. Not retrying/, Hash)
expect { plugin.multi_receive([e]) }.to_not raise_error
expect { plugin.multi_receive([event]) }.to_not raise_error
e.set('message', original_event)
end
it 'it should retry after a connection loss, and log a warning' do
skip "does not run as a service" if systemd_database_service.nil?
skip "does not run as a service, or known issue with test" if systemd_database_service.nil?
p = plugin
# Check that everything is fine right now
expect { p.multi_receive([event]) }.not_to raise_error
# Start a thread to stop and restart the service.
stop_service(systemd_database_service)
# Start a thread to restart the service after the fact.
t = Thread.new(systemd_database_service) { |systemd_database_service|
start_stop_cmd = 'sudo /etc/init.d/%<service>s* %<action>s'
sleep 20
`which systemctl`
if $?.success?
start_stop_cmd = 'sudo systemctl %<action>s %<service>s'
end
cmd = start_stop_cmd % { action: 'stop', service: systemd_database_service }
`#{cmd}`
sleep 10
cmd = start_stop_cmd % { action: 'start', service: systemd_database_service }
`#{cmd}`
start_service(systemd_database_service)
}
# Wait a few seconds to the service to stop
sleep 5
t.run
expect(logger).to receive(:warn).at_least(:once).with(/JDBC - Exception. Retrying/, Hash)
expect { p.multi_receive([event]) }.to_not raise_error
# Wait for the thread to finish
t.join
end
end

@@ -2,17 +2,25 @@ require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: derby', if: ENV['JDBC_DERBY_JAR'] do
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
'JDBC_DERBY_JAR'
end
let(:jdbc_create_table) do
"CREATE table #{jdbc_test_table} (created_at timestamp not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit boolean not null)"
let(:jdbc_statement_fields) do
[
{db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
{db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
{db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
{db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
{db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
{db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
{db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
{db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
]
end
let(:jdbc_settings) do
{
'driver_class' => 'org.apache.derby.jdbc.EmbeddedDriver',

@@ -2,7 +2,6 @@ require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
@@ -16,7 +15,7 @@ describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
let(:jdbc_settings) do
{
'driver_class' => 'com.mysql.jdbc.Driver',
'connection_string' => 'jdbc:mysql://localhost/logstash_output_jdbc_test?user=root',
'connection_string' => 'jdbc:mysql://localhost/logstash?user=logstash&password=logstash',
'driver_jar_path' => ENV[jdbc_jar_env],
'statement' => jdbc_statement,
'max_flush_exceptions' => 1

@@ -0,0 +1,41 @@
require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: postgres', if: ENV['JDBC_POSTGRES_JAR'] do
include_context 'rspec setup'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
'JDBC_POSTGRES_JAR'
end
# TODO: Postgres doesn't kill connections fast enough for the test to pass
# Investigate options.
#let(:systemd_database_service) do
# 'postgresql'
#end
let(:jdbc_statement_fields) do
[
{db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
{db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
{db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
{db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
{db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
{db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
{db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
{db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
]
end
let(:jdbc_settings) do
{
'driver_class' => 'org.postgresql.Driver',
'connection_string' => 'jdbc:postgresql://localhost/logstash?user=logstash&password=logstash',
'driver_jar_path' => ENV[jdbc_jar_env],
'statement' => jdbc_statement,
'max_flush_exceptions' => 1
}
end
end

@@ -8,7 +8,6 @@ describe 'logstash-output-jdbc: sqlite', if: ENV['JDBC_SQLITE_JAR'] do
end
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do