98 Commits
v1.5 ... v5.1.0

Author SHA1 Message Date
Karl Southern
6bae1d81e3 Apache phoenix-thin support 2016-12-17 13:07:48 +00:00
Karl Southern
667e066d74 Release v5 2016-11-03 16:25:19 +00:00
Karl Southern
8e15bc5f45 Prepare for v5 2016-11-03 16:00:41 +00:00
Karl
43142287de Update ISSUE_TEMPLATE.md 2016-10-20 09:20:00 +01:00
Karl
2a6f048fa0 Update README.md
Removes incorrect information on unsafe_statement
2016-10-05 11:47:33 +01:00
Karl
318c1bd86a Update jdbc.rb
Logstash v5 nomenclature for threadsafety/concurrency change
2016-10-05 11:44:03 +01:00
Karl
3085606eb7 Update ISSUE_TEMPLATE.md 2016-09-27 17:59:32 +01:00
Karl Southern
f0d88a237f Start. Stop. Probably stop trying to do this before bed. 2016-09-15 22:00:34 +01:00
Karl Southern
ca1c71ea68 Travis 2016-09-15 21:32:34 +01:00
Karl Southern
cb4cefdfad More duh. 2016-09-15 21:02:05 +01:00
Karl Southern
44e1947f31 Duh. 2016-09-15 20:43:48 +01:00
Karl Southern
64a6bcfd55 Forward port #61 from v2.x branch. Try and address #55 2016-09-15 20:33:40 +01:00
Karl Southern
238ef153e4 Start of setup for Logstash's log4j2 integration 2016-09-02 19:01:28 +01:00
Karl Southern
3bdd8ef3a8 Wut? 2016-09-02 15:18:15 +01:00
Karl Southern
9164605aae Merge branch 'master' of github.com:theangryangel/logstash-output-jdbc 2016-09-02 14:53:11 +01:00
Karl Southern
d2f99b05d2 Remove log4j setup. 2016-09-02 14:48:01 +01:00
Karl
c110bdd551 Update README.md 2016-08-28 23:20:17 +01:00
Karl Southern
b3a6de6340 Rubocop no longer supports 1.9 with newer releases 2016-08-28 22:57:47 +01:00
Karl Southern
37631a62b7 Forward port connection_test configuration option from v2.x 2016-08-28 22:24:26 +01:00
Karl Southern
76e0f439a0 Bring across settings from v2.x 2016-07-13 17:44:47 +01:00
Karl Southern
43eb5d969d Multiple types in statement now supported 2016-07-07 11:00:33 +01:00
Karl Southern
0f37792177 Passing current tests for issue 46 2016-07-07 09:32:48 +01:00
Karl Southern
0e2e883cd1 Different api for v5. 2016-07-07 09:04:06 +01:00
Karl Southern
34708157f4 Provisionally address issue 46 for v5 2016-07-07 08:53:49 +01:00
Karl Southern
6c852d21dc Rollback from trusty. Not adding anything at this point. 2016-06-29 21:25:05 +01:00
Karl Southern
53eaee001d Detect if systemd is available for specs. Fallback to sysvinit. 2016-06-29 21:06:56 +01:00
Karl Southern
fe131f750e TravisCI trusty workaround 2016-06-29 20:44:58 +01:00
Karl Southern
f04e00019b Yeah. I'm an idiot 2016-06-29 20:40:11 +01:00
Karl Southern
867fd37805 Travis? Plz. 2016-06-29 20:38:47 +01:00
Karl Southern
b3e8d1a0f8 Fix mising bundler in travis-ci trusty 2016-06-29 20:35:29 +01:00
Karl Southern
25d14f2624 TravisCI sudo 2016-06-29 20:28:33 +01:00
Karl Southern
ab566ee969 Adds tests for connection loss exception handling, and unretryable SQL exceptions 2016-06-29 18:48:12 +01:00
Karl Southern
7d699e400c Bring fix from v2.x branch for exception retry handling NameError exception 2016-06-29 13:52:29 +01:00
Karl
542003e4e5 Update ISSUE_TEMPLATE.md 2016-06-22 09:20:01 +01:00
Karl
290aa63d2d Update ISSUE_TEMPLATE.md 2016-06-22 09:17:55 +01:00
Karl
a61dd21046 Create ISSUE_TEMPLATE.md 2016-06-22 09:09:35 +01:00
Karl
4a573cb599 Update logstash-output-jdbc.gemspec
Update jar deps. Since Logstash v5 requires Java 8 we don't need to worry about brettwooldridge/HikariCP#600 in this branch anymore.
2016-06-16 17:09:57 +01:00
Karl
80560f6692 Update .travis.yml
Match elastic/logstash travis file.
2016-06-16 16:57:39 +01:00
Karl
2c00c5d016 Update CHANGELOG.md
Merge changes from v2 branch CHANGELOG.md
2016-06-16 16:33:46 +01:00
Karl Southern
a26b6106d4 Trying out some new stuff. 2016-05-27 12:09:48 +01:00
Karl Southern
d6869f594c Trying something new. 2016-05-20 16:30:48 +01:00
Karl Southern
5ec985b0df Fancy 2016-05-17 18:25:35 +01:00
Karl Southern
5a1fdb7c7f Files to ignore 2016-05-17 17:25:37 +01:00
Karl Southern
b14d61ccf0 Pre-release checks test. 2016-05-17 17:24:25 +01:00
Karl Southern
baaeba3c07 Adds more specific error exception checks to tests 2016-05-17 16:31:35 +01:00
Karl Southern
d362e791e5 Rubocop, in preparation for pre-release rake task that sanity checks for release 2016-05-17 16:21:37 +01:00
Karl Southern
5f0f897114 Fix rakefile 2016-05-17 12:53:13 +01:00
Karl Southern
721e128f29 Add success state. Just incase. 2016-05-17 12:05:02 +01:00
Karl Southern
fe2e23ac27 More v5 adjustments 2016-05-17 11:29:49 +01:00
Karl Southern
85b3f31051 Update README 2016-05-14 22:29:32 +01:00
Karl Southern
e32b6e9bbd Switches to what I believe is the prefered method for retrying in logstash v5 2016-05-14 22:17:34 +01:00
Karl Southern
f1202f6454 Stop breaking the bundler cache 2016-05-13 22:38:30 +01:00
Karl Southern
26a32079f1 Adds Microsoft Always-On note and credit 2016-05-13 22:32:31 +01:00
Karl Southern
8d27e0f90d Changelog 2016-05-13 22:14:47 +01:00
Karl Southern
df811f3d29 Switches from slf4j-nop to log4j. Uses built in logstash log4j setup. Switches to jar-dependencies (vendor'ed) instead of version controlled jars. Update logstash-api to v2. Does not yet support multi_recieve 2016-05-13 22:12:57 +01:00
Karl Southern
d056093ab8 Update changelog 2016-05-03 17:42:47 +01:00
Karl Southern
e83af287f0 Fix examples 2016-05-03 17:11:28 +01:00
Karl Southern
0ff6f16ec7 Quick and dirty setup so it's a bit quicker to use vagrant. I'll clean this up more later. 2016-05-03 17:09:21 +01:00
Karl Southern
e6e9ac3b04 Addressing tests. 2016-05-03 15:55:36 +01:00
Karl Southern
707c005979 Tests. 2016-05-03 15:28:01 +01:00
Karl Southern
8f5ceb451a Merge v2.x fixes 2016-05-02 18:14:12 +01:00
Karl Southern
927e532b2a 0.2.6 2016-05-02 18:11:27 +01:00
Karl Southern
e6537d053f Switch master to logstash v5 development 2016-04-16 14:49:03 +01:00
Karl Southern
26a32a3f08 README update 2016-04-16 14:48:21 +01:00
Karl Southern
6bb84b165f Fecking version strings 2016-04-16 14:34:34 +01:00
Karl Southern
4e0292d222 rc1 for #36 2016-04-16 14:33:30 +01:00
Karl Southern
909cae01b3 Adds travis-ci badge 2016-04-12 11:20:19 +01:00
Karl Southern
6f2bd2ab3e Fiddling with travis-ci 2016-04-12 11:16:37 +01:00
Karl Southern
c5aeae1b02 Tags and versions are out of sequence. Bugger. 2016-04-11 18:22:11 +01:00
Karl Southern
a7d5a2e623 v0.2.4 2016-04-11 18:11:52 +01:00
Karl
3a64a22ac4 Merge pull request #32 from hordijk/patch-1
Fix toString method of LogStash::Timestamp
2016-04-11 17:21:25 +01:00
hordijk
c4b62769b9 Fix toString method of LogStash::Timestamp
According to LogStash::Timestamp (bb30cc773b/logstash-core-event/lib/logstash/timestamp.rb), iso8601 isn't supported, which results in an error if the logstash timestamp is used directly.

It should support to_s or to_iso8601.

 :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:255:in `add_statement_event_params'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:974:in `each_with_index'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:251:in `add_statement_event_params'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:203:in `safe_flush'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:200:in `safe_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:120:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.3/lib/logstash/outputs/jdbc.rb:113:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:305:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:305:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:236:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:in `start_workers'"], :level=>:warn}
2016-04-11 15:19:48 +02:00
Karl Southern
b9e5f64d40 Bump minor version to fix documentation 2016-04-07 08:40:14 +01:00
Karl
c0e358aafb Merge pull request #30 from hordijk/master
Fix incorrect configuration option in the README.md for driver_jar
With thanks to @hordijk
2016-04-07 08:38:27 +01:00
hordijk
442ddf16eb Update README.md
Fix issue in documentation: driver_jar is not supported, it should be driver_jar_path

If driver_jar is used, logstash will generate this error: message=>"Unknown setting 'driver_path' for jdbc"
Used driver_jar_path, which is what class LogStash::Outputs::Jdbc uses, instead.
2016-04-07 08:35:58 +02:00
Karl Southern
4e7985dafd Addresses #28 - connection timeout bug 2016-02-16 15:29:08 +00:00
Karl Southern
ae51d77f05 Move examples and split up connection code
Bump version
2015-12-30 12:05:05 +00:00
Karl Southern
529c98aadb Addresses 22 not giving warning about incorrectly configured statements 2015-12-23 10:06:50 +00:00
Karl Southern
bfcd9bf69a Addresses issue 26 2015-12-23 09:42:53 +00:00
Karl
af55fde54a Merge pull request #25 from ebuildy/patch-1
Add Apache Phoenix example from @ebuildy
2015-12-22 09:37:27 +00:00
Thomas Decaux
9e05a01dff Add Apache Phoenix example 2015-12-07 10:07:52 +01:00
Karl
064647607e Merge pull request #24 from dmitryakadiamond/MariaDB-working-example
Maria db working example kindly provided by @dmitryakadiamond
2015-12-04 18:28:58 +00:00
Dmitry Morozov
1ece7f9abc formatting fix 2015-12-04 13:20:58 +00:00
Dmitry Morozov
38b7096419 README.md updated 2015-12-04 13:16:19 +00:00
Karl Southern
eef7473a0b Pushing. 2015-11-22 23:19:29 +00:00
Karl Southern
7a1da5b7cd Fix exceptions counter 2015-11-22 18:57:13 +00:00
Karl Southern
e56176bbea Fix missing nil counter 2015-11-19 14:29:47 +00:00
Karl Southern
49e751a9f8 Have to start some real work. Will complete tests over lunch. 2015-11-18 10:06:11 +00:00
Karl Southern
9804850714 WIP 2015-11-17 10:32:16 +00:00
Karl Southern
a6c669cc52 Adds unsafe_statement support 2015-11-15 12:35:57 +00:00
Karl Southern
e615829310 Stupid gemspec version number bullshit 2015-11-14 20:09:38 +00:00
Karl Southern
362e9ad0a0 Adds connection pooling 2015-11-14 20:06:35 +00:00
Karl Southern
4994cd810b Bump version 2015-11-06 15:02:18 +00:00
Karl Southern
1487b41b3e In retrospective, when would nil ever enter the equation at all? 2015-11-06 15:01:02 +00:00
Karl
dd29d16a31 Update logstash-output-jdbc.gemspec
Bump version.
2015-11-06 14:56:14 +00:00
Karl
ebe5596469 Update jdbc.rb
Removes improper nil check which breaks event sprintf formatting examples
2015-11-06 14:55:24 +00:00
Karl Southern
275cd6fc2f v2.0 tested 2015-10-30 18:29:22 +00:00
Karl Southern
7da6317083 Initial untested v2.0 commit 2015-10-30 17:55:42 +00:00
30 changed files with 1055 additions and 252 deletions

26
.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,26 @@
<!--
Trouble installing the plugin under Logstash 2.4.0 with the message "duplicate gems"? See https://github.com/elastic/logstash/issues/5852
Please remember:
- I have not used every database engine in the world
- I have not got access to every database engine in the world
- Any support I provide is done in my own personal time which is limited
- Understand that I won't always have the answer immediately
Please provide as much information as possible.
-->
<!--- Provide a general summary of the issue in the Title above -->
## Expected & Actual Behavior
<!--- If you're describing a bug, tell us what should happen, and what is actually happening, and if necessary how to reproduce it -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version of plugin used:
* Version of Logstash used:
* Database engine & version you're connecting to:
* Have you checked you've met the Logstash requirements for Java versions?:

7
.gitignore vendored

@@ -2,4 +2,11 @@
Gemfile.lock
Gemfile.bak
.bundle
.vagrant
.mvn
vendor
lib/**/*.jar
.DS_Store
*.swp
*.log

25
.rubocop.yml Normal file

@@ -0,0 +1,25 @@
# I don't care for underscores in numbers.
Style/NumericLiterals:
Enabled: false
Style/ClassAndModuleChildren:
Enabled: false
Metrics/AbcSize:
Enabled: false
Metrics/CyclomaticComplexity:
Max: 9
Metrics/PerceivedComplexity:
Max: 10
Metrics/LineLength:
Enabled: false
Metrics/MethodLength:
Max: 50
Style/FileName:
Exclude:
- 'lib/logstash-output-jdbc_jars.rb'

13
.travis.yml Normal file

@@ -0,0 +1,13 @@
sudo: required
language: ruby
cache: bundler
rvm:
- jruby-1.7.25
jdk:
- oraclejdk8
before_script:
- bundle exec rake vendor
- bundle exec rake install_jars
- ./scripts/travis-before_script.sh
- source ./scripts/travis-variables.sh
script: bundle exec rspec

52
CHANGELOG.md Normal file

@@ -0,0 +1,52 @@
# Change Log
All notable changes to this project will be documented in this file, from 0.2.0.
## [5.1.0] - 2016-12-17
- phoenix-thin fixes for issue #60
## [5.0.0] - 2016-11-03
- logstash v5 support
## [0.3.1] - 2016-08-28
- Adds connection_test configuration option, to prevent the connection test from occurring, allowing the error to be suppressed.
Useful for cockroachdb deployments. https://github.com/theangryangel/logstash-output-jdbc/issues/53
## [0.3.0] - 2016-07-24
- Brings tests from v5 branch, providing greater coverage
- Removes bulk update support, due to inconsistent behaviour
- Plugin now marked as threadsafe, meaning only 1 instance per-Logstash
- Raises default max_pool_size to match the default number of workers (1 connection per worker)
## [0.2.10] - 2016-07-07
- Support non-string entries in statement array
- Adds backtrace to exception logging
## [0.2.9] - 2016-06-29
- Fix NameError exception.
- Moved log_jdbc_exception calls
## [0.2.7] - 2016-05-29
- Backport retry exception logic from v5 branch
- Backport improved timestamp compatibility from v5 branch
## [0.2.6] - 2016-05-02
- Fix for exception infinite loop
## [0.2.5] - 2016-04-11
### Added
- Basic tests running against DerbyDB
- Fix for converting Logstash::Timestamp to iso8601 from @hordijk
## [0.2.4] - 2016-04-07
- Documentation fixes from @hordijk
## [0.2.3] - 2016-02-16
- Bug fixes
## [0.2.2] - 2015-12-30
- Bug fixes
## [0.2.1] - 2015-12-22
- Support for connection pooling added through HikariCP
- Support for unsafe statement handling (allowing dynamic queries)
- Altered exception handling to now count sequential flushes with exceptions thrown

157
README.md

@@ -1,4 +1,7 @@
# logstash-output-jdbc
[![Build Status](https://travis-ci.org/theangryangel/logstash-output-jdbc.svg?branch=master)](https://travis-ci.org/theangryangel/logstash-output-jdbc)
This plugin is provided as an external plugin and is not part of the Logstash project.
This plugin allows you to output to SQL databases, using JDBC adapters.
@@ -6,112 +9,78 @@
See below for tested adapters, and example configurations.
This has not yet been extensively tested with all JDBC drivers and may not yet work for you.
If you do find this works for a JDBC driver without an example, let me know and provide a small example configuration if you can.
This plugin does not bundle any JDBC jar files, and does expect them to be in a
particular location. Please ensure you read the 4 installation lines below.
## Changelog
See CHANGELOG.md
## Versions
Released versions are available via rubygems, and typically tagged.
For development:
- See master branch for logstash v5
- See v2.x branch for logstash v2
- See v1.5 branch for logstash v1.5
- See v1.4 branch for logstash 1.4
## Installation
- Run `bin/logstash-plugin install logstash-output-jdbc` in your logstash installation directory
- Now either:
  - Use driver_jar_path in your configuration to specify a path to your jar file
- Or:
  - Create the directory vendor/jar/jdbc in your logstash installation (`mkdir -p vendor/jar/jdbc/`)
  - Add JDBC jar files to vendor/jar/jdbc in your logstash installation
- And then configure (examples can be found in the examples directory)
## Configuration options
| Option | Type | Description | Required? | Default |
| ------ | ---- | ----------- | --------- | ------- |
| driver_class | String | Specify a driver class if autoloading fails | No | |
| driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
| driver_jar_path | String | File path to jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
| connection_string | String | JDBC connection URL | Yes | |
| connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to suppress an error. Cockroach with the postgres JDBC driver is such an example. | No | Yes |
| connection_test_query | String | Connection test and init query string, required for some JDBC drivers that don't support isValid(). Typically you'd set this to "SELECT 1" | No | |
| username | String | JDBC username - this is optional as it may be included in the connection string, for many drivers | No | |
| password | String | JDBC password - this is optional as it may be included in the connection string, for many drivers | No | |
| statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared, all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (i.e. "@timestamp", or "host") or a formatted string (i.e. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
| max_pool_size | Number | Maximum number of connections to open to the SQL server at any 1 time | No | 5 |
| connection_timeout | Number | Number of seconds before a SQL connection is closed | No | 2800 |
| flush_size | Number | Maximum number of entries to buffer before sending to SQL - if this is reached before idle_flush_time | No | 1000 |
| max_flush_exceptions | Number | Number of sequential flushes which cause an exception, before the set of events are discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval, if your SQL server is not highly available | No | 10 |
| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retry-able, but non-standard SQL state codes in its exceptions. | No | [] |
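As a quick orientation, here is a minimal sketch of how a few of the options above combine in a single output block; the jar path, connection string and table/column names are placeholders to adapt, not a tested configuration:
```
output {
  jdbc {
    driver_jar_path => "/path/to/your-jdbc-driver.jar"
    connection_string => "jdbc:yourdb://localhost/yourdb?user=USER&password=PASSWORD"
    statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
    max_pool_size => 5
    retry_initial_interval => 2
    retry_max_interval => 128
  }
}
```
With these retry settings the wait between failed flushes doubles from 2 seconds up to the 128 second cap, so at the default max_flush_exceptions of 10 the plugin gives up after a little over 10 minutes.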
## Example configurations
Example logstash configurations can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.
If you have a working sample configuration for a DB that's not listed, pull requests are welcome.
## Development and Running tests
For development, tests are recommended to be run inside a virtual machine (a Vagrantfile is included in the repo), as they require access to various database engines and could completely destroy any data in a live system.
If you have vagrant available (this is temporary whilst I'm hacking on v5 support. I'll make this more streamlined later):
- `vagrant up`
- `vagrant ssh`
- `cd /vagrant`
- `gem install bundler`
- `cd /vagrant && bundle install && bundle exec rake vendor && bundle exec rake install_jars`
- `./scripts/travis-before_script.sh && source ./scripts/travis-variables.sh`
- `bundle exec rspec`
## Releasing
- Update Changelog
- Bump version in gemspec
- Commit
- Create tag `git tag v<version-number-in-gemspec>`
- `bundle exec rake install_jars`
- `bundle exec rake pre_release_checks`
- `gem build logstash-output-jdbc.gemspec`
- `gem push`

Rakefile

@@ -1 +1,11 @@
# encoding: utf-8
require 'logstash/devutils/rake'
require 'jars/installer'
require 'rubygems'
desc 'Fetch any jars required for this plugin'
task :install_jars do
ENV['JARS_HOME'] = Dir.pwd + '/vendor/jar-dependencies/runtime-jars'
ENV['JARS_VENDOR'] = 'false'
Jars::Installer.new.vendor_jars!(false)
end

36
Vagrantfile vendored Normal file

@@ -0,0 +1,36 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.define "debian" do |deb|
deb.vm.box = 'debian/jessie64'
deb.vm.synced_folder '.', '/vagrant', type: :virtualbox
deb.vm.provision 'shell', inline: <<-EOP
echo "deb http://ftp.debian.org/debian jessie-backports main" | tee --append /etc/apt/sources.list > /dev/null
sed -i 's/main/main contrib non-free/g' /etc/apt/sources.list
apt-get update
apt-get remove openjdk-7-jre-headless -y -q
apt-get install git openjdk-8-jre curl -y -q
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s stable --ruby=jruby-1.7
usermod -a -G rvm vagrant
EOP
end
config.vm.define "centos" do |centos|
centos.vm.box = 'centos/7'
centos.ssh.insert_key = false # https://github.com/mitchellh/vagrant/issues/7610
centos.vm.synced_folder '.', '/vagrant', type: :virtualbox
centos.vm.provision 'shell', inline: <<-EOP
yum update
yum install java-1.7.0-openjdk
gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s stable --ruby=jruby-1.7
usermod -a -G rvm vagrant
EOP
end
end


@@ -0,0 +1,17 @@
# Example: Apache Phoenix (HBase SQL)
* Tested with Ubuntu 14.04.03 / Logstash 2.1 / Apache Phoenix 4.6
* <!> HBase and Zookeeper must be both accessible from logstash machine <!>
* Please see apache-phoenix-thin-hbase-sql for phoenix-thin. The examples are different.
```
input
{
stdin { }
}
output {
jdbc {
connection_string => "jdbc:phoenix:ZOOKEEPER_HOSTNAME"
statement => [ "UPSERT INTO EVENTS log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```


@@ -0,0 +1,28 @@
# Example: Apache Phoenix-Thin (HBase SQL)
**There are special instructions for phoenix-thin. Please read carefully!**
* Tested with Logstash 5.1.1 / Apache Phoenix 4.9
* HBase and Zookeeper must be both accessible from logstash machine
* At time of writing phoenix-client does not include all the required jars (see https://issues.apache.org/jira/browse/PHOENIX-3476), therefore you must *not* use the driver_jar_path configuration option and instead:
- `mkdir -p vendor/jar/jdbc` in your logstash installation path
- copy `phoenix-queryserver-client-4.9.0-HBase-1.2.jar` from the phoenix distribution into this folder
- download the calcite jar from https://mvnrepository.com/artifact/org.apache.calcite/calcite-avatica/1.6.0 and place it into your `vendor/jar/jdbc` directory
* Use the following configuration as a base. The connection_test => false and connection_test_query are very important and should not be omitted. Phoenix-thin does not appear to support isValid and these are necessary for the connection to be added to the pool and be available.
```
input
{
stdin { }
}
output {
jdbc {
connection_test => false
connection_test_query => "select 1"
driver_class => "org.apache.phoenix.queryserver.client.Driver"
connection_string => "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF"
statement => [ "UPSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```

18
examples/cockroachdb.md Normal file

@@ -0,0 +1,18 @@
# Example: CockroachDB
- Tested using postgresql-9.4.1209.jre6.jar
- **Warning** cockroach is known to throw a warning on connection test (at time of writing), thus the connection test is explicitly disabled.
```
input
{
stdin { }
}
output {
jdbc {
driver_jar_path => '/opt/postgresql-9.4.1209.jre6.jar'
connection_test => false
connection_string => 'jdbc:postgresql://127.0.0.1:26257/test?user=root'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

16
examples/mariadb.md Normal file

@@ -0,0 +1,16 @@
# Example: MariaDB
* Tested with Ubuntu 14.04.3 LTS, Server version: 10.1.9-MariaDB-1~trusty-log mariadb.org binary distribution
* Tested using https://downloads.mariadb.com/enterprise/tqge-whfa/connectors/java/connector-java-1.3.2/mariadb-java-client-1.3.2.jar (mariadb-java-client-1.3.2.jar)
```
input
{
stdin { }
}
output {
jdbc {
connection_string => "jdbc:mariadb://HOSTNAME/DATABASE?user=USER&password=PASSWORD"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```

17
examples/mysql.md Normal file

@@ -0,0 +1,17 @@
# Example: Mysql
With thanks to [@jMonsinjon](https://github.com/jMonsinjon)
* Tested with Version 14.14 Distrib 5.5.43, for debian-linux-gnu (x86_64)
* Tested using http://dev.mysql.com/downloads/file.php?id=457911 (mysql-connector-java-5.1.36-bin.jar)
```
input
{
stdin { }
}
output {
jdbc {
driver_class => "com.mysql.jdbc.Driver"
connection_string => "jdbc:mysql://HOSTNAME/DATABASE?user=USER&password=PASSWORD"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST(? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

20
examples/odps.md Normal file

@@ -0,0 +1,20 @@
# Example: ODPS
With thanks to [@onesuper](https://github.com/onesuper)
```
input
{
stdin { }
}
output {
jdbc {
driver_class => "com.aliyun.odps.jdbc.OdpsDriver"
driver_auto_commit => false
connection_string => "jdbc:odps:http://service.odps.aliyun.com/api?project=meta_dev&loglevel=DEBUG"
username => "abcd"
password => "1234"
max_pool_size => 5
flush_size => 10
statement => [ "INSERT INTO test_logstash VALUES(?, ?, ?);", "host", "@timestamp", "message" ]
}
}
```

16
examples/oracle.md Normal file

@@ -0,0 +1,16 @@
# Example: Oracle
With thanks to [@josemazo](https://github.com/josemazo)
* Tested with Express Edition 11g Release 2
* Tested using http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html (ojdbc6.jar)
```
input
{
stdin { }
}
output {
jdbc {
connection_string => "jdbc:oracle:thin:USER/PASS@HOST:PORT:SID"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

15
examples/postgres.md Normal file
View File

@@ -0,0 +1,15 @@
# Example: Postgres
With thanks to [@roflmao](https://github.com/roflmao)
```
input
{
stdin { }
}
output {
jdbc {
connection_string => 'jdbc:postgresql://hostname:5432/database?user=username&password=password'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST (? AS timestamp), ?)", "host", "@timestamp", "message" ]
}
}
```

15
examples/sql-server.md Normal file

@@ -0,0 +1,15 @@
# Example: SQL Server
* Tested using http://msdn.microsoft.com/en-gb/sqlserver/aa937724.aspx
* Known to be working with Microsoft SQL Server Always-On Cluster (see https://github.com/theangryangel/logstash-output-jdbc/issues/37). With thanks to [@phr0gz](https://github.com/phr0gz)
```
input
{
stdin { }
}
output {
jdbc {
connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```

18
examples/sqlite.md Normal file

@@ -0,0 +1,18 @@
# Example: SQLite3
* Tested using https://bitbucket.org/xerial/sqlite-jdbc
* SQLite setup - `echo "CREATE table log (host text, timestamp datetime, message text);" | sqlite3 test.db`
```
input
{
stdin { }
}
output {
stdout { }
jdbc {
driver_class => "org.sqlite.JDBC"
connection_string => 'jdbc:sqlite:test.db'
statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
}
}
```

lib/logstash-output-jdbc_jars.rb

@@ -0,0 +1,5 @@
# encoding: utf-8
require 'logstash/environment'
root_dir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
LogStash::Environment.load_runtime_jars! File.join(root_dir, 'vendor')

lib/logstash/outputs/jdbc.rb

@@ -1,174 +1,335 @@
# encoding: utf-8
require 'logstash/outputs/base'
require 'logstash/namespace'
require 'concurrent'
require 'stud/interval'
require 'java'
require 'logstash-output-jdbc_jars'

# Write events to a SQL engine, using JDBC.
#
# It is upto the user of the plugin to correctly configure the plugin. This
# includes correctly crafting the SQL statement, and matching the number of
# parameters correctly.
class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
  concurrency :shared

  STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze

  RETRYABLE_SQLSTATE_CLASSES = [
    # Classes of retryable SQLSTATE codes
    # Not all in the class will be retryable. However, this is the best that
    # we've got right now.
    # If a custom state code is required, set it in retry_sql_states.
    '08', # Connection Exception
    '24', # Invalid Cursor State (Maybe retry-able in some circumstances)
    '25', # Invalid Transaction State
    '40', # Transaction Rollback
    '53', # Insufficient Resources
    '54', # Program Limit Exceeded (MAYBE)
    '55', # Object Not In Prerequisite State
    '57', # Operator Intervention
    '58', # System Error
  ].freeze

  config_name 'jdbc'

  # Driver class - Reintroduced for https://github.com/theangryangel/logstash-output-jdbc/issues/26
  config :driver_class, validate: :string

  # Does the JDBC driver support autocommit?
  config :driver_auto_commit, validate: :boolean, default: true, required: true

  # Where to find the jar
  # Defaults to not required, and to the original behaviour
  config :driver_jar_path, validate: :string, required: false

  # jdbc connection string
  config :connection_string, validate: :string, required: true

  # jdbc username - optional, maybe in the connection string
  config :username, validate: :string, required: false

  # jdbc password - optional, maybe in the connection string
  config :password, validate: :string, required: false

  # [ "insert into table (message) values(?)", "%{message}" ]
  config :statement, validate: :array, required: true

  # If this is an unsafe statement, use event.sprintf
  # This also has potential performance penalties due to having to create a
  # new statement for each event, rather than adding to the batch and issuing
  # multiple inserts in 1 go
  config :unsafe_statement, validate: :boolean, default: false

  # Number of connections in the pool to maintain
  config :max_pool_size, validate: :number, default: 24

  # Connection timeout
  config :connection_timeout, validate: :number, default: 10000

  # We buffer a certain number of events before flushing that out to SQL.
  # This setting controls how many events will be buffered before sending a
  # batch of events.
  config :flush_size, validate: :number, default: 1000

  # Set initial interval in seconds between retries. Doubled on each retry up to `retry_max_interval`
  config :retry_initial_interval, validate: :number, default: 2

  # Maximum time between retries, in seconds
  config :retry_max_interval, validate: :number, default: 128

  # Any additional custom, retryable SQL state codes.
  # Suitable for configuring retryable custom JDBC SQL state codes.
  config :retry_sql_states, validate: :array, default: []

  # Run a connection test on start.
  config :connection_test, validate: :boolean, default: true

  # Connection test and init string, required for some JDBC endpoints
  # notable phoenix-thin - see logstash-output-jdbc issue #60
  config :connection_test_query, validate: :string, required: false

  # Maximum number of sequential failed attempts, before we stop retrying.
  # If set to < 1, then it will infinitely retry.
  # At the default values this is a little over 10 minutes
  config :max_flush_exceptions, validate: :number, default: 10

  config :max_repeat_exceptions, obsolete: 'This has been replaced by max_flush_exceptions - which behaves slightly differently. Please check the documentation.'
  config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
  config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'

  def register
    @logger.info('JDBC - Starting up')

    load_jar_files!

    @stopping = Concurrent::AtomicBoolean.new(false)

    @logger.warn('JDBC - Flush size is set to > 1000') if @flush_size > 1000

    if @statement.empty?
      @logger.error('JDBC - No statement provided. Configuration error.')
    end

    if !@unsafe_statement && @statement.length < 2
      @logger.error("JDBC - Statement has no parameters. No events will be inserted into SQL as you're not passing any event data. Likely configuration error.")
    end

    setup_and_test_pool!
  end

  def multi_receive(events)
    events.each_slice(@flush_size) do |slice|
      retrying_submit(slice)
    end
  end

  def close
    @stopping.make_true
    @pool.close
    super
  end

  private

  def setup_and_test_pool!
    # Setup pool
    @pool = Java::ComZaxxerHikari::HikariDataSource.new

    @pool.setAutoCommit(@driver_auto_commit)
    @pool.setDriverClassName(@driver_class) if @driver_class

    @pool.setJdbcUrl(@connection_string)

    @pool.setUsername(@username) if @username
    @pool.setPassword(@password) if @password

    @pool.setMaximumPoolSize(@max_pool_size)
    @pool.setConnectionTimeout(@connection_timeout)

    validate_connection_timeout = (@connection_timeout / 1000) / 2

    if !@connection_test_query.nil? and @connection_test_query.length > 1
      @pool.setConnectionTestQuery(@connection_test_query)
      @pool.setConnectionInitSql(@connection_test_query)
    end

    return unless @connection_test

    # Test connection
    test_connection = @pool.getConnection
    unless test_connection.isValid(validate_connection_timeout)
      @logger.warn('JDBC - Connection is not reporting as validate. Either connection is invalid, or driver is not getting the appropriate response.')
    end
    test_connection.close
  end

  def load_jar_files!
    # Load jar from driver path
    unless @driver_jar_path.nil?
      raise LogStash::ConfigurationError, 'JDBC - Could not find jar file at given path. Check config.' unless File.exist? @driver_jar_path
      require @driver_jar_path
      return
    end

    # Revert original behaviour of loading from vendor directory
    # if no path given
    jarpath = if ENV['LOGSTASH_HOME']
                File.join(ENV['LOGSTASH_HOME'], '/vendor/jar/jdbc/*.jar')
              else
                File.join(File.dirname(__FILE__), '../../../vendor/jar/jdbc/*.jar')
              end

    @logger.trace('JDBC - jarpath', path: jarpath)

    jars = Dir[jarpath]
    raise LogStash::ConfigurationError, 'JDBC - No jars found. Have you read the README?' if jars.empty?

    jars.each do |jar|
      @logger.trace('JDBC - Loaded jar', jar: jar)
      require jar
    end
  end

  def submit(events)
    connection = nil
    statement = nil
    events_to_retry = []

    begin
      connection = @pool.getConnection
    rescue => e
      log_jdbc_exception(e, true)
      # If a connection is not available, then the server has gone away
      # We're not counting that towards our retry count.
      return events, false
    end

    events.each do |event|
      begin
        statement = connection.prepareStatement(
          (@unsafe_statement == true) ? event.sprintf(@statement[0]) : @statement[0]
        )
        statement = add_statement_event_params(statement, event) if @statement.length > 1
        statement.execute
      rescue => e
        if retry_exception?(e)
          events_to_retry.push(event)
        end
      ensure
        statement.close unless statement.nil?
      end
    end

    connection.close unless connection.nil?

    return events_to_retry, true
  end

  def retrying_submit(actions)
    # Initially we submit the full list of actions
    submit_actions = actions
    count_as_attempt = true

    attempts = 1

    sleep_interval = @retry_initial_interval
    while @stopping.false? and (submit_actions and !submit_actions.empty?)
      return if !submit_actions || submit_actions.empty? # If everything's a success we move along
      # We retry whatever didn't succeed
      submit_actions, count_as_attempt = submit(submit_actions)

      # Everything was a success!
      break if !submit_actions || submit_actions.empty?

      if @max_flush_exceptions > 0 and count_as_attempt == true
        attempts += 1

        if attempts > @max_flush_exceptions
          @logger.error("JDBC - max_flush_exceptions has been reached. #{submit_actions.length} events have been unable to be sent to SQL and are being dropped. See previously logged exceptions for details.")
          break
        end
      end

      # If we're retrying the action sleep for the recommended interval
      # Double the interval for the next time through to achieve exponential backoff
      Stud.stoppable_sleep(sleep_interval) { @stopping.true? }
      sleep_interval = next_sleep_interval(sleep_interval)
    end
  end

  def add_statement_event_params(statement, event)
    @statement[1..-1].each_with_index do |i, idx|
      if i.is_a? String
        value = event.get(i)
        if value.nil? and i =~ /%\{/
          value = event.sprintf(i)
        end
      else
        value = i
      end

      case value
      when Time
        # See LogStash::Timestamp, below, for the why behind strftime.
        statement.setString(idx + 1, value.strftime(STRFTIME_FMT))
      when LogStash::Timestamp
        # XXX: Using setString as opposed to setTimestamp, because setTimestamp
        # doesn't behave correctly in some drivers (Known: sqlite)
        #
        # Additionally this does not use `to_iso8601`, since some SQL databases
        # choke on the 'T' in the string (Known: Derby).
        #
        # strftime appears to be the most reliable across drivers.
        statement.setString(idx + 1, value.time.strftime(STRFTIME_FMT))
      when Fixnum, Integer
        if value > 2147483647 or value < -2147483648
          statement.setLong(idx + 1, value)
        else
          statement.setInt(idx + 1, value)
        end
      when Float
        statement.setFloat(idx + 1, value)
      when String
        statement.setString(idx + 1, value)
      when true, false
        statement.setBoolean(idx + 1, value)
      else
        statement.setString(idx + 1, nil)
      end
    end

    statement
  end

  def retry_exception?(exception)
    retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0,2]) or @retry_sql_states.include?(exception.getSQLState)))
    log_jdbc_exception(exception, retrying)

    retrying
  end

  def log_jdbc_exception(exception, retrying)
    current_exception = exception
    log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying') + '.'
    log_method = (retrying ? 'warn' : 'error')

    loop do
      @logger.send(log_method, log_text, :exception => current_exception)

      if current_exception.respond_to? 'getNextException'
        current_exception = current_exception.getNextException()
      else
        current_exception = nil
      end

      break if current_exception == nil
    end
  end

  def next_sleep_interval(current_interval)
    doubled = current_interval * 2
    doubled > @retry_max_interval ? @retry_max_interval : doubled
  end
end # class LogStash::Outputs::jdbc

17
log4j2.xml Normal file

@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Appenders>
<File name="file" fileName="log4j2.log">
<PatternLayout pattern="%d{yyyy-mm-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</File>
</Appenders>
<Loggers>
<!-- If we need to figure out whats happening for development purposes, disable this -->
<Logger name="com.zaxxer.hikari" level="off" />
<Root level="debug">
<AppenderRef ref="file"/>
</Root>
</Loggers>
</Configuration>

logstash-output-jdbc.gemspec

@@ -1,24 +1,38 @@
Gem::Specification.new do |s|
  s.name = 'logstash-output-jdbc'
  s.version = '5.1.0'
  s.licenses = ['Apache License (2.0)']
  s.summary = 'This plugin allows you to output to SQL, via JDBC'
  s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-jdbc'. This gem is not a stand-alone program"
  s.authors = ['the_angry_angel']
  s.email = 'karl+github@theangryangel.co.uk'
  s.homepage = 'https://github.com/theangryangel/logstash-output-jdbc'
  s.require_paths = ['lib']

  # Java only
  s.platform = 'java'

  # Files
  s.files = Dir.glob('{lib,spec}/**/*.rb') + Dir.glob('vendor/**/*') + %w(LICENSE.txt README.md)

  # Tests
  s.test_files = s.files.grep(%r{^(test|spec|features)/})

  # Special flag to let us know this is actually a logstash plugin
  s.metadata = { 'logstash_plugin' => 'true', 'logstash_group' => 'output' }

  # Gem dependencies
  s.add_runtime_dependency 'logstash-core-plugin-api', '>= 1.60', '<= 2.99'
  s.add_runtime_dependency 'stud'
  s.add_runtime_dependency 'logstash-codec-plain'

  s.requirements << "jar 'com.zaxxer:HikariCP', '2.4.7'"
  s.requirements << "jar 'org.slf4j:slf4j-log4j12', '1.7.21'"
  s.add_development_dependency 'jar-dependencies'
  s.add_development_dependency 'ruby-maven', '~> 3.3'

  s.add_development_dependency 'logstash-devutils'
  s.add_development_dependency 'rubocop', '0.41.2'
end

19
scripts/minutes_to_retries.rb Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env ruby -w
seconds_to_reach = 10 * 60
retry_max_interval = 128
current_interval = 2
total_interval = 0
exceptions_count = 1
loop do
break if total_interval > seconds_to_reach
exceptions_count += 1
current_interval = current_interval * 2 > retry_max_interval ? retry_max_interval : current_interval * 2
total_interval += current_interval
end
puts exceptions_count
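For reference: with the values above (2 second initial interval, 128 second cap, a 10 minute target) this loop prints 10, which appears to be where the default max_flush_exceptions of 10 and the "a little over 10 minutes" comment in jdbc.rb come from.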

scripts/travis-before_script.sh

@@ -0,0 +1,8 @@
#!/bin/bash
wget http://search.maven.org/remotecontent?filepath=org/apache/derby/derby/10.12.1.1/derby-10.12.1.1.jar -O /tmp/derby.jar
sudo apt-get install mysql-server -qq -y
echo "create database logstash_output_jdbc_test;" | mysql -u root
wget http://search.maven.org/remotecontent?filepath=mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar -O /tmp/mysql.jar
wget http://search.maven.org/remotecontent?filepath=org/xerial/sqlite-jdbc/3.8.11.2/sqlite-jdbc-3.8.11.2.jar -O /tmp/sqlite.jar

scripts/travis-variables.sh

@@ -0,0 +1,3 @@
export JDBC_DERBY_JAR=/tmp/derby.jar
export JDBC_MYSQL_JAR=/tmp/mysql.jar
export JDBC_SQLITE_JAR=/tmp/sqlite.jar

170
spec/jdbc_spec_helper.rb Normal file

@@ -0,0 +1,170 @@
require 'logstash/devutils/rspec/spec_helper'
require 'logstash/outputs/jdbc'
require 'stud/temporary'
require 'java'
require 'securerandom'
RSpec.configure do |c|
def start_service(name)
cmd = "sudo /etc/init.d/#{name}* start"
`which systemctl`
if $?.success?
cmd = "sudo systemctl start #{name}"
end
`#{cmd}`
end
def stop_service(name)
cmd = "sudo /etc/init.d/#{name}* stop"
`which systemctl`
if $?.success?
cmd = "sudo systemctl stop #{name}"
end
`#{cmd}`
end
end
RSpec.shared_context 'rspec setup' do
it 'ensure jar is available' do
expect(ENV[jdbc_jar_env]).not_to be_nil, "#{jdbc_jar_env} not defined, required to run tests"
expect(File.exist?(ENV[jdbc_jar_env])).to eq(true), "#{jdbc_jar_env} defined, but not valid"
end
end
RSpec.shared_context 'when initializing' do
it 'shouldn\'t register with a missing jar file' do
jdbc_settings['driver_jar_path'] = nil
plugin = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
expect { plugin.register }.to raise_error(LogStash::ConfigurationError)
end
end
RSpec.shared_context 'when outputting messages' do
let(:logger) {
double("logger")
}
let(:jdbc_test_table) do
'logstash_output_jdbc_test'
end
let(:jdbc_drop_table) do
"DROP TABLE #{jdbc_test_table}"
end
let(:jdbc_create_table) do
"CREATE table #{jdbc_test_table} (created_at datetime not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit bit not null, static_bigint bigint not null)"
end
let(:jdbc_statement) do
["insert into #{jdbc_test_table} (created_at, message, message_sprintf, static_int, static_bit, static_bigint) values(?, ?, ?, ?, ?, ?)", '@timestamp', 'message', 'sprintf-%{message}', 1, true, 4000881632477184]
end
let(:systemd_database_service) do
nil
end
let(:event_fields) do
{ message: "test-message #{SecureRandom.uuid}" }
end
let(:event) { LogStash::Event.new(event_fields) }
let(:plugin) do
# Setup logger
allow(LogStash::Outputs::Jdbc).to receive(:logger).and_return(logger)
# XXX: Suppress reflection logging. There has to be a better way around this.
allow(logger).to receive(:debug).with(/config LogStash::/)
# Suppress beta warnings.
allow(logger).to receive(:info).with(/Please let us know if you find bugs or have suggestions on how to improve this plugin./)
# Suppress start up messages.
expect(logger).to receive(:info).once.with(/JDBC - Starting up/)
# Setup plugin
output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
output.register
# Setup table
c = output.instance_variable_get(:@pool).getConnection
# Derby doesn't support IF EXISTS.
# Seems like the quickest solution. Bleurgh.
begin
stmt = c.createStatement
stmt.executeUpdate(jdbc_drop_table)
rescue
# noop
ensure
stmt.close
stmt = c.createStatement
stmt.executeUpdate(jdbc_create_table)
stmt.close
c.close
end
output
end
it 'should save a event' do
expect { plugin.multi_receive([event]) }.to_not raise_error
# Verify the number of items in the output table
c = plugin.instance_variable_get(:@pool).getConnection
stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
stmt.setString(1, event.get('message'))
rs = stmt.executeQuery
count = 0
count = rs.getInt('total') while rs.next
stmt.close
c.close
expect(count).to eq(1)
end
it 'should not save event, and log an unretryable exception' do
e = event
original_event = e.get('message')
e.set('message', nil)
expect(logger).to receive(:error).once.with(/JDBC - Exception. Not retrying/, Hash)
expect { plugin.multi_receive([event]) }.to_not raise_error
e.set('message', original_event)
end
it 'it should retry after a connection loss, and log a warning' do
skip "does not run as a service" if systemd_database_service.nil?
p = plugin
# Check that everything is fine right now
expect { p.multi_receive([event]) }.not_to raise_error
stop_service(systemd_database_service)
# Start a thread to restart the service after the fact.
t = Thread.new(systemd_database_service) { |systemd_database_service|
sleep 20
start_service(systemd_database_service)
}
t.run
expect(logger).to receive(:warn).at_least(:once).with(/JDBC - Exception. Retrying/, Hash)
expect { p.multi_receive([event]) }.to_not raise_error
# Wait for the thread to finish
t.join
end
end


@@ -0,0 +1,25 @@
require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: derby', if: ENV['JDBC_DERBY_JAR'] do
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
'JDBC_DERBY_JAR'
end
let(:jdbc_create_table) do
"CREATE table #{jdbc_test_table} (created_at timestamp not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit boolean not null, static_bigint bigint not null)"
end
let(:jdbc_settings) do
{
'driver_class' => 'org.apache.derby.jdbc.EmbeddedDriver',
'connection_string' => 'jdbc:derby:memory:testdb;create=true',
'driver_jar_path' => ENV[jdbc_jar_env],
'statement' => jdbc_statement,
'max_flush_exceptions' => 1
}
end
end


@@ -0,0 +1,25 @@
require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
'JDBC_MYSQL_JAR'
end
let(:systemd_database_service) do
'mysql'
end
let(:jdbc_settings) do
{
'driver_class' => 'com.mysql.jdbc.Driver',
'connection_string' => 'jdbc:mysql://localhost/logstash_output_jdbc_test?user=root',
'driver_jar_path' => ENV[jdbc_jar_env],
'statement' => jdbc_statement,
'max_flush_exceptions' => 1
}
end
end

11
spec/outputs/jdbc_spec.rb Normal file

@@ -0,0 +1,11 @@
require_relative '../jdbc_spec_helper'
describe LogStash::Outputs::Jdbc do
context 'when initializing' do
it 'shouldn\'t register without a config' do
expect do
LogStash::Plugin.lookup('output', 'jdbc').new
end.to raise_error(LogStash::ConfigurationError)
end
end
end


@@ -0,0 +1,27 @@
require_relative '../jdbc_spec_helper'
describe 'logstash-output-jdbc: sqlite', if: ENV['JDBC_SQLITE_JAR'] do
JDBC_SQLITE_FILE = '/tmp/logstash_output_jdbc_test.db'.freeze
before(:context) do
File.delete(JDBC_SQLITE_FILE) if File.exist? JDBC_SQLITE_FILE
end
include_context 'rspec setup'
include_context 'when initializing'
include_context 'when outputting messages'
let(:jdbc_jar_env) do
'JDBC_SQLITE_JAR'
end
let(:jdbc_settings) do
{
'driver_class' => 'org.sqlite.JDBC',
'connection_string' => "jdbc:sqlite:#{JDBC_SQLITE_FILE}",
'driver_jar_path' => ENV[jdbc_jar_env],
'statement' => jdbc_statement,
'max_flush_exceptions' => 1
}
end
end