Compare commits

27 commits:

- eacd2a2c38
- 309768c893
- aba4e08bf5
- e91db61e8c
- 5f2a99c4a6
- 8e73958359
- f48c12a8da
- 7a2996e985
- 4c04b8c24d
- fe982c95aa
- d1a733d195
- 21217f7b03
- 2bdb75f1b7
- 6b5398b152
- daebe44f32
- 079c3a6c78
- 3804eb59d2
- ef6ed66cdd
- 147cd3d67b
- 6affac0a0c
- 508c769650
- d36d659e16
- 51a04faca3
- cdd88fe322
- e74d67b477
- 710791c3aa
- ccb30c7edd
CHANGELOG.md — 12 changes

@@ -1,6 +1,18 @@
# Change Log

All notable changes to this project will be documented in this file, from 0.2.0.

## [5.3.0] - 2017-11-08
- Adds configuration options `enable_event_as_json_keyword` and `event_as_json_keyword`
- Adds BigDecimal support
- Adds additional logging for debugging purposes (with thanks to @mlkmhd's work)

## [5.2.1] - 2017-04-09
- Adds Array and Hash to_json support for non-sprintf syntax

## [5.2.0] - 2017-04-01
- Upgrades HikariCP to latest
- Fixes HikariCP logging integration issues

## [5.1.0] - 2016-12-17
- phoenix-thin fixes for issue #60
@@ -56,6 +56,8 @@ For development:

| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retryable, but non-standard, SQL state codes in its exceptions. | No | [] |
| event_as_json_keyword | String | The magic keyword the plugin looks for to convert the entire event into a JSON object. As Logstash does not support this out of the box with its `sprintf` implementation, you can use whatever this field is set to in the statement parameters. | No | @event |
| enable_event_as_json_keyword | Boolean | Enables the magic keyword set in the configuration option `event_as_json_keyword`. Without this enabled the plugin will not convert the `event_as_json_keyword` into a JSON encoding of the entire event. | No | False |

## Example configurations

Example Logstash configurations can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.
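A minimal sketch of how the two JSON-keyword options described above might be combined, assuming a target table with a column that accepts a JSON string (the table name, column name, jar path, and connection string here are hypothetical):

```
output {
  jdbc {
    driver_jar_path => '/opt/postgresql.jar'
    connection_string => 'jdbc:postgresql://localhost/logstash?user=logstash&password=logstash'
    enable_event_as_json_keyword => true
    statement => [ "INSERT INTO log (event_json) VALUES(?)", "@event" ]
  }
}
```

With `enable_event_as_json_keyword` set, the `@event` parameter is replaced by a JSON encoding of the whole event rather than being looked up as an event field.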
THANKS.md — new file, 18 lines

@@ -0,0 +1,18 @@
logstash-output-jdbc is a project originally created by Karl Southern
(the_angry_angel), but there are a number of people that have contributed
or implemented key features over time. We do our best to keep this list
up-to-date, but you can also have a look at the nice contributor graphs
produced by GitHub: https://github.com/theangryangel/logstash-output-jdbc/graphs/contributors

* [hordijk](https://github.com/hordijk)
* [dmitryakadiamond](https://github.com/dmitryakadiamond)
* [MassimoSporchia](https://github.com/MassimoSporchia)
* [ebuildy](https://github.com/ebuildy)
* [kushtrimjunuzi](https://github.com/kushtrimjunuzi)
* [josemazo](https://github.com/josemazo)
* [aceoliver](https://github.com/aceoliver)
* [roflmao](https://github.com/roflmao)
* [onesuper](https://github.com/onesuper)
* [phr0gz](https://github.com/phr0gz)
* [jMonsinjon](https://github.com/jMonsinjon)
* [mlkmhd](https://github.com/mlkmhd)
Vagrantfile (vendored) — 17 changes

@@ -1,20 +1,19 @@
 # -*- mode: ruby -*-
 # vi: set ft=ruby :

 JRUBY_VERSION = "jruby-1.7"

 Vagrant.configure(2) do |config|

   config.vm.define "debian" do |deb|
-    deb.vm.box = 'debian/jessie64'
+    deb.vm.box = 'debian/stretch64'
     deb.vm.synced_folder '.', '/vagrant', type: :virtualbox

     deb.vm.provision 'shell', inline: <<-EOP
-      echo "deb http://ftp.debian.org/debian jessie-backports main" | tee --append /etc/apt/sources.list > /dev/null
+      sed -i 's/main/main contrib non-free/g' /etc/apt/sources.list
       apt-get update
-      apt-get remove openjdk-7-jre-headless -y -q
-      apt-get install git openjdk-8-jre curl -y -q
-      gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
-      curl -sSL https://get.rvm.io | bash -s stable --ruby=jruby-1.7
+      apt-get install openjdk-8-jre ca-certificates-java git curl -y -q
+      curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
+      curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
       usermod -a -G rvm vagrant
     EOP
   end

@@ -27,8 +26,8 @@ Vagrant.configure(2) do |config|
     centos.vm.provision 'shell', inline: <<-EOP
       yum update
       yum install java-1.7.0-openjdk
-      gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
-      curl -sSL https://get.rvm.io | bash -s stable --ruby=jruby-1.7
+      curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -
+      curl -sSL https://get.rvm.io | bash -s stable --ruby=#{JRUBY_VERSION}
       usermod -a -G rvm vagrant
     EOP
   end
@@ -8,8 +8,25 @@ input
}
output {
  jdbc {
    driver_jar_path => '/opt/sqljdbc42.jar'
    connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
    statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, ?, ?)", "host", "@timestamp", "message" ]
  }
}
```

Another example, with mixed static strings and parameters, with thanks to [@MassimoSporchia](https://github.com/MassimoSporchia):

```
input
{
  stdin { }
}
output {
  jdbc {
    driver_jar_path => '/opt/sqljdbc42.jar'
    connection_string => "jdbc:sqlserver://server:1433;databaseName=databasename;user=username;password=password"
    statement => [ "INSERT INTO log (host, timestamp, message, comment) VALUES(?, ?, ?, 'static string')", "host", "@timestamp", "message" ]
  }
}
```
@@ -5,6 +5,8 @@ require 'concurrent'
 require 'stud/interval'
 require 'java'
 require 'logstash-output-jdbc_jars'
+require 'json'
+require 'bigdecimal'

 # Write events to a SQL engine, using JDBC.
 #

@@ -63,7 +65,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
   config :unsafe_statement, validate: :boolean, default: false

   # Number of connections in the pool to maintain
-  config :max_pool_size, validate: :number, default: 24
+  config :max_pool_size, validate: :number, default: 5

   # Connection timeout
   config :connection_timeout, validate: :number, default: 10000

@@ -99,6 +101,12 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
   config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
   config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'

+  # Allows the whole event to be converted to JSON
+  config :enable_event_as_json_keyword, validate: :boolean, default: false
+
+  # The magic key used to convert the whole event to JSON. If you need this, and you have the default in your events, you can use this to change your magic keyword.
+  config :event_as_json_keyword, validate: :string, default: '@event'

   def register
     @logger.info('JDBC - Starting up')

@@ -200,7 +208,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
     begin
       connection = @pool.getConnection
     rescue => e
-      log_jdbc_exception(e, true)
+      log_jdbc_exception(e, true, nil)
       # If a connection is not available, then the server has gone away
       # We're not counting that towards our retry count.
       return events, false

@@ -214,7 +222,7 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
       statement = add_statement_event_params(statement, event) if @statement.length > 1
       statement.execute
     rescue => e
-      if retry_exception?(e)
+      if retry_exception?(e, event.to_json())
         events_to_retry.push(event)
       end
     ensure

@@ -261,7 +269,9 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
   def add_statement_event_params(statement, event)
     @statement[1..-1].each_with_index do |i, idx|
-      if i.is_a? String
+      if @enable_event_as_json_keyword == true and i.is_a? String and i == @event_as_json_keyword
+        value = event.to_json
+      elsif i.is_a? String
         value = event.get(i)
         if value.nil? and i =~ /%\{/
           value = event.sprintf(i)

@@ -289,10 +299,15 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
       else
        statement.setInt(idx + 1, value)
       end
+     when BigDecimal
+      # TODO: There has to be a better way than this. Find it.
+      statement.setBigDecimal(idx + 1, java.math.BigDecimal.new(value.to_s))
      when Float
       statement.setFloat(idx + 1, value)
      when String
       statement.setString(idx + 1, value)
+     when Array, Hash
+      statement.setString(idx + 1, value.to_json)
      when true, false
       statement.setBoolean(idx + 1, value)
      else

@@ -303,20 +318,23 @@ class LogStash::Outputs::Jdbc < LogStash::Outputs::Base
     statement
   end
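The new BigDecimal branch above round-trips the value through a string because JRuby's BigDecimal and java.math.BigDecimal are distinct classes. A plain-Ruby sketch of why the string round-trip is safe (the java.math side needs JRuby, so only Ruby's BigDecimal is exercised here):

```ruby
require 'bigdecimal'

# A decimal value survives conversion to a string and back without
# loss, which is what the plugin relies on when it builds a
# java.math.BigDecimal from value.to_s.
value = BigDecimal("123.123")
round_trip = BigDecimal(value.to_s)

raise 'lossy round trip' unless round_trip == value
```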
-  def retry_exception?(exception)
+  def retry_exception?(exception, event)
     retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0,2]) or @retry_sql_states.include?(exception.getSQLState)))
-    log_jdbc_exception(exception, retrying)
+    log_jdbc_exception(exception, retrying, event)

     retrying
   end
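`retry_exception?` keys off the first two characters of the SQLSTATE, the class code. A standalone sketch of that check (the class list below is an illustrative subset, not the plugin's actual `RETRYABLE_SQLSTATE_CLASSES`, and `custom_states` stands in for the `retry_sql_states` option):

```ruby
# Illustrative subset of retryable SQLSTATE class codes:
# '08' = connection exception, '53' = insufficient resources.
RETRYABLE_CLASSES = %w[08 53].freeze

def retryable_sqlstate?(sqlstate, custom_states = [])
  RETRYABLE_CLASSES.include?(sqlstate.to_s[0, 2]) ||
    custom_states.include?(sqlstate)
end

retryable_sqlstate?('08S01')             # => true, connection class
retryable_sqlstate?('42000')             # => false, syntax error class
retryable_sqlstate?('42000', ['42000'])  # => true, user-supplied state
```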
-  def log_jdbc_exception(exception, retrying)
+  def log_jdbc_exception(exception, retrying, event)
     current_exception = exception
-    log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying') + '.'
+    log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying')

     log_method = (retrying ? 'warn' : 'error')

     loop do
-      @logger.send(log_method, log_text, :exception => current_exception)
+      # TODO reformat event output so that it only shows the fields necessary.
+
+      @logger.send(log_method, log_text, :exception => current_exception, :statement => @statement[0], :event => event)

       if current_exception.respond_to? 'getNextException'
         current_exception = current_exception.getNextException()
@@ -1,6 +1,6 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-output-jdbc'
-  s.version = '5.1.0'
+  s.version = '5.3.0'
   s.licenses = ['Apache License (2.0)']
   s.summary = 'This plugin allows you to output to SQL, via JDBC'
   s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-jdbc'. This gem is not a stand-alone program"

@@ -22,17 +22,17 @@ Gem::Specification.new do |s|
   s.metadata = { 'logstash_plugin' => 'true', 'logstash_group' => 'output' }

   # Gem dependencies
-  s.add_runtime_dependency 'logstash-core-plugin-api', '>= 1.60', '<= 2.99'
+  s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2'
   s.add_runtime_dependency 'stud'
   s.add_runtime_dependency 'logstash-codec-plain'

-  s.requirements << "jar 'com.zaxxer:HikariCP', '2.4.7'"
-  s.requirements << "jar 'org.slf4j:slf4j-log4j12', '1.7.21'"
+  s.requirements << "jar 'com.zaxxer:HikariCP', '2.7.2'"
+  s.requirements << "jar 'org.apache.logging.log4j:log4j-slf4j-impl', '2.6.2'"

   s.add_development_dependency 'jar-dependencies'
   s.add_development_dependency 'ruby-maven', '~> 3.3'

-  s.add_development_dependency 'logstash-devutils'
+  s.add_development_dependency "logstash-devutils", "~> 1.3", ">= 1.3.1"

   s.add_development_dependency 'rubocop', '0.41.2'
 end
@@ -1,8 +1,10 @@
 #!/bin/bash
 wget http://search.maven.org/remotecontent?filepath=org/apache/derby/derby/10.12.1.1/derby-10.12.1.1.jar -O /tmp/derby.jar

-sudo apt-get install mysql-server -qq -y
-echo "create database logstash_output_jdbc_test;" | mysql -u root
+sudo apt-get install mysql-server postgresql-client postgresql -qq -y
+echo "create database logstash; grant all privileges on logstash.* to 'logstash'@'localhost' identified by 'logstash'; flush privileges;" | sudo -u root mysql
+echo "create user logstash PASSWORD 'logstash'; create database logstash; grant all privileges on database logstash to logstash;" | sudo -u postgres psql

 wget http://search.maven.org/remotecontent?filepath=mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar -O /tmp/mysql.jar
 wget http://search.maven.org/remotecontent?filepath=org/xerial/sqlite-jdbc/3.8.11.2/sqlite-jdbc-3.8.11.2.jar -O /tmp/sqlite.jar
+wget http://central.maven.org/maven2/org/postgresql/postgresql/42.1.4/postgresql-42.1.4.jar -O /tmp/postgres.jar
@@ -1,3 +1,5 @@
 export JDBC_DERBY_JAR=/tmp/derby.jar
 export JDBC_MYSQL_JAR=/tmp/mysql.jar
 export JDBC_SQLITE_JAR=/tmp/sqlite.jar
+export JDBC_POSTGRES_JAR=/tmp/postgres.jar
@@ -4,6 +4,8 @@ require 'stud/temporary'
 require 'java'
 require 'securerandom'

+RSpec::Support::ObjectFormatter.default_instance.max_formatted_output_length = 80000
+
 RSpec.configure do |c|

 def start_service(name)

@@ -58,24 +60,53 @@ RSpec.shared_context 'when outputting messages' do
     "DROP TABLE #{jdbc_test_table}"
   end

+  let(:jdbc_statement_fields) do
+    [
+      {db_field: "created_at", db_type: "datetime", db_value: '?', event_field: '@timestamp'},
+      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
+      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
+      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
+      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
+      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
+      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
+      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
+    ]
+  end
+
   let(:jdbc_create_table) do
-    "CREATE table #{jdbc_test_table} (created_at datetime not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit bit not null, static_bigint bigint not null)"
+    fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]} #{entry[:db_type]} not null" }.join(", ")
+
+    "CREATE table #{jdbc_test_table} (#{fields})"
   end

+  let(:jdbc_drop_table) do
+    "DROP table #{jdbc_test_table}"
+  end
+
   let(:jdbc_statement) do
-    ["insert into #{jdbc_test_table} (created_at, message, message_sprintf, static_int, static_bit, static_bigint) values(?, ?, ?, ?, ?, ?)", '@timestamp', 'message', 'sprintf-%{message}', 1, true, 4000881632477184]
+    fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]}" }.join(", ")
+    values = jdbc_statement_fields.collect { |entry| "#{entry[:db_value]}" }.join(", ")
+    statement = jdbc_statement_fields.collect { |entry| entry[:event_field] }
+
+    statement.insert(0, "insert into #{jdbc_test_table} (#{fields}) values(#{values})")
   end
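The refactored helpers above derive the column list, the placeholder list, and the parameter list from the single jdbc_statement_fields array. A condensed plain-Ruby sketch of that derivation, using a made-up two-column subset and table name:

```ruby
fields = [
  { db_field: 'created_at', db_value: 'CAST(? as timestamp)', event_field: '@timestamp' },
  { db_field: 'message',    db_value: '?',                    event_field: 'message' }
]

columns = fields.map { |f| f[:db_field] }.join(', ')
values  = fields.map { |f| f[:db_value] }.join(', ')

# Parameter names first, then the SQL is prepended, matching the
# [statement, param, param, ...] shape the plugin's statement option expects.
statement = fields.map { |f| f[:event_field] }
statement.insert(0, "insert into logs (#{columns}) values(#{values})")

# statement == ["insert into logs (created_at, message) values(CAST(? as timestamp), ?)",
#               "@timestamp", "message"]
```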
   let(:systemd_database_service) do
     nil
   end

-  let(:event_fields) do
-    { message: "test-message #{SecureRandom.uuid}" }
-  end
+  let(:event) do
+    # TODO: Auto generate fields from jdbc_statement_fields
+    LogStash::Event.new({
+      message: "test-message #{SecureRandom.uuid}",
+      float: 12.1,
+      bigint: 4000881632477184,
+      bool: true,
+      int: 1,
+      bigdec: BigDecimal.new("123.123")
+    })
+  end

-  let(:event) { LogStash::Event.new(event_fields) }
-
   let(:plugin) do
     # Setup logger
     allow(LogStash::Outputs::Jdbc).to receive(:logger).and_return(logger)

@@ -93,8 +124,12 @@ RSpec.shared_context 'when outputting messages' do
     output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
     output.register

+    output
+  end
+
+  before :each do
+    # Setup table
-    c = output.instance_variable_get(:@pool).getConnection
+    c = plugin.instance_variable_get(:@pool).getConnection

     # Derby doesn't support IF EXISTS.
     # Seems like the quickest solution. Bleurgh.

@@ -111,8 +146,16 @@ RSpec.shared_context 'when outputting messages' do
       stmt.close
       c.close
     end
   end

-    output
+  # Delete table after each
+  after :each do
+    c = plugin.instance_variable_get(:@pool).getConnection
+
+    stmt = c.createStatement
+    stmt.executeUpdate(jdbc_drop_table)
+    stmt.close
+    c.close
+  end

   it 'should save a event' do

@@ -120,6 +163,9 @@ RSpec.shared_context 'when outputting messages' do

     # Verify the number of items in the output table
     c = plugin.instance_variable_get(:@pool).getConnection

+    # TODO replace this simple count with a check of the actual contents
+
     stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
     stmt.setString(1, event.get('message'))
     rs = stmt.executeQuery

@@ -143,7 +189,7 @@ RSpec.shared_context 'when outputting messages' do
   end

   it 'it should retry after a connection loss, and log a warning' do
-    skip "does not run as a service" if systemd_database_service.nil?
+    skip "does not run as a service, or known issue with test" if systemd_database_service.nil?

     p = plugin
@@ -2,15 +2,23 @@ require_relative '../jdbc_spec_helper'

 describe 'logstash-output-jdbc: derby', if: ENV['JDBC_DERBY_JAR'] do
   include_context 'rspec setup'
-  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do
     'JDBC_DERBY_JAR'
   end

-  let(:jdbc_create_table) do
-    "CREATE table #{jdbc_test_table} (created_at timestamp not null, message varchar(512) not null, message_sprintf varchar(512) not null, static_int int not null, static_bit boolean not null, static_bigint bigint not null)"
-  end
+  let(:jdbc_statement_fields) do
+    [
+      {db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
+      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
+      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
+      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
+      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
+      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
+      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
+      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
+    ]
+  end

   let(:jdbc_settings) do
@@ -2,7 +2,6 @@ require_relative '../jdbc_spec_helper'

 describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
   include_context 'rspec setup'
-  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do

@@ -16,7 +15,7 @@ describe 'logstash-output-jdbc: mysql', if: ENV['JDBC_MYSQL_JAR'] do
   let(:jdbc_settings) do
     {
       'driver_class' => 'com.mysql.jdbc.Driver',
-      'connection_string' => 'jdbc:mysql://localhost/logstash_output_jdbc_test?user=root',
+      'connection_string' => 'jdbc:mysql://localhost/logstash?user=logstash&password=logstash',
       'driver_jar_path' => ENV[jdbc_jar_env],
       'statement' => jdbc_statement,
       'max_flush_exceptions' => 1
spec/outputs/jdbc_postgres_spec.rb — new file, 41 lines

@@ -0,0 +1,41 @@
require_relative '../jdbc_spec_helper'

describe 'logstash-output-jdbc: postgres', if: ENV['JDBC_POSTGRES_JAR'] do
  include_context 'rspec setup'
  include_context 'when outputting messages'

  let(:jdbc_jar_env) do
    'JDBC_POSTGRES_JAR'
  end

  # TODO: Postgres doesnt kill connections fast enough for the test to pass
  # Investigate options.
  #let(:systemd_database_service) do
  #  'postgresql'
  #end

  let(:jdbc_statement_fields) do
    [
      {db_field: "created_at", db_type: "timestamp", db_value: 'CAST(? as timestamp)', event_field: '@timestamp'},
      {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
      {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
      {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
      {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
      {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
      {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
      {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
    ]
  end

  let(:jdbc_settings) do
    {
      'driver_class' => 'org.postgresql.Driver',
      'connection_string' => 'jdbc:postgresql://localhost/logstash?user=logstash&password=logstash',
      'driver_jar_path' => ENV[jdbc_jar_env],
      'statement' => jdbc_statement,
      'max_flush_exceptions' => 1
    }
  end
end
@@ -8,7 +8,6 @@ describe 'logstash-output-jdbc: sqlite', if: ENV['JDBC_SQLITE_JAR'] do
   end

   include_context 'rspec setup'
-  include_context 'when initializing'
   include_context 'when outputting messages'

   let(:jdbc_jar_env) do