Commit 21486fb

Test: remove doc type (#188)
* Test: remove doc type

  This commit removes the document type from the testing application, to match the updated Elasticsearch interface. Collaterally, it also updates the README and sets the execution bit of the testing app.

* Address review notes: correct spelling.
1 parent a6a9641 commit 21486fb

3 files changed: 27 additions, 31 deletions

README.md

Lines changed: 22 additions & 26 deletions
@@ -12,15 +12,12 @@ subject to the Elastic License are in the [libs](libs) directory.
 
 ## Supported platforms
 
-The desired target platform is Microsoft Windows, past and including version 7.
-Full support will include Linux and OSX, on both x86 and amd64 architectures.
+The currently supported platforms on both x86 and amd64 architectures are:
 
-For `alpha` release the supported platforms are:
+- Microsoft Windows 10
+- Microsoft Windows Server 2016
 
-- Windows 10
-- Windows Server 2016
-
-The installer will check the platform using `winver` and abort installation if not running on a supported platform.
+Support for other platforms might be added at a later time.
 
 ## Running Requirements
 
@@ -43,31 +40,30 @@ make).
 The driver makes use of the following libraries/headers:
 
 * ODBC-Specification
-  - this is the project that currently contains the ODBC specification,
-  including the headers defining the ODBC C API;
+  - this is the project that contains the ODBC specification, including the
+  headers defining the ODBC C API;
 * libcurl
   - the library is used for the HTTP(S) communication with Elasticsearch REST
-  endpoint;
+  API;
 * c-timestamp
-  - the library is used for parsing the ISO 8601 formated timestamps received
-  from Elasticsearch;
+  - library used for parsing ISO 8601 formated timestamps;
 * ujson4c
-  - fast scanner library for JSON.
+  - fast scanner library for JSON;
+* tinycbor
+  - a small CBOR encoder and decoder library.
 
 The required libraries are added as subtrees to the project, in the libs directory:
 ```
 somedirectory\
 |_elasticsearch-sql-odbc
-  |_README.md
   |_CMakeLists.txt
-  |_build.bat
-  |_driver
-  |_builds
+  |_...
   |_libs
    |_ODBC-Specification
    |_curl
    |_c-timestamp
    |_ujson4c
+   |_tinycbor
 ```
 
 
@@ -78,7 +74,7 @@ The required libraries are added as subtrees to the project, in the libs directo
 Building the driver requires the installation of Microsoft tools. These can be
 from the Visual Studio pack or with the [standalone tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/).
 
-Version 2017 Enterprize 15.5.2 is used to develop with, older versions
+Version 2017 Enterprise 15 is used to develop with, older versions
 should work fine too, with their corresponding modules. The lists of packages
 for MSVC 2017 are given below.
 
@@ -91,11 +87,11 @@ Required packages:
 * VC++ toolset
   - for the compiler;
 * C++/CLI support
-  - for the DSN editor C to C# CLI binding;
+  - for DSN editor's C-to-C# CLI binding;
 * C# support
-  - for the DSN editor C# form;
+  - for DSN editor's C# form;
 * F# support
-  - for the MSI packaging.
+  - for building the MSI package.
 
 Optional packages:
 
@@ -119,9 +115,8 @@ steps for building the ODBC driver.
 Some environment parameters can be set to customized its behavior (see start
 of script).
 
-The script can also take a set of parameters, run ```build.bat help``` to see
-what they mean. ```build.bat``` will build the driver itself, by invoking
-CMake and MSBuild, as needed. ```build.bat proper``` will clean the project to initial state. ```build.bat all tests``` will run the unit tests.
+The script will take a set of parameters, run ```build.bat help``` to see
+which these are.
 
 ## Testing
 
@@ -131,7 +126,8 @@ Testing the driver is done with unit tests and integration tests.
 
 The unit testing makes use of the Googletest framework. This is being fetched and built at testing time.
 
-The integration testing makes use of a Python application that requires the following packages installed:
+The integration testing makes use of a Python application that requires the
+following packages be installed:
 
 * Python3, both x86 and amd64 distributions
   - both x86 and x64 driver builds are tested;
@@ -149,4 +145,4 @@ For each of the two Python releases, the following packages must be installed:
 
 ## Installation
 
-See: https://www.elastic.co/guide/en/elasticsearch/sql-odbc/current/index.html
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/sql-odbc.html

test/integration/data.py

Lines changed: 5 additions & 5 deletions
@@ -234,7 +234,7 @@ def csv_to_ndjson(csv_text, index_name):
     stream = io.StringIO(csv_text)
     reader = csv.reader(stream, delimiter=',', quotechar='"')
 
-    index_string = '{"index": {"_index": "%s", "_type": "_doc"}}' % index_name
+    index_string = '{"index": {"_index": "%s"}}' % index_name
     ndjson = ""
 
     header_row = next(reader)
@@ -325,7 +325,7 @@ def _docs_to_ndjson_batch(self, docs, index_string):
         return ndjson
 
     def _docs_to_ndjson(self, index_name, docs):
-        index_json = {"index": {"_index": index_name, "_type": "_doc"}}
+        index_json = {"index": {"_index": index_name}}
        index_string = json.dumps(index_json)
 
         ndjsons = []
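For context on the two changes above, here is a minimal sketch of the NDJSON bulk body that a typeless index action produces; it is not code from the repository, and the index name and documents are made up:

```python
import json

def docs_to_ndjson(index_name, docs):
    """Build an NDJSON bulk body with typeless index actions (illustrative helper)."""
    # After this commit, the action metadata names only the target index;
    # the former '"_type": "_doc"' key is dropped.
    index_string = json.dumps({"index": {"_index": index_name}})
    lines = []
    for doc in docs:
        lines.append(index_string)      # action metadata line
        lines.append(json.dumps(doc))   # document source line
    # The bulk API expects the body to end with a newline.
    return "\n".join(lines) + "\n"

print(docs_to_ndjson("library", [{"title": "1984"}, {"title": "Dune"}]))
```

Each document thus contributes two lines to the payload: the action metadata and the document source.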
@@ -386,19 +386,19 @@ def _prepare_tableau_load(self, file_name, index_name, index_template):
 
     def _post_ndjson(self, ndjsons, index_name, pipeline_name=None):
         print("Indexing data for index '%s'." % index_name)
-        url = "%s/%s/_doc/_bulk" % (self._es.base_url(), index_name)
+        url = "%s/%s/_bulk" % (self._es.base_url(), index_name)
         if pipeline_name:
             url += "?pipeline=%s" % pipeline_name
         if type(ndjsons) is not list:
             ndjsons = [ndjsons]
         for n in ndjsons:
             with requests.post(url, data=n, headers = {"Content-Type": "application/x-ndjson"},
                     auth=self._es.credentials()) as req:
-                if req.status_code != 200:
+                if req.status_code not in [200, 201]:
                     raise Exception("bulk POST to %s failed with code: %s (content: %s)" % (index_name,
                         req.status_code, req.text))
                 reply = json.loads(req.text)
-                if reply["errors"]:
+                if reply.get("errors"):
                     raise Exception("bulk POST to %s failed with content: %s" % (index_name, req.text))
 
     def _wait_for_results(self, index_name):
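A hedged sketch of the corresponding request flow, mirroring the checks in the updated `_post_ndjson`; the function name, base URL and credentials below are placeholders, not the project's configuration:

```python
import json
import requests

def post_bulk(base_url, index_name, ndjson, auth=None):
    """POST an NDJSON body to the typeless bulk endpoint (illustrative helper)."""
    # Typeless URL: no /_doc segment between the index name and _bulk.
    url = "%s/%s/_bulk" % (base_url, index_name)
    req = requests.post(url, data=ndjson,
                        headers={"Content-Type": "application/x-ndjson"},
                        auth=auth)
    # Accept both 200 and 201, as the updated status check does.
    if req.status_code not in [200, 201]:
        raise Exception("bulk POST failed with code: %s" % req.status_code)
    reply = json.loads(req.text)
    # .get() avoids a KeyError if the response carries no "errors" field at all.
    if reply.get("errors"):
        raise Exception("bulk POST reported item errors: %s" % req.text)
    return reply
```

Relaxing the status check and reading the errors flag with `reply.get("errors")` keeps the helper tolerant of responses that return 201 or omit the field.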

test/integration/ites.py

File mode changed: 100644 → 100755.
