
Releases: Azure/azure-kusto-go

Fixed critical bug that didn't allow ingesting to different clusters

07 Feb 09:25
b89f168

Fixes

  • Critical bug: when ingesting to multiple clusters, all data was sent to a single cluster.
    As always, we recommend re-using clients and ingestors whenever possible (see the sketch below).
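
In practice, the recommendation above amounts to constructing one client and one ingestor per target cluster and re-using them across calls. Below is a minimal sketch, assuming the kusto.New(endpoint, authorization) and ingest.New(client, database, table) constructors together with go-autorest client-credential authentication; the endpoints, credentials, database/table names, and file paths are placeholders.

package main

import (
	"context"

	"github.com/Azure/azure-kusto-go/kusto"
	"github.com/Azure/azure-kusto-go/kusto/ingest"
	"github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
	ctx := context.Background()

	// Placeholder AAD application credentials.
	authorizer := kusto.Authorization{
		Config: auth.NewClientCredentialsConfig("clientID", "clientSecret", "tenantID"),
	}

	// One client and one ingestor per cluster, created once and re-used for every call.
	clientA, err := kusto.New("https://clusterA.kusto.windows.net", authorizer)
	if err != nil {
		panic(err)
	}
	ingestorA, err := ingest.New(clientA, "databaseA", "tableA")
	if err != nil {
		panic(err)
	}

	clientB, err := kusto.New("https://clusterB.kusto.windows.net", authorizer)
	if err != nil {
		panic(err)
	}
	ingestorB, err := ingest.New(clientB, "databaseB", "tableB")
	if err != nil {
		panic(err)
	}

	// With this fix, each ingestor sends its data to its own cluster.
	if _, err := ingestorA.FromFile(ctx, "/path/to/fileA"); err != nil {
		panic(err)
	}
	if _, err := ingestorB.FromFile(ctx, "/path/to/fileB"); err != nil {
		panic(err)
	}
}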

Lower memory consumption and control, Fixed and added formats, Security fixes

23 Jan 07:46
8f1ba97

Breaking Changes

  • The minimum supported Go version has been bumped from 1.13 to 1.16.
    Go 1.13 has been EOL for a while, and 1.16 is required for secure versions of our dependencies.

Features

  • Lower memory consumption when ingesting from a file: instead of using a huge static buffer to upload files to an Azure Storage blob, the default is now a synchronized buffer pool. This can reduce memory consumption by a factor of 10x.
  • A WithStaticBuffer ingestion option is available when the old behavior is desired, with added control over buffer count and size (see the sketch after this list).
  • Added support for the singlejson format
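
A hedged sketch of opting back into the static-buffer behavior, assuming an existing *kusto.Client and the WithStaticBuffer(bufferSize, maxBuffers) option signature; the buffer size and count shown are illustrative only.

// Assumes client is an existing *kusto.Client and ctx is a context.Context.
// The buffer size and count below are illustrative, not recommendations.
ingestor, err := ingest.New(client, "database", "table",
	ingest.WithStaticBuffer(8*1024*1024, 5)) // 8 MiB per buffer, at most 5 buffers
if err != nil {
	// Handle the construction error.
}

// File uploads now use the configured static buffers instead of the pooled default.
if _, err := ingestor.FromFile(ctx, "/path/to/file"); err != nil {
	// Handle the ingestion error.
}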

Fixes

  • Fixed a security alert for github.com/dgrijalva/jwt-go by updating dependencies to avoid the package
  • Fixed a security alert for github.com/satori/go.uuid by updating dependencies to avoid the package
  • Fixed typos that caused w3clog and sstream files to have the wrong extension

Fixed management query bugs

02 Nov 11:34
86303a3
Pre-release
  • Management queries now work correctly with complex queries or multiple tables. #55
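
For context, management queries in this SDK go through the client's Mgmt call. A minimal sketch, assuming an existing *kusto.Client, the github.com/Azure/azure-kusto-go/kusto/data/table package, and an illustrative statement.

// Assumes client is an existing *kusto.Client and ctx is a context.Context.
iter, err := client.Mgmt(ctx, "database", kusto.NewStmt(".show tables"))
if err != nil {
	// Handle the call failure (return before using iter).
}
defer iter.Stop()

// Responses with multiple tables or complex statements are now handled correctly.
err = iter.Do(func(row *table.Row) error {
	fmt.Println(row.Values)
	return nil
})
if err != nil {
	// Handle the iteration failure.
}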

Fixed synapse integration

13 Oct 06:43
8af3e50
Pre-release
  • The fix for Synapse URLs now works in all flows

Added CreationTime property

13 Jul 08:58
4095326
Pre-release

New features:

  • Added the ingestion property CreationTime, which allows specifying the creation time of ingested data in Kusto's extents. When unspecified, it defaults to now().

Added support for synapse clusters

05 Jul 08:19
5485a24
Pre-release
  • Added special support for Synapse by fixing the audience for URIs that contain azuresynapse-like segments

Added auth token caching, Fixed e2e tests

03 May 13:39
6cd2f75

Fixes:

  • Ingestion no longer calls the cluster for an identity token for each file/reader
  • E2E tests now run properly

Documentation for Ingestion Status Reporting (Breaking)

02 Dec 07:16
3887a94

Modified the documentation to reflect changes in the ingestion APIs

Add Support for Ingestion Status Reporting (Breaking)

29 Nov 08:23
7468aad

Table Based Ingestion Status Reporting - Breaking Change

You can use the Kusto Go SDK to get table-based status reporting of ingestion operations.
Ingestion commands now return an error and a channel that can be waited upon for a final status.
If the error is not nil, the operation has failed locally.
If the error is nil and the Table Status Reporting option was used, the SDK user can wait on the channel for a success (nil) or failure (error) status.

Note!
This feature is not suitable for users running ingestion at high rates, and may slow down the ingestion operation.

Usage:

// Upload a file with status reporting.
status, err := ingestor.FromFile(ctx, "/path/to/file", ingest.ReportResultToTable())
if err != nil {
	// The ingestion command failed to be sent; do something
}

err = <-status.Wait(ctx)
if err != nil {
	// The operation completed with an error
	if ingest.IsRetryable(err) {
		// Handle retries
	} else {
		// inspect the failure
		// statusCode, _ := ingest.GetIngestionStatus(err)
		// failureStatus, _ := ingest.GetIngestionFailureStatus(err)
	}
}

Fix Timespan marshal, Remove "show ingestion mappings" call

27 Sep 22:52
6b08777
  • Fixes Timespan so it marshals correctly (it had nanosecond precision, while Timespan has "tick" precision; see the sketch after this list)
  • Removes the "show ingestion mappings" calls that validated that mapping references exist. These were causing cluster issues because our last cache time wasn't being written. In the next release we are moving to ingestion status calls for debugging, which are significantly better.