Introduction
Exporting data to tabular file formats can be tedious and cumbersome, especially when the business wants reports covering the vast majority of system functionality. Writing every export method directly against an imperative API soon makes the code verbose, error-prone, and hard to read and maintain. In such cases you want to hide implementation details behind abstractions, but that is additional effort which is rarely welcome.
Tabulate tries to mitigate the above problems with the help of Kotlin, its type-safe DSL builders and extension functions.
Key concepts
Table model.
The table model defines how the table will look after data export. Its building blocks are:
- column - defines a single column in the table,
- row - may be a user-defined custom row, or a row that carries attributes enriching an existing record,
- row cell - defines a cell within a row. A cell is bound to a column via the column id,
- attribute - introduces extensions to the model.
The table model is an internal concept and is not exposed to API consumers (only the attribute model can be exposed, as it is extensible and customizable). A table is always built using table builders, as follows:
productList.tabulate("file.xlsx") {
    name = "Table id" (1)
    columns { (2)
        column("nr")
    }
    rows { (3)
        row { // first row when no index provided.
            cell("nr") { value = "Nr.:" } (4)
        }
    }
}
| 1 | First we give the table a name. It can be used by the exporter, e.g. to add metadata like a sheet name. |
| 2 | Second, we can provide column definitions. A column definition can aggregate ColumnAttributes as well as CellAttributes. All attributes associated with a particular column apply to each cell in that column. Specifying columns can also make the table layout more readable. |
| 3 | The next step is to define table rows. Here we can create additional custom rows (like a header or footer) or enhance the table's look and feel with attributes associated with a particular row. |
| 4 | Each row can contain at most as many cells as there are columns. Similarly to a row, a cell may be used to assign cell attributes to a selected cell within the row. You can also create a cell with a custom predefined or computed value. |
Above, we have created a table definition with a single column and one row containing a single cell. The cell binds to the column by a column identifier, which in our case is a simple text identifier.
This is a very basic example. To gain more power you will need to start using attributes.
Attributes are plain objects with inner properties that extend the base model. Attributes can be mounted at multiple levels: table, column, row and single cell.
Example with attributes included:
productList.tabulate("file.xlsx") {
    name = "Table id"
    attributes {
        filterAndSort {} (1)
    }
    columns {
        column("nr") {
            attributes { width { px = 40 } } (2)
        }
        column(Product::code) {
            attributes { width { auto = true } }
            attributes {
                text {
                    weight = DefaultWeightStyle.BOLD (3)
                }
            }
        }
    }
    rows {
        row { // first row when no explicit index provided.
            cell("nr") {
                value = "Nr.:"
                attributes {
                    text { (4)
                        fontFamily = "Times New Roman"
                        fontColor = Colors.BLACK
                        fontSize = 12
                    }
                    background { color = Colors.BLUE }
                }
            }
        }
    }
}
| 1 | A top-level table attribute (TableAttribute). |
| 2 | A column-level ColumnAttribute that defines the width of the entire column. |
| 3 | A column-level CellAttribute, applicable to every cell in the particular column. |
| 4 | A cell-level attribute. This is the lowest possible level at which custom attributes can be mounted. Only a CellAttribute can be used at this level. |
Table DSL API - type-safe builders.
Kotlin type-safe builders fit well into describing table structure. They make source code more concise and readable, and development becomes easier. At coding time, your IDE makes use of the type safety offered by builders and shows completion hints, which elevates the developer experience. Almost zero documentation is required to start; you can start playing with the API right away.
By convention, DSL functions take lambdas with receivers as arguments, which abstracts internal API instantiation details away from consumers. Within a lambda you can call other API methods, which in turn can take downstream builders as arguments. This way we end up with a multi-level DSL API structure where each level is extensible via Kotlin extension functions. At each DSL level you are allowed to invoke receiver scope methods and access lexical scope variables, which can lead to interesting results:
val additionalProducts = ... (1)
tabulate {
    name = "Products table"
    rows {
        header("Code", "Name", "Description", "Manufacturer") (2)
        additionalProducts.forEach { (3)
            row {
                cell { value = it.code }
                cell { value = it.name }
                cell { value = it.description }
                cell { value = it.manufacturer }
            }
        }
    }
}.export("products.xlsx")
| 1 | Here we use the additionalProducts val, which is the collection of elements to be exported. |
| 2 | After that, we define a header, given that our table definition does not provide one. |
| 3 | Finally, we iterate over the collection elements to build a static table model. |
Although it is possible to build row definitions by iterating over a collection directly, you should always prefer column-scoped cell value providers. They are much faster and consume much less memory than the approach shown in callout 3.
As already said, it is possible to extend each DSL level by using extension functions on the DSL API builder classes.
Take the example from the previous section:
tabulate {
    rows {
        header("Code", "Name", "Description", "Manufacturer")
    }
}.export("products.xlsx")
The .header function is implemented as follows:
fun <T> RowsBuilderApi<T>.header(vararg names: String) =
    newRow(0) { (1)
        cells {
            names.forEach {
                cell { value = it }
            }
        }
    }
| 1 | Calling the .newRow(0) method of RowsBuilderApi internally ensures that the .header extension function always defines a custom row at index 0. |
This way you can create various shortcuts and templates, making the DSL vocabulary richer and more expressive.
It is worth mentioning that when using extension functions on DSL builders, the receiver scope is restricted by the DslMarker annotation, so it is not possible to break the table definition by calling methods from upstream builders.
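The scope restriction described above can be sketched with a minimal, self-contained builder. The names here (TableDsl, TableScope, RowsScope, table) are illustrative stand-ins, not Tabulate's actual classes:

```kotlin
// Hypothetical builders illustrating @DslMarker scope restriction.
@DslMarker
annotation class TableDsl

@TableDsl
class RowsScope {
    val rows = mutableListOf<String>()
    fun row(label: String) { rows += label }
}

@TableDsl
class TableScope {
    var name: String = ""
    val collected = mutableListOf<String>()
    fun rows(block: RowsScope.() -> Unit) {
        collected += RowsScope().apply(block).rows
    }
}

fun table(block: TableScope.() -> Unit): TableScope = TableScope().apply(block)

fun main() {
    val t = table {
        name = "demo"
        rows {
            row("header")
            // name = "oops"  // does not compile: the outer receiver is
            //                // inaccessible because both scopes share @TableDsl
        }
    }
    println(t.collected) // [header]
}
```

Because both receiver classes carry the same marker annotation, the compiler hides the outer `TableScope` receiver inside the `rows { … }` block, which is exactly what prevents calls to upstream builders.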
Column-scoped cell value providers.
The Column API makes it possible to pass a property getter reference as a column key. This creates a binding between an object property and a column, which is applied later at run time to evaluate cell values.
productsRepository.loadProductsByDate(now()).tabulate("file/path/products.xlsx") {
    name = "Products table"
    columns {
        column(Product::code)
        column(Product::name)
        column(Product::description)
    }
}
Using a property getter as a column key kills two birds with one stone:
- it allows referencing the column later in a cell builder,
- it allows extracting the collection element's property value when the row context is built for rendering.
The presence of column-scoped cell value providers in the table definition removes the requirement for explicit row definitions.
It is enough to use the Product::code getter reference as a column key to determine the value of each consecutive row cell.
You are still allowed to define new rows explicitly (by calling newRow with an index value or a row index predicate) or to
provide extensions to existing rows (by calling matching { <row record predicate> } assign { … }).
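The core idea behind property-based column bindings can be sketched in plain Kotlin: a `KProperty1` reference serves both as a column identifier and as the value extractor at run time. `Product` and `renderRow` below are illustrative, not the library's internals:

```kotlin
import kotlin.reflect.KProperty1

// Illustrative record type; any data class works the same way.
data class Product(val code: String, val name: String)

// A column keyed by a property getter can both identify the column and
// extract the cell value from each record as rows are rendered.
fun <T> renderRow(record: T, columns: List<KProperty1<T, *>>): List<String> =
    columns.map { it.get(record).toString() }

fun main() {
    val columns = listOf(Product::code, Product::name)
    println(renderRow(Product("P-001", "Vacuum cleaner"), columns))
    // [P-001, Vacuum cleaner]
}
```

This is why no explicit row definition is needed: each collection element is turned into a row by applying the bound getters column by column.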
Row predicates.
Row predicates allow applying row definitions only when specific conditions are met. This way you can insert custom rows at a specific index or index range, or enrich a dynamic data row with custom attributes. There are two kinds of predicates:
- row index predicates, which are used to define custom rows only (like a header or footer),
- row record predicates, which are used to enrich an existing row (custom or dynamic data) with additional attributes.
Row index predicates.
You have already seen how the .header extension function is implemented. Internally it invokes .newRow(0), which requests rendering of a row at index 0. What if you want to apply an entire row definition to several indices?
You may repeat the .newRow() invocation as many times as required, but there is a better option.
You can use a row index predicate as follows:
atIndex { gt(0) and lt(100) } newRow { (1)
    cell { expression = RowCellExpression { "index : ${it.rowIndex.getIndex()}" } } (2)
}
| 1 | We start the row line with the atIndex { … } method, which takes the row index predicate gt(0) and lt(100). It literally says: 'apply this row definition to all indices greater than 0 and lower than 100'. The trailing newRow keyword delivers the row definition from within the curly braces. |
| 2 | This line is the definition of the row to be created for each matching index. It contains a single cell with a runtime expression evaluated at context rendering time. |
There is also an alternative notation to achieve the same result:
newRow({ gt(0) and lt(100) }) {
    cell { expression = RowCellExpression { "index : ${it.rowIndex.getIndex()}" } }
}
| One important thing to remember about row index predicates is that they are always defined as data structures, not as predicate functions. A data structure can be materialized into an internal map with row indices as keys, which enables fast lookup. This makes it much faster than iterating over available predicate functions and evaluating them each time the next row is requested (which would otherwise be required to synthesize the applicable row definition). We also lose no flexibility for custom rows, because their indices must be known at definition time anyway, and the dynamic data context cannot add value here. |
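Why a predicate data structure beats a predicate function can be sketched as follows: a closed-form index range is expanded into a map once, giving constant-time lookup per requested row. The types below are illustrative, not the library's internal representation:

```kotlin
// Illustrative index predicates as data structures, not functions.
sealed class IndexPredicate {
    data class Exact(val index: Int) : IndexPredicate()
    data class Range(val fromExclusive: Int, val toExclusive: Int) : IndexPredicate()
}

// Materialize predicate-keyed row definitions into an index-keyed map once,
// so that resolving the next row is a single map lookup.
fun materialize(definitions: Map<IndexPredicate, String>): Map<Int, MutableList<String>> {
    val byIndex = mutableMapOf<Int, MutableList<String>>()
    definitions.forEach { (predicate, rowDef) ->
        val indices = when (predicate) {
            is IndexPredicate.Exact -> listOf(predicate.index)
            is IndexPredicate.Range ->
                ((predicate.fromExclusive + 1) until predicate.toExclusive).toList()
        }
        indices.forEach { byIndex.getOrPut(it) { mutableListOf() }.add(rowDef) }
    }
    return byIndex
}

fun main() {
    val lookup = materialize(
        mapOf(
            IndexPredicate.Exact(0) to "header",
            IndexPredicate.Range(0, 4) to "numbered row"
        )
    )
    println(lookup[0]) // [header]
    println(lookup[2]) // [numbered row]
    println(lookup[5]) // null: no definition for this index
}
```

With function predicates, every requested row would require re-evaluating every registered predicate; with materialized data structures, the cost is paid once up front.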
Record row predicates
Record predicates differ from row index predicates in that they cannot be used to insert new custom rows. They can only enrich an existing row, that is:
- a custom row created by the newRow API method,
- or a row derived from a collection element (always produced by a column-scoped cell value provider binding).
| Record row predicates are always represented by a predicate function that checks whether the currently processed record or custom row meets specific conditions. |
On API level we can define row predicate in two ways:
(1)
matching { <predicate> } assign {
// row attributes, cells definition
}
(2)
row({ <predicate> }) {
// row attributes, cells definition
}
| 1 | The first method is closer to natural language but takes more space. It also does not mention row, so it may not be intuitive for some users. |
| 2 | The second method leads with the DSL keyword row, which is desirable, but because the predicate and the row builder are both lambdas, we are forced into syntax like ({ … }), which I personally do not like in Kotlin. |
Mixing custom rows with collection elements.
Tabulate makes it possible to define a table consisting only of custom rows that are known at build time.
It also allows you to generate a table where each row is dynamically computed from a collection of any type.
What is more, nothing stops you from using both techniques in a single table export:
contracts.tabulate("contracts.xlsx") {
    name = "Active Contracts"
    (1)
    columns {
        column(Contract::client)
        column(Contract::contractCode)
        column(Contract::contractLength)
        column(Contract::dateSigned)
    }
    rows {
        (2)
        header {
            columnTitles(
                "Client",
                "Code",
                "Contract Length",
                "Date Signed",
            )
        }
    }
}
| 1 | To export a collection of elements, all we need to do is define column bindings with getter property references as identifiers. As long as no custom rows are defined in the 'rows' section, all rows in the table originate from collection elements. |
| 2 | If you declare a custom row at a specific index (or matching an index predicate), it takes precedence over the dynamic rows generated from the collection. So if you declare a header row, it will be the very first row in the exported table; but when you write newRow(2), this creates a new custom row as the third one. Rows 0 and 1 will then be reserved for dynamic data (collection elements), as long as no other custom row declarations match earlier indices. |
There are still cases where this flexibility is not enough. How can we define a custom row that is rendered after all dynamic data? We cannot use an index-based predicate, because we cannot tell the size of the collection in advance. The solution is the multi-pass RowIndex cursor used by the context iterator. This RowIndex contains an additional 'step' component, which increments once there are no more row index definitions for the current pass. After the step advances, its local step-scoped index is reset to zero (this counter increments for each row matching the current pass). The global-scope row index is still maintained to support predicates that use it.
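The bookkeeping described above can be modeled as a small value type: a global index that always grows, plus a step with its own local counter that resets when the iterator advances to the next pass. This is a simplified illustration, not the library's actual RowIndex implementation:

```kotlin
// Illustrative multi-pass row index: `global` spans the whole table,
// `local` is relative to the current step and resets when the step advances.
data class RowIndex(val global: Int = 0, val step: Int = 0, val local: Int = 0) {
    // Consume one row within the current pass.
    fun nextRow() = copy(global = global + 1, local = local + 1)

    // No more row definitions for this pass: advance to the next step
    // (e.g. trailing rows) and restart the step-scoped counter.
    fun advanceStep() = copy(step = step + 1, local = 0)
}

fun main() {
    var index = RowIndex()
    repeat(3) { index = index.nextRow() } // three dynamic data rows
    index = index.advanceStep()           // switch to the trailing pass
    // The global index keeps growing across passes; local restarts at 0,
    // so a footer defined "at index 0 of the trailing step" now matches.
    println(index) // RowIndex(global=3, step=1, local=0)
}
```

This is how a footer can be addressed without knowing the collection size: its index is expressed relative to the trailing step, not to the global row count.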
Here is how You can add footer row:
contracts.tabulate("contracts.xlsx") {
    name = "Active Contracts"
    columns {
        column(Contract::client)
        column(Contract::contractValue)
    }
    rows {
        header("Client", "Contract Value")
        (1)
        footer {
            cell { value = "Summary:" }
            cell { value = "=SUM" }
        }
    }
}
| 1 | In the above example, footer is an extension function just like header, but with one small difference: |
fun <T> RowsBuilderApi<T>.footer(block: RowBuilderApi<T>.() -> Unit) {
    newRow(0, AdditionalSteps.TRAILING_ROWS, block) (1)
}
| 1 | As you can see above, it uses an additional method argument: AdditionalSteps.TRAILING_ROWS. Internally this creates a row index definition with an index value/predicate that is relative to the TRAILING_ROWS step. The order of additional steps is derived from the enum ordinal values. |
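Ordering by enum ordinals can be sketched as follows. The enum constants below mirror the idea behind AdditionalSteps, but only TRAILING_ROWS is taken from the document; the other constants and types are illustrative:

```kotlin
// Illustrative steps; rows sort first by step ordinal, then by their
// step-relative index, so trailing rows always render after main rows.
enum class Step { MAIN, TRAILING_ROWS }

data class StepIndex(val step: Step, val index: Int)

fun ordered(indices: List<StepIndex>): List<StepIndex> =
    indices.sortedWith(compareBy({ it.step.ordinal }, { it.index }))

fun main() {
    val rows = listOf(
        StepIndex(Step.TRAILING_ROWS, 0), // footer declared first in source...
        StepIndex(Step.MAIN, 0),
        StepIndex(Step.MAIN, 1)
    )
    // ...but ordinal-based ordering still renders it last.
    println(ordered(rows).map { "${it.step}:${it.index}" })
    // [MAIN:0, MAIN:1, TRAILING_ROWS:0]
}
```

Relying on ordinals keeps the ordering declarative: adding a new pass is just adding a new enum constant in the right position.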
Extension points.
I have put a lot of effort into making Tabulate extensible. Currently, it is possible to:
- add user-defined attributes,
- add custom renderers for already defined attributes,
- implement table export operations from scratch (e.g. an HTML table, a CLI table, a mock renderer for testing),
- extend the type-safe DSL builder APIs at all possible levels.
Implementing table export operations from scratch.
To support a new tabular file format you will have to:
- Create a RenderingContext class. It represents the internal state and a low-level API for communicating with a third-party library like Apache POI. An object of this class is passed to all table export operations as well as to all attribute rendering operations registered through the ServiceLoader infrastructure. Such a common-denominator element is required to enable table modifications coming from within the various render operations.
- Create an OutputBinding class. It defines the transformation of a RenderingContext into different kinds of outputs. By separating OutputBinding from RenderingContext we can dynamically enable multiple outputs for a particular RenderingContext class.
- Create an OutputBindingsProvider that holds the set of all supported output bindings. Then put its fully qualified class name into io.github.voytech.document.template.spi.OutputBindingsProvider.
- Define an ExportOperationsFactory and put the fully qualified class name of your custom factory in the first line of resource/META-INF/io.github.voytech.document.template.spi.ExportOperationsProvider. This step is required by the template to resolve your extension at run time.
Below is a basic CSV export operations implementation.
The first step is to define the RenderingContext:
(1)
open class CsvRenderingContext : RenderingContext {
    internal lateinit var bufferedWriter: BufferedWriter
    internal val line = StringBuilder()
}
| 1 | CsvRenderingContext implements the RenderingContext marker interface and provides the state needed to generate a table in the selected format. It is a common denominator passed as an argument to all export operation methods, in order to share rendering state and allow interaction with it. |
Then we need to create at least one OutputBinding in order to be able to flush results into the output:
class CsvOutputStreamOutputBinding : OutputStreamOutputBinding<CsvRenderingContext>() {
    override fun onBind(renderingContext: CsvRenderingContext, output: OutputStream) {
        renderingContext.bufferedWriter = output.bufferedWriter()
    }

    override fun flush(output: OutputStream) {
        renderingContext.bufferedWriter.close()
        output.close()
    }
}
| 1 | The .onBind method is called internally by the TableTemplate as soon as both the output and the rendering context instances are available. It connects the rendering context with a particular output and allows implementing flush logic. |
| 2 | The .flush method dumps the in-memory rendering context into the given output. |
Then you need to define a provider exposing all compatible OutputBinding instances:
class CsvOutputBindingsProvider : OutputBindingsProvider<CsvRenderingContext> {
    override fun createOutputBindings(): List<OutputBinding<CsvRenderingContext, *>> = listOf(
        CsvOutputStreamOutputBinding()
    )

    override fun getDocumentFormat(): DocumentFormat<CsvRenderingContext> =
        DocumentFormat.format("csv")
}
Finally, we implement an ExportOperationsFactory compatible with the RenderingContext of choice:
class CsvExportOperationsFactory : ExportOperationsFactory<CsvRenderingContext, Table<*>>() {

    override fun getDocumentFormat(): DocumentFormat<CsvRenderingContext> =
        format("csv") (1)

    (2)
    override fun provideExportOperations(): OperationsBuilder<CsvRenderingContext, Table<*>>.() -> Unit = {
        operation(OpenRowOperation { renderingContext, _ ->
            renderingContext.line.clear()
        })
        operation(CloseRowOperation { renderingContext, context ->
            val lastIndex = context.rowCellValues.size - 1
            with(renderingContext) {
                context.rowCellValues.values.forEachIndexed { index, cell ->
                    line.append(cell.rawValue.toString())
                    if (index < lastIndex) line.append(cell.getSeparatorCharacter())
                }
                bufferedWriter.write(line.toString())
                bufferedWriter.newLine()
            }
        })
    }

    (3)
    override fun getAggregateModelClass(): Class<Table<*>> = reify()

    private fun CellContext.getSeparatorCharacter(): String =
        getModelAttribute(CellSeparatorCharacterAttribute::class.java)?.separator ?: ","
}
| 1 | Define the DocumentFormat first. It consists of the RenderingContext class and a provider id string. |
| 2 | This is the most important step. Here we implement the actual table rendering logic. We need to provide operations that transform captured context models using the RenderingContext. |
| 3 | Finally, we need to provide the class of the supported aggregate root model. This is the top-level model exported by all previously defined operation implementations. |
If the target tabular format supports styles, you may add support for rendering built-in attributes as follows:
class ExampleExportOperationsConfiguringFactory : ExportOperationsConfiguringFactory<SomeRenderingContext>() {
    ..
    override fun getAttributeOperationsFactory(renderingContext: SomeRenderingContext): AttributeRenderOperationsFactory<SomeRenderingContext> =
        StandardAttributeRenderOperationsFactory(renderingContext, object : StandardAttributeRenderOperationsProvider<SomeRenderingContext> {
            override fun createTemplateFileRenderer(renderingContext: SomeRenderingContext): TableAttributeRenderOperation<TemplateFileAttribute> =
                TemplateFileAttributeRenderOperation(renderingContext)

            override fun createColumnWidthRenderer(renderingContext: SomeRenderingContext): ColumnAttributeRenderOperation<ColumnWidthAttribute> =
                ColumnWidthAttributeRenderOperation(renderingContext)

            override fun createRowHeightRenderer(renderingContext: SomeRenderingContext): RowAttributeRenderOperation<RowHeightAttribute> =
                RowHeightAttributeRenderOperation(renderingContext)

            override fun createCellTextStyleRenderer(renderingContext: SomeRenderingContext): CellAttributeRenderOperation<CellTextStylesAttribute> =
                CellTextStylesAttributeRenderOperation(renderingContext)

            override fun createCellBordersRenderer(renderingContext: SomeRenderingContext): CellAttributeRenderOperation<CellBordersAttribute> =
                CellBordersAttributeRenderOperation(renderingContext)

            override fun createCellAlignmentRenderer(renderingContext: SomeRenderingContext): CellAttributeRenderOperation<CellAlignmentAttribute> =
                CellAlignmentAttributeRenderOperation(renderingContext)

            override fun createCellBackgroundRenderer(renderingContext: SomeRenderingContext): CellAttributeRenderOperation<CellBackgroundAttribute> =
                CellBackgroundAttributeRenderOperation(renderingContext)
        })
}
The StandardAttributeRenderOperationsFactory factory class exposes an API that assumes the specific standard-library attributes.
If your file format allows additional attributes that are not present in the standard library (tabulate-core), you may use the AttributeOperationsFactory interface directly, or fill the additional constructor properties of StandardAttributeRenderOperationsFactory as below:
class ExampleExportOperationsConfiguringFactory<T> : ExportOperationsConfiguringFactory<T, SomeRenderingContext>() {
    ...
    override fun getAttributeOperationsFactory(renderingContext: SomeRenderingContext): AttributeRenderOperationsFactory<T> =
        StandardAttributeRenderOperationsFactory(
            renderingContext,
            object : StandardAttributeRenderOperationsProvider<SomeRenderingContext, T> {
                override fun createTemplateFileRenderer(renderingContext: SomeRenderingContext): TableAttributeRenderOperation<TemplateFileAttribute> =
                    TemplateFileAttributeRenderOperation(renderingContext)
            },
            additionalCellAttributeRenderers = setOf( .. ),
            additionalTableAttributeRenderers = setOf( .. )
        )
}
Registering new attribute types for existing export operations.
It is possible that you have requirements which cannot be met with the standard set of attributes, and your code lives in a different compilation unit than the specific table export operation implementation. Assume you want to use the existing Apache POI Excel table exporter, but a certain attribute is not supported. In such a situation you can still register the attribute by implementing a dedicated AttributeOperation:
data class MarkerCellAttribute(val text: String) : CellAttribute<MarkerCellAttribute>() {
    class Builder(var text: String = "") : CellAttributeBuilder<MarkerCellAttribute> {
        override fun build(): MarkerCellAttribute = MarkerCellAttribute(text)
    }
}

class SimpleMarkerCellAttributeRenderOperation : CellAttributeRenderOperation<ApachePoiRenderingContext, MarkerCellAttribute>() {

    override fun renderingContextClass(): Class<ApachePoiRenderingContext> = reify()

    override fun attributeClass(): Class<MarkerCellAttribute> = reify()

    override fun renderAttribute(renderingContext: ApachePoiRenderingContext, context: RowCellContext, attribute: MarkerCellAttribute) {
        with(renderingContext.assertCell(context.getTableId(), context.rowIndex, context.columnIndex)) {
            this.setCellValue("${this.stringCellValue} [ ${attribute.text} ]")
        }
    }
}

fun <T> CellLevelAttributesBuilderApi<T>.label(block: MarkerCellAttribute.Builder.() -> Unit) =
    attribute(MarkerCellAttribute.Builder().apply(block).build())
At the end, you need to create the file resource/META-INF/io.github.voytech.document.template.operation.AttributeOperation and put the fully qualified class name of your AttributeOperation into it.
Extending Table DSL API
In the previous section you saw how to define custom user attributes. The last step involves creating an extension function on a specific DSL attribute API. As the DSL builder class name suggests (CellLevelAttributesBuilderApi<T>), this builder is part of the cell DSL API only, which means it will not be possible to add this attribute at the row, column or table level. You can leverage this behaviour to restrict the 'mounting points' of specific attributes. To enable a cell attribute at all levels, you will need to add more extension functions:
fun <T> ColumnLevelAttributesBuilderApi<T>.label(block: MarkerCellAttribute.Builder.() -> Unit) =
attribute(MarkerCellAttribute.Builder().apply(block).build())
fun <T> RowLevelAttributesBuilderApi<T>.label(block: MarkerCellAttribute.Builder.() -> Unit) =
attribute(MarkerCellAttribute.Builder().apply(block).build())
fun <T> TableLevelAttributesBuilderApi<T>.label(block: MarkerCellAttribute.Builder.() -> Unit) =
attribute(MarkerCellAttribute.Builder().apply(block).build())
Now you can call label at all DSL API levels within an attributes scope, like:
productList.tabulate("file.xlsx") {
    name = "Table id"
    attributes {
        label { text = "TABLE" }
    }
    columns {
        column("nr") {
            attributes { label { text = "COLUMN" } }
            ..
        }
    }
    rows {
        row {
            attributes { label { text = "ROW" } }
            cell("nr") {
                value = "Nr.:"
                attributes { label { text = "CELL" } }
            }
            ..
        }
    }
}
The result of the above configuration is as follows:
- In the first row, the cell in the column with id "nr" will end with [ CELL ], and the rest of the cells will end with [ ROW ],
- Remaining cells (starting from the second row) in the column with id "nr" will end with [ COLUMN ],
- All remaining cells will end with [ TABLE ].
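The precedence shown above ("lowest level wins") can be sketched as a simple fallback chain walking cell, row, column, then table. This mirrors the observed output, not the library's actual merge implementation:

```kotlin
// Illustrative resolution: for each cell, the most specific mounted
// attribute wins; broader levels serve only as fallbacks.
fun resolveLabel(
    cell: String?, row: String?, column: String?, table: String?
): String = cell ?: row ?: column ?: table ?: error("no label mounted")

fun main() {
    // First row, column "nr": a cell-level attribute is present.
    println(resolveLabel("CELL", "ROW", "COLUMN", "TABLE")) // CELL
    // First row, other columns: no cell attribute, the row level wins.
    println(resolveLabel(null, "ROW", null, "TABLE"))       // ROW
    // Remaining rows, column "nr": the column level wins.
    println(resolveLabel(null, null, "COLUMN", "TABLE"))    // COLUMN
    // Everything else falls back to the table-level attribute.
    println(resolveLabel(null, null, null, "TABLE"))        // TABLE
}
```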
Java interop - fluent builders Java API.
The old-fashioned Java fluent builder API is also supported. Needless to say, it looks much less attractive:
(1)
FluentTableBuilderApi<Employee> employeeTable = new TableBuilder<Employee>()
    .attribute(TemplateFileAttribute::builder, builder -> builder.setFileName("file.xlsx"))
    .attribute(ColumnWidthAttribute::builder, builder -> builder.setAuto(true))
    .columns()
        .column("id", Employee::getId)
        .column("firstName", Employee::getFirstName)
        .column("lastName", Employee::getLastName)
    .rows()
        .row(0)
            .attribute(RowHeightAttribute::builder, builder -> builder.setPx(100))
    .build();
(2)
List<Employee> employeeList = Collections.singletonList(new Employee("#00010", "Joshua", "Novak"));
new TabulationTemplate(format("xlsx")).export(employeeList, new FileOutputStream("employees.xlsx"), employeeTable);
| 1 | As a first step, you declare the table definition using the Java FluentTableBuilderApi. |
| 2 | Then you pass the table definition into the TabulationTemplate in order to export data with the declared tabular layout. |
Library of attributes.
You may need attributes for various reasons: for styling, for formatting, etc.
Currently, with the tabulate-core and tabulate-excel modules, you get the following attributes included:
| Name | Description | Attribute type | Context | Provider | Applicable levels |
| filterAndSort | Enables the Excel table feature that allows filtering and sorting. | Table | Table opening | poi (Apache POI) | table |
| TemplateFileAttribute | Exports data into a source template file (interpolates an Excel file). | Table | Table opening | poi (Apache POI) | table |
| | Sets printing attributes on the file. | Table | Table opening | poi (Apache POI) | table |
| ColumnWidthAttribute | Sets the width of a column. Applies to the column or to all cells within the column (depending on rendering context capabilities). | Column | Column opening | any | column |
| RowHeightAttribute | Sets the height of a row. Applies to the row or to all cells within the row (depending on rendering context capabilities). | Row | Row opening | any | row |
| | Sets border properties of the entire row. | Row | Row closing | any | row |
| CellTextStylesAttribute | Sets text styles (e.g. font family, color, size, weight). | Cell | Cell | any | table, column, row, cell |
| CellBordersAttribute | Sets border properties of a cell. | Cell | Cell | any | table, column, row, cell |
| CellBackgroundAttribute | Sets the background color of a cell. | Cell | Cell | any | table, column, row, cell |
| CellAlignmentAttribute | Aligns text within a cell (vertically/horizontally). | Cell | Cell | any | table, column, row, cell |
| | Associates a comment (and comment author) with a cell. | Cell | Cell | poi (Apache POI) | cell |
| CellSeparatorCharacterAttribute | Sets the delimiter for CSV. | Cell | Cell | csv | table |
Internal algorithms and rules.
This section does not cover the consumer API; instead it focuses entirely on the internal algorithms implemented in the tabulate-core module. You will not find here any information needed to start using the library. Refer to the information below if you are curious about how things work under the hood. It can also be a good starting point before you deep-dive into the source code.
Template and operations pattern.
The library's sole purpose is to provide the means for data export. This goal is achieved through a simple, intuitive pattern: a template class dispatching workload to managed, pluggable operations.
The template, referred to as TableTemplate, iterates lazily through a RowContextResolver, advancing each time the next row context is requested by the TabulationApi.
Consumer interaction with the library may go through the TabulationApi, and then it looks as follows:
- declare the table model through the DSL (or Java fluent) builder,
- enqueue a collection element (or enqueue nothing when exporting only custom rows). Adding a new collection element enables derived row context resolution. RowContext exposes all required row-related properties to the third-party operation implementor. The operation implementation uses the row context to participate in rendering the table into the target format,
- request the next row rendering. As mentioned above, each time the next row is requested, the RowContextResolver takes the row coordinates as well as additional properties and attributes, then computes a row context that is immediately rendered by the specific operation implementation. Certain rules regarding row context computation form a unique algorithm, which is explained in more detail in the following sections.
Consumer interaction may also be simplified by using an extension method on the exported collection or on a custom table builder.
In fact, this should be the leading usage scenario. In this scenario, the TabulationApi calls are wrapped by an extension method on TableTemplate.
Builder materialization.
Before rows can be rendered, a table definition must be built. The effective table definition is always the result of TableBuilder materialization (or freezing). After materializing the table builder state, it can no longer be mutated, and since there is no further use for the builder instance, it becomes eligible for GC. The table definition becomes a final snapshot of the builder and cannot be modified by any means. From then on, it can only be used as input for an export job. During this step, attributes are merged together for the first time. This is possible here because multiple attributes of the same type can be defined across separate builder APIs.
Table postprocessing.
The next step after building the table definition is the postprocessing phase. It consists of:
- table rows indexing: building row-index-to-row-definition associations that enable efficient lookup,
- table rows partitioning: the result of partitioning is two groups of rows, the previously mentioned custom rows addressed by row indices, and enriching row definitions addressed by predicate functions,
- initializing the synthetic rows cache: since a row context computation request may qualify multiple table row definitions, those definitions are bundled together to form an intermediate entity called SyntheticRow. The same row definitions can be qualified multiple times, which is why the synthetic rows cache exists. The cache associates row definitions (keys) with SyntheticRow instances (values).
One could even say that table rows indexing produces a first-level cache, while the synthetic rows cache acts as a second-level cache:
when requesting a row definition by row index, the algorithm performs a lookup to retrieve all applicable table row definitions (the first-level cache). Next, it uses those (possibly multiple) row definitions as a key to find a SyntheticRow instance (the second-level cache).
At this point we have a table definition with indexed rows, and a still-cold cache for keeping synthetic row definitions.
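The two-level lookup can be sketched as follows: index to qualified row definitions (first level), then definitions to a cached SyntheticRow (second level). The types are simplified stand-ins for the library's internals:

```kotlin
// Illustrative stand-ins for the internal models.
data class RowDef(val id: String)
data class SyntheticRow(val mergedFrom: List<String>)

class SyntheticRowCache(private val byIndex: Map<Int, List<RowDef>>) {
    private val cache = mutableMapOf<List<RowDef>, SyntheticRow>()
    var misses = 0
        private set

    fun resolve(index: Int): SyntheticRow? {
        val defs = byIndex[index] ?: return null      // first-level cache
        return cache.getOrPut(defs) {                 // second-level cache
            misses++
            SyntheticRow(defs.map { it.id })          // the expensive merge happens here
        }
    }
}

fun main() {
    val header = listOf(RowDef("header"), RowDef("bold"))
    val cache = SyntheticRowCache(mapOf(0 to header, 1 to header))
    cache.resolve(0)
    cache.resolve(1) // same definition bundle: the merged SyntheticRow is reused
    println(cache.misses) // 1
}
```

The payoff is that the attribute/value merge is performed once per distinct bundle of row definitions, no matter how many row indices qualify that bundle.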
SyntheticRow resolution.
A SyntheticRow keeps the bundled row definitions matching a specific row index. During object initialization, the following actions take place:
- all cell values from all qualified table rows are merged, so that each value from the pair on the right overrides the value on the left,
- all cell attributes from the table and row levels are merged similarly, left to right, so that only explicitly changed properties of attributes of the same class are overridden,
- all row attributes from the table and row levels are merged similarly, left to right, so that only explicitly changed properties of attributes of the same class are overridden.
The phrase explicitly changed above means: take only attribute property changes made by explicit DSL/fluent builder method calls. Builders track all attribute property changes (to determine which change, left or right, should be applied), because right-hand attributes can carry default values that would otherwise override explicitly changed left-hand attribute properties.
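The explicit-change tracking can be sketched with a builder that records which properties were actually set, so a right-hand attribute's defaults do not clobber an explicitly set left-hand value. The names (TextStyle, TextStyleBuilder) are illustrative:

```kotlin
// Illustrative attribute with defaults.
data class TextStyle(val fontSize: Int = 10, val bold: Boolean = false)

class TextStyleBuilder {
    // Records which properties were explicitly set through the builder.
    private val touched = mutableSetOf<String>()
    var fontSize: Int = 10
        set(value) { field = value; touched += "fontSize" }
    var bold: Boolean = false
        set(value) { field = value; touched += "bold" }

    // Merge this (right-hand) attribute onto a left-hand one: only
    // explicitly changed properties override the left-hand values.
    fun mergeOnto(left: TextStyle): TextStyle = TextStyle(
        fontSize = if ("fontSize" in touched) fontSize else left.fontSize,
        bold = if ("bold" in touched) bold else left.bold
    )
}

fun main() {
    val tableLevel = TextStyle(fontSize = 14, bold = false)  // left-hand side
    val rowLevel = TextStyleBuilder().apply { bold = true }  // right-hand side
    // Only `bold` was explicitly changed, so fontSize = 14 survives the merge
    // even though the row-level builder's default fontSize is 10.
    println(rowLevel.mergeOnto(tableLevel)) // TextStyle(fontSize=14, bold=true)
}
```

Without the `touched` bookkeeping, the row-level default `fontSize = 10` would silently override the table-level 14, which is exactly the problem the text describes.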
Please note that during SyntheticRow resolution, only row-level cell attributes and row attributes are merged. Cell-level cell attributes are merged in later, when resolving CellContext values. They are deferred because merging attributes from all levels requires the column-level attributes, which can only be accessed while resolving a specific CellContext. The CellContext's responsibility is to resolve the cell value, which is only possible once the value for a particular row and column can be obtained. This is always the very last step in completing the row operation context.
Row context resolution.
Row context resolution is the final step of completing row data before dispatching it to third-party operation code. During this step, a couple of intermediate contexts are produced:
-
RowOpeningContext - a context containing the row coordinate (only the row index) plus associated row attributes. AbstractRowContextResolver notifies TableTemplate about a completed RowOpeningContext using RowCompletionListener. The operation implementation associated with this type of context (RowOpeningContext) can be invoked even before all row-associated data is available.
-
CellContext - a context holding a CellValue with all associated cell attributes (merged from all levels: table, column, row and cell). Similarly to RowOpeningContext, AbstractRowContextResolver uses RowCompletionListener to notify TableTemplate about each CellContext when it is ready for rendering by the corresponding operation implementation.
-
RowClosingContext - a context with complete row data. It contains all row attributes together with all cell values (also with attributes). Since RowClosingContext is the last step, it is also the value returned from the AbstractRowContextResolver.resolve(…) method.
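The notification order above can be condensed into a small sketch. The types here (RowOpening, CellCtx, RowClosing, resolveRow) are simplified stand-ins, not the library's API: the listener sees the opening context first, then one context per cell, and finally the closing context, which is also the resolver's return value.

```kotlin
sealed interface RowContext
data class RowOpening(val rowIndex: Int) : RowContext
data class CellCtx(val column: String, val value: Any?) : RowContext
data class RowClosing(val rowIndex: Int, val cells: List<CellCtx>) : RowContext

fun interface RowCompletionListener {
    fun onCompleted(context: RowContext)
}

// Hypothetical resolver: notifies the listener in the documented order.
fun resolveRow(rowIndex: Int, values: Map<String, Any?>, listener: RowCompletionListener): RowClosing {
    listener.onCompleted(RowOpening(rowIndex))                      // rendering may start early
    val cells = values.map { (col, v) -> CellCtx(col, v).also(listener::onCompleted) }
    return RowClosing(rowIndex, cells).also(listener::onCompleted)  // complete row data last
}

fun main() {
    val seen = mutableListOf<String>()
    resolveRow(0, mapOf("nr" to 1, "name" to "Laptop")) { seen += it::class.simpleName!! }
    println(seen) // [RowOpening, CellCtx, CellCtx, RowClosing]
}
```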
Rendering operations dispatching.
Rendering operations are operations that execute rendering logic specific to a particular TabulationFormat. There are two main interfaces that establish the contract to be used by implementations:
-
Operation
fun interface Operation<CTX : RenderingContext, ATTR_CAT : Attribute<*>, E : AttributedModel<ATTR_CAT>> {
    fun render(renderingContext: CTX, operationContext: E)
}
This is the most basic interface. It expresses an intent to render the data associated with operationContext by using the low-level API of renderingContext.
-
AttributeOperation
interface AttributeOperation<CTX : RenderingContext, ATTR_CAT : Attribute<*>, ATTR : ATTR_CAT, E : AttributedModel<ATTR_CAT>> {
    fun typeInfo(): AttributeOperationTypeInfo<CTX, ATTR_CAT, ATTR, E>
    fun priority(): Int = DEFAULT
    fun renderAttribute(renderingContext: CTX, operationContext: E, attribute: ATTR)
    companion object {
        const val LOWEST = Int.MIN_VALUE
        const val LOWER = -1
        const val DEFAULT = 1
    }
}
AttributeOperation is the next level of abstraction. It is used to render each attribute from an AttributedModel context. Dispatching of AttributeOperations can be achieved by the AttributesHandlingOperation wrapper, which is responsible for resolving all AttributeOperations and matching them with their corresponding attributes.
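What such a wrapper does - matching attributes to renderers by class and applying them in priority order - can be shown with a simplified sketch. The types below (Attr, AttrRenderer, renderAttributes) are hypothetical stand-ins, not the AttributesHandlingOperation implementation.

```kotlin
interface Attr
data class BackgroundAttr(val color: String) : Attr
data class BorderAttr(val style: String) : Attr

// Hypothetical renderer registration: a target attribute class, a priority,
// and the rendering logic itself.
class AttrRenderer(
    val type: Class<out Attr>,
    val priority: Int,
    val render: (Attr) -> String
)

// Dispatch: renderers run in priority order, each applied only to the
// attributes whose class it was registered for.
fun renderAttributes(attrs: List<Attr>, renderers: List<AttrRenderer>): List<String> =
    renderers.sortedBy { it.priority }.flatMap { r ->
        attrs.filter(r.type::isInstance).map(r.render)
    }

fun main() {
    val renderers = listOf(
        AttrRenderer(BorderAttr::class.java, 2) { "border:" + (it as BorderAttr).style },
        AttrRenderer(BackgroundAttr::class.java, 1) { "background:" + (it as BackgroundAttr).color }
    )
    val css = renderAttributes(listOf(BackgroundAttr("gray"), BorderAttr("solid")), renderers)
    println(css) // [background:gray, border:solid] - priority decides the order
}
```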
Now, a few more words on operation contexts.
As can be seen in the code snippets above, an operation context instance is the object passed to an operation for rendering purposes. The operation context class determines the applicability of a specific operation, because an operation can be invoked only when its context is present in scope. In Row context resolution you saw that there are three different context classes around row rendering, but in fact more context classes are present alongside the process.
Have a look at the contract of OperationsBuilder. Here we can see all available operation interfaces. Each interface extends the base Operation interface by filling in the corresponding AttributedModel context class:
class OperationsBuilder<CTX : RenderingContext> {
    var openTable: OpenTableOperation<CTX>? = OpenTableOperation { _, _ -> } (1)
    var closeTable: CloseTableOperation<CTX>? = CloseTableOperation { _, _ -> } (2)
    var openColumn: OpenColumnOperation<CTX>? = OpenColumnOperation { _, _ -> } (3)
    var closeColumn: CloseColumnOperation<CTX>? = CloseColumnOperation { _, _ -> } (4)
    var openRow: OpenRowOperation<CTX>? = OpenRowOperation { _, _ -> } (5)
    var closeRow: CloseRowOperation<CTX>? = CloseRowOperation { _, _ -> } (6)
    var renderRowCell: RenderRowCellOperation<CTX>? = RenderRowCellOperation { _, _ -> } (7)
}
| 1 | OpenTableOperation is defined as: fun interface OpenTableOperation<CTX : RenderingContext> : Operation<CTX, TableAttribute<*>, TableOpeningContext> |
| 2 | CloseTableOperation is defined as: fun interface CloseTableOperation<CTX : RenderingContext> : Operation<CTX, TableAttribute<*>, TableClosingContext> |
| 3 | OpenColumnOperation is defined as: fun interface OpenColumnOperation<CTX : RenderingContext> : Operation<CTX, ColumnAttribute<*>, ColumnOpeningContext> |
| 4 | CloseColumnOperation is defined as: fun interface CloseColumnOperation<CTX : RenderingContext> : Operation<CTX, ColumnAttribute<*>, ColumnClosingContext> |
| 5 | OpenRowOperation is defined as: fun interface OpenRowOperation<CTX : RenderingContext> : Operation<CTX, RowAttribute<*>, RowOpeningContext> |
| 6 | CloseRowOperation is defined as: fun interface CloseRowOperation<CTX : RenderingContext> : Operation<CTX, RowAttribute<*>, RowClosingContext<*>> |
| 7 | RenderRowCellOperation is defined as: fun interface RenderRowCellOperation<CTX : RenderingContext> : Operation<CTX, CellAttribute<*>, CellContext> |
Order of appearance:
-
TableOpeningContext and OpenTableOperation,
-
ColumnOpeningContext and OpenColumnOperation,
-
RowOpeningContext and OpenRowOperation,
-
CellContext and RenderRowCellOperation,
-
RowClosingContext and CloseRowOperation,
-
TableClosingContext and CloseTableOperation.
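The lifecycle above can be condensed into a toy dispatcher. The names below (SimpleOps, export) are hypothetical simplifications of the OperationsBuilder idea, not the real API: a format implementation plugs one function per phase, and the template invokes them in the documented order of appearance.

```kotlin
// Hypothetical, stripped-down stand-in for OperationsBuilder.
class SimpleOps {
    var openTable: (String) -> Unit = {}
    var openColumn: (Int) -> Unit = {}
    var openRow: (Int) -> Unit = {}
    var renderRowCell: (Int, Int, String) -> Unit = { _, _, _ -> }
    var closeRow: (Int) -> Unit = {}
    var closeTable: () -> Unit = {}
}

// Invokes the plugged operations in the documented order:
// open table, open columns, then per row: open row, cells, close row; close table.
fun export(name: String, rows: List<List<String>>, ops: SimpleOps) {
    ops.openTable(name)
    (rows.firstOrNull()?.indices ?: IntRange.EMPTY).forEach(ops.openColumn)
    rows.forEachIndexed { r, cells ->
        ops.openRow(r)
        cells.forEachIndexed { c, v -> ops.renderRowCell(r, c, v) }
        ops.closeRow(r)
    }
    ops.closeTable()
}

fun main() {
    val calls = mutableListOf<String>()
    val ops = SimpleOps().apply {
        openTable = { calls += "openTable($it)" }
        openColumn = { calls += "openColumn($it)" }
        openRow = { calls += "openRow($it)" }
        renderRowCell = { r, c, v -> calls += "cell($r,$c)=$v" }
        closeRow = { calls += "closeRow($it)" }
        closeTable = { calls += "closeTable" }
    }
    export("demo", listOf(listOf("a", "b")), ops)
    println(calls)
}
```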
Cookbook recipes.
In this section you can find some ready-to-use scenarios.
Export collection with header and summary.
productsRepository.loadProducts().tabulate("product_with_styles.xlsx") {
    name = "Products table"
    attributes { width { auto = true } }
    columns {
        column(Product::code) {
            attributes(
                text {
                    weight = DefaultWeightStyle.BOLD
                    fontColor = Colors.WHITE
                },
                background { color = Colors.GRAY }
            )
        }
        column(Product::name)
        column(Product::releaseDate) {
            attributes(
                dataFormat { value = "dd.mm.YYYY" }
            )
        }
        column(Product::mgQty)
    }
    rows {
        header("Product Code", "Product Name", "Release Date", "Available")
        footer {
            cell {
                value = "."
                colSpan = 4
                attributes {
                    alignment {
                        horizontal = DefaultHorizontalAlignment.CENTER
                    }
                }
            }
        }
    }
}
Add Excel formula for summing column values.
agreementRepository.loadAgreements().run {
    tabulate("agreements.xlsx") {
        name = "Customer Agreements"
        attributes { width { auto = true } }
        columns {
            column(Agreement::agreementNumber) {
                attributes(
                    text {
                        weight = DefaultWeightStyle.BOLD
                        fontColor = Colors.WHITE
                    },
                    background { color = Colors.GRAY }
                )
            }
            column(Agreement::serviceCode)
            column(Agreement::netCostValue)
            column(Agreement::grossCostValue)
            column(Agreement::signDate) {
                attributes {
                    dataFormat { value = "dd.mm.YYYY" }
                }
            }
        }
        rows {
            header("Agreement Number", "Code", "Net Cost", "Gross Cost", "Sign Date")
            footer {
                cell(Agreement::netCostValue) {
                    value = "=SUM(C1:C${size() + 1})" (1)
                    type = ExcelTypeHints.FORMULA
                }
                cell(Agreement::grossCostValue) {
                    value = "=SUM(D1:D${size() + 1})" (2)
                    type = ExcelTypeHints.FORMULA
                }
            }
        }
    }
}
| 1 | This looks ugly and will change. |
| 2 | The same. |
Reusable, composable table builder declarations.
val whiteOnBlackHeader = table {
    rows {
        matching { header() } assign {
            attributes {
                background {
                    color = Colors.BLACK
                }
                text {
                    fontColor = Colors.WHITE
                }
            }
        }
    }
}
val printingDetails = table {
    attributes {
        printing {
            blackAndWhite = true
            footerCenter = "Page ${HeaderFooter.page()} of ${HeaderFooter.numPages()}" (1)
        }
    }
}
contracts.tabulate("contracts_list.xlsx", printingDetails + whiteOnBlackHeader + typedTable {
    columns {
        column(Contract::client)
        column(Contract::contractCode)
        column(Contract::contractLength)
        column(Contract::dateSigned)
        column(Contract::expirationDate)
        column(Contract::dateOfFirstPayment)
        column(Contract::lastPaymentDate)
        column(Contract::monthlyGrossValue)
    }
    rows {
        header("Client", "Code", "Contract Length",
            "Date Signed", "Expiration Date", "First Payment",
            "Last Payment", "Monthly Gross Value")
    }
})
| 1 | HeaderFooter.page() and HeaderFooter.numPages() are Apache POI utilities. |
Fill monthly revenue template with trend chart.
TBD.
Create invoice.
Please refer to the tabulate-examples project in order to see the complete invoice DSL vocabulary.
Search for the classes InvoiceDsl.kt, InvoiceData.kt, Layouts.kt and the sections package (which contains invoice layout section extensions).
Please note that Layouts.kt in the tabulate-examples project contains API extensions which are currently incubating and may be dropped in the future ;)
The invoice DSL is rather too much to put in here, so I am presenting only the top-level, consumer-facing constructs:
listOf(
    InvoiceLineItem("Laptop: Acer", 1, BigDecimal.valueOf(2333.33), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("Monitor: Lenovo", 1, BigDecimal.valueOf(1333.33), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("Keyboard: Genesys 110", 1, BigDecimal.valueOf(233.99), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("Headset: Syperlux HD330", 1, BigDecimal.valueOf(134.99), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("Mouse: Logitech M185", 1, BigDecimal.valueOf(34.99), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("IPhone 11", 1, BigDecimal.valueOf(3004.99), BigDecimal.valueOf(0.23)),
    InvoiceLineItem("DynaDesk", 1, BigDecimal.valueOf(1234.99), BigDecimal.valueOf(0.23)),
).printInvoice(
    fileName = "invoice.csv",
    invoiceNumber = "#00001",
    invoiceIssueDate = LocalDate.now(),
    invoiceDueDate = LocalDate.now(),
    issuerDetails = CompanyAddress(
        contactName = "Brad Kovalsky",
        companyName = "Best Computers",
        address = "Macintosh Square St. 1/22",
        address2 = "brad@bestcomputers.com",
        phone = "988-324-342"
    ),
    clientDetails = CompanyAddress(
        contactName = "Jeremy Cooper",
        companyName = "JerCo.",
        address = "Genuine St. 22/202",
        address2 = "jerco@gmail.com",
        phone = "435-324-555"
    )
)
And below, the well-known tabulate call:
fun Iterable<InvoiceLineItem>.printInvoice(
    fileName: String,
    issuerDetails: CompanyAddress,
    clientDetails: CompanyAddress,
    invoiceNumber: String = "#00001",
    invoiceIssueDate: LocalDate = LocalDate.now(),
    invoiceDueDate: LocalDate = LocalDate.now(),
) {
    val items = this
    tabulate(fileName) {
        attributes { columnWidth { auto = true } }
        columns {
            column(InvoiceLineItem::description)
            column(InvoiceLineItem::qty)
            column(InvoiceLineItem::unitPrice)
            column(InvoiceLineItem::vat)
            column(InvoiceLineItem::total)
        }
        rows {
            layout { (1)
                horizontal { titleSection() } (2)
                horizontal {
                    issuerSection {
                        issuer = issuerDetails
                        imageUrl = "src/main/resources/logo.png"
                    }
                }
                horizontal {
                    section { separator(1, 5) }
                }
                horizontal {
                    addressDetailsSection {
                        addressTitle = "BILL TO"
                        address = issuerDetails
                    }
                    addressDetailsSection {
                        addressTitle = "SHIP TO"
                        address = clientDetails
                    }
                    invoiceDetailsSection {
                        number = invoiceNumber
                        issueDate = invoiceIssueDate
                        dueDate = invoiceDueDate
                    }
                }
                horizontal {
                    section { separator(1, 5) }
                }
                horizontal {
                    lineItemsHeaderSection()
                }
                horizontal(AdditionalSteps.TRAILING_ROWS) {
                    section { separator(1, 5) }
                }
                horizontal(AdditionalSteps.TRAILING_ROWS) {
                    invoiceSummarySection(column = 3) {
                        subtotal = items.sumOf { it.unitPrice.multiply(it.qty.toBigDecimal()) }
                        discounts = BigDecimal.ZERO
                        taxes = items.sumOf { it.vat.multiply(it.unitPrice.multiply(it.qty.toBigDecimal())) }
                        total = items.sumOf { it.total }
                    }
                }
                horizontal(AdditionalSteps.TRAILING_ROWS) {
                    section { separator(1, 5) }
                }
                horizontal(AdditionalSteps.TRAILING_ROWS) {
                    thankYou(span = 5)
                }
            }
        }
    }
}
| 1 | layout is an extension method on RowBuilderApi. |
| 2 | horizontal is the same kind of extension method. It allows placing sections that contain multiple newRow calls next to each other horizontally. This is something you cannot achieve using the standard DSL API, but it can be done using pure extension functions with additional builder state on top of the core API state. It does not require any modifications to the core API exposed by tabulate-core. |
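The core idea behind horizontal placement can be sketched in a few lines. This is a toy illustration with a hypothetical helper (horizontal over plain row lists), not the Layouts.kt implementation: multi-row sections are padded to the same height and their rows concatenated, so sections end up side by side instead of stacked vertically.

```kotlin
// Hypothetical helper: each section is a list of rows, each row a list of
// cell values. Sections are zipped row by row; shorter sections are padded
// with empty cells matching their own width.
fun horizontal(vararg sections: List<List<String>>): List<List<String>> {
    val height = sections.maxOf { it.size }
    return (0 until height).map { r ->
        sections.flatMap { section ->
            section.getOrElse(r) { List(section.first().size) { "" } }
        }
    }
}

fun main() {
    val billTo = listOf(listOf("BILL TO"), listOf("JerCo."), listOf("Genuine St. 22/202"))
    val shipTo = listOf(listOf("SHIP TO"), listOf("Best Computers"))
    // Each result row concatenates one row from every section.
    horizontal(billTo, shipTo).forEach(::println)
}
```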