ARM = Advanced RISC Machines, Ltd.
ARM licenses IP to other companies (ARM does not fabricate chips)
2005: ARM had 75% of embedded RISC market, with 2.5 billion processors
ARM available as microcontrollers, IP cores, etc.
www.arm.com
ARM instruction set architecture
- ARM versions.
- ARM programming model.
- ARM memory organization.
- ARM assembly language.
- ARM data operations.
- ARM flow of control.
ARM Architecture versions
(From arm.com)
Instruction sets:
- ARM (32-bit instructions)
- Thumb (16-bit instructions)
- Thumb-2 (mixed 16-bit and 32-bit instructions)
Cortex-M cores (Cortex-M0, Cortex-M3, Cortex-M4) execute the Thumb/Thumb-2 instruction sets.
### Arm Processor Families
#### Cortex-A series (advanced application)
- High-performance processors for open OSs
- Applications: smartphones, digital TVs, server solutions, and home gateways.
#### Cortex-R series (real-time)
- Exceptional performance for real-time applications
- Applications: automotive braking systems and powertrains.
#### Cortex-M series (microcontroller)
- Cost-sensitive solutions for deterministic microcontroller applications
- Applications: microcontrollers, smart sensors, automotive body electronics, and airbags.
#### SecurCore series
- High-security applications such as smartcards and e-government
#### Classic processors
- Includes the Arm7, Arm9, and Arm11 families
---
### Processors As of Nov 2017
<table>
<thead>
<tr>
<th>Cortex-A</th>
<th>Cortex-R</th>
<th>Cortex-M</th>
<th>SecurCore</th>
<th>Classic</th>
</tr>
</thead>
<tbody>
<tr><td>Cortex-A75</td><td>Cortex-R8</td><td>Cortex-M7</td><td>SC000</td><td>Arm11</td></tr>
<tr><td>Cortex-A73</td><td>Cortex-R7</td><td>Cortex-M4</td><td>SC100</td><td>Arm9</td></tr>
<tr><td>Cortex-A72</td><td>Cortex-R52</td><td>Cortex-M3</td><td>SC300</td><td>Arm7</td></tr>
<tr><td>Cortex-A57</td><td>Cortex-R5</td><td>Cortex-M1</td><td></td><td></td></tr>
<tr><td>Cortex-A53</td><td></td><td>Cortex-M0</td><td></td><td></td></tr>
<tr><td>Cortex-A17</td><td></td><td>Cortex-M23</td><td></td><td></td></tr>
<tr><td>Cortex-A15</td><td></td><td>Cortex-M33</td><td></td><td></td></tr>
<tr><td>Cortex-A9</td><td></td><td></td><td></td><td></td></tr>
<tr><td>Cortex-A8</td><td></td><td></td><td></td><td></td></tr>
<tr><td>Cortex-A7</td><td></td><td></td><td></td><td></td></tr>
<tr><td>Cortex-A5</td><td></td><td></td><td></td><td></td></tr>
</tbody>
</table>
---
ARM Cortex-M instruction sets
Programmer’s model of a CPU
- What information is specified in an “instruction” to accomplish a task?
- Operations: add, subtract, move, jump
- Operands: data manipulated by operations
- # of operands per instruction (1, 2, or 3)
- Data sizes & types
- # bits (1, 8, 16, 32, …)
- signed/unsigned integer, floating-point, character …
- Locations of operands
- Memory – specify location by a memory “address”
- CPU Registers – specify register name/number
- Immediate – data embedded in the instruction code
- Input/output device “ports”/interfaces
RISC vs. CISC architectures
- **CISC = “Complex Instruction Set Computer”**
- Rich set of instructions and options to minimize #operations required to perform a given task
- Example: Intel x86 instruction set architecture
- **RISC = “Reduced Instruction Set Computer”**
- Fixed instruction length
- Fewer/simpler instructions than a CISC CPU
- 32-bit load/store architecture
- Limited addressing modes, operand types
- Simple design easier to speed up, pipeline & scale
- Example: ARM architecture
Program execution time =
\[(\# \text{ instructions}) \times (\# \text{ clock cycles/instruction}) \times (\text{clock period})\]
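As an illustration, assume 10,000 executed instructions, an average of 1.2 clock cycles per instruction, and a 100 MHz clock (10 ns period); these numbers are made up for the example. Then:
\[10{,}000 \times 1.2 \times 10\ \text{ns} = 120\ \mu\text{s}\]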
ARM instruction format
Add instruction:
ADD R1, R2, R3 ; 2nd source operand = register
ADD R1, R2, #5 ; 2nd source operand = constant
1. operation: binary addition (e.g., ADD R1, R2, #5 computes R1 = R2 + 5)
2. destination: register \( R1 \) (replaces original contents of \( R1 \))
3. left-hand operand: register \( R2 \)
4. right-hand operand:
- Option 1: register \( R3 \)
- Option 2: constant 5 (# indicates constant)
operand size: 32 bits (all arithmetic/logical instructions)
operand type: signed or unsigned integer
ARM assembly language
- Fairly standard assembly language format:
LDR r0, [r8] ; a comment
label ADD r4, r0, r1 ; r4 = r0 + r1
label (optional) refers to the location of this instruction
Processor core registers
- All registers are 32 bits wide
- 13 general purpose registers
- Registers r0 – r7 (Low registers)
- Registers r8 – r12 (High registers)
- Use to hold data, addresses, etc.
- 3 registers with special meaning/usage
- Stack Pointer (SP) – r13
- Link Register (LR) – r14
- Program Counter (PC) – r15
- xPSR – Program Status Register
- Composite of three PSRs
- Includes ALU flags (N,Z,C,V)
Program status register (PSR)
- Program Status Register xPSR is a composite of 3 PSRs:
- **APSR** - Application Program Status Register – ALU condition flags
- N (negative), Z (zero), C (carry/borrow), V (2’s complement overflow)
- Flags set by ALU operations; tested by conditional jumps/execution
- **IPSR** - Interrupt Program Status Register
- Interrupt/Exception No.
- **EPSR** - Execution Program Status Register
- T bit = 1 if CPU in “Thumb mode” (always for Cortex-M4), 0 in “ARM mode”
- IT field – If/Then block information
- ICI field – Interruptible-Continuable Instruction information
- xPSR stored on the stack on exception entry
Data types supported in ARM
- Integer ALU operations are performed **only on 32-bit data**
- Signed or unsigned integers
- Data sizes in memory:
- Byte (8-bit), Half-word (16-bit), Word (32-bit), Double Word (64-bit)
- Bytes/half-words are converted to 32 bits when moved into a register
- Signed numbers – extend sign bit to upper bits of a 32-bit register
- Unsigned numbers – fill upper bits of a 32-bit register with 0’s
- Examples:
- 255 (unsigned byte) 0xFF=>0x000000FF (fill upper 24 bits with 0)
- -1 (signed byte) 0xFF=>0xFFFFFFFF (fill upper 24 bits with sign bit 1)
- +1 (signed byte) 0x01=>0x00000001 (fill upper 24 bits with sign bit 0)
- -32768 (signed half-word) 0x8000=>0xFFFF8000 (sign bit = 1)
- 32768 (unsigned half-word) 0x8000=>0x00008000
- +32767 (signed half-word) 0x7FFF=>0x00007FFF (sign bit = 0)
- Cortex-M4F supports single and double-precision IEEE floating-point data
(Floating-point ALU is **optional** in Cortex-M4 implementations)
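The same widening rules can be observed from C: a signed byte or half-word is sign-extended when widened to 32 bits, while an unsigned one is zero-filled (on ARM this typically corresponds to LDRSB/LDRSH versus LDRB/LDRH loads). A minimal C sketch, with illustrative variable names:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t   sb = (int8_t)0xFF;    /* signed byte       -1     */
    uint8_t  ub = 0xFF;            /* unsigned byte      255   */
    int16_t  sh = (int16_t)0x8000; /* signed half-word  -32768 */
    uint16_t uh = 0x8000;          /* unsigned half-word 32768 */

    /* Widening to 32 bits: sign-extend for signed types,
       zero-fill for unsigned types (matches the slide examples). */
    int32_t  w1 = sb;   /* 0xFFFFFFFF = -1     */
    uint32_t w2 = ub;   /* 0x000000FF = 255    */
    int32_t  w3 = sh;   /* 0xFFFF8000 = -32768 */
    uint32_t w4 = uh;   /* 0x00008000 = 32768  */

    printf("%08X %08X %08X %08X\n",
           (uint32_t)w1, w2, (uint32_t)w3, w4);
    return 0;
}
```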
## C/C++ language data types
<table>
<thead>
<tr>
<th>Type</th>
<th>Size (bits)</th>
<th>Range of values</th>
</tr>
</thead>
<tbody>
<tr><td>char, signed char</td><td>8</td><td>[-2<sup>7</sup> .. +2<sup>7</sup>–1] = [-128 .. +127]<br>(signedness of plain char is compiler-specific, not specified in the C standard; the ARM compiler default is signed)</td></tr>
<tr><td>unsigned char</td><td>8</td><td>[0 .. 2<sup>8</sup>–1] = [0 .. 255]</td></tr>
<tr><td>short, signed short</td><td>16</td><td>[-2<sup>15</sup> .. +2<sup>15</sup>–1]</td></tr>
<tr><td>unsigned short</td><td>16</td><td>[0 .. 2<sup>16</sup>–1]</td></tr>
<tr><td>int, signed int</td><td>32</td><td>[-2<sup>31</sup> .. +2<sup>31</sup>–1]<br>(natural size of the host CPU; int is specified as signed in the C standard)</td></tr>
<tr><td>unsigned int</td><td>32</td><td>[0 .. 2<sup>32</sup>–1]</td></tr>
<tr><td>long</td><td>32</td><td>[-2<sup>31</sup> .. +2<sup>31</sup>–1]</td></tr>
<tr><td>long long</td><td>64</td><td>[-2<sup>63</sup> .. +2<sup>63</sup>–1]</td></tr>
<tr><td>float</td><td>32</td><td>IEEE single-precision floating-point format</td></tr>
<tr><td>double</td><td>64</td><td>IEEE double-precision floating-point format</td></tr>
</tbody>
</table>
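A small C sketch to check these sizes on a target; the commented values assume the 32-bit ARM ABI shown in the table (on other hosts, long in particular may differ):

```c
#include <stdio.h>

int main(void)
{
    /* Print the width in bits of each C type from the table above. */
    printf("char:      %zu bits\n", 8 * sizeof(char));       /* 8  */
    printf("short:     %zu bits\n", 8 * sizeof(short));      /* 16 */
    printf("int:       %zu bits\n", 8 * sizeof(int));        /* 32 */
    printf("long:      %zu bits\n", 8 * sizeof(long));       /* 32 on 32-bit ARM */
    printf("long long: %zu bits\n", 8 * sizeof(long long));  /* 64 */
    printf("float:     %zu bits\n", 8 * sizeof(float));      /* 32 */
    printf("double:    %zu bits\n", 8 * sizeof(double));     /* 64 */
    return 0;
}
```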
# Directive: Data Allocation
<table>
<thead>
<tr>
<th>Directive</th>
<th>Description</th>
<th>Memory Space</th>
</tr>
</thead>
<tbody>
<tr>
<td>DCB</td>
<td>Define Constant Byte</td>
<td>Reserve 8-bit values</td>
</tr>
<tr>
<td>DCW</td>
<td>Define Constant Half-word</td>
<td>Reserve 16-bit values</td>
</tr>
<tr>
<td>DCD</td>
<td>Define Constant Word</td>
<td>Reserve 32-bit values</td>
</tr>
<tr>
<td>DCQ</td>
<td>Define Constant Double-word</td>
<td>Reserve 64-bit values</td>
</tr>
<tr>
<td>SPACE</td>
<td>Define Zeroed Bytes</td>
<td>Reserve a number of zeroed bytes</td>
</tr>
<tr>
<td>FILL</td>
<td>Define Initialized Bytes</td>
<td>Reserve and fill each byte with a value</td>
</tr>
</tbody>
</table>
DCx : reserve space and initialize value(s) for ROM
*(initial values ignored for RAM)*
SPACE : reserve space without assigning initial values
*(especially useful for RAM)*
AREA myData, DATA, READWRITE
hello DCB "Hello World!",0 ; Allocate a null-terminated string
dollar DCB 2,10,0,200 ; Allocate integers ranging from -128 to 255
scores DCD 2,3,-8,4 ; Allocate 4 words containing decimal values
miles DCW 100,200,50,0 ; Allocate integers between -32768 and 65535
p SPACE 255 ; Allocate 255 bytes of zeroed memory space
f FILL 20,0xFF,1 ; Allocate 20 bytes and set each byte to 0xFF
binary DCB 2_01010101 ; Allocate a byte specified in binary
octal DCB 8_73 ; Allocate a byte specified in octal
char DCB 'A' ; Allocate a byte initialized to the ASCII code of 'A'
Memory usage
- **Code memory** (normally read-only memory)
- Program instructions
- Constant data
- **Data memory** (normally read/write memory – RAM)
- Variable data/operands
- **Stack** (located in data memory)
- Special Last-In/First-Out (LIFO) data structure
- Save information temporarily and retrieve it later
- Return addresses for subroutines and interrupt/exception handlers
- Data to be passed to/from a subroutine/function
- Stack Pointer register (r13/sp) points to last item placed on the stack
- **Peripheral addresses**
- Used to access registers in “peripheral functions” (timers, ADCs, communication modules, etc.) **outside** the CPU
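Peripheral registers are memory-mapped, so C code typically reaches them through a volatile pointer to a fixed address. A minimal sketch, using a made-up register name and address rather than a real STM32F407 peripheral:

```c
#include <stdint.h>

/* Hypothetical memory-mapped peripheral register; the address is made up
   for illustration. Real addresses come from the device reference manual. */
#define TIMER_COUNT_REG  (*(volatile uint32_t *)0x40001000u)

uint32_t read_timer_count(void)
{
    /* volatile forces an actual bus access on every read, because the
       hardware can change the register value at any time. */
    return TIMER_COUNT_REG;
}
```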
Cortex-M4 processor memory map
- Cortex peripheral function registers (NVIC, tick timer, etc.)
- STM32F407 microcontroller peripheral function registers
- SRAM1 (128 KByte): 0x2000_0000 … 0x2001_FFFF
- SRAM2 (64 KByte): 0x1000_0000 … 0x1000_FFFF
- Flash memory (1 MByte): 0x0800_0000 … 0x080F_FFFF

We will use Flash for code, SRAM1 for data.
Endianness
- Relationship between bit and byte/word ordering defines “endianness”:
- Example: 32-bit data = 0x12345678, stored at byte addresses 100–103

Little-endian (default):
<table>
<thead>
<tr><th>Address</th><th>100</th><th>101</th><th>102</th><th>103</th></tr>
</thead>
<tbody>
<tr><td>Data</td><td>0x78</td><td>0x56</td><td>0x34</td><td>0x12</td></tr>
</tbody>
</table>

Big-endian (option):
<table>
<thead>
<tr><th>Address</th><th>100</th><th>101</th><th>102</th><th>103</th></tr>
</thead>
<tbody>
<tr><td>Data</td><td>0x12</td><td>0x34</td><td>0x56</td><td>0x78</td></tr>
</tbody>
</table>
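One way to observe the byte ordering at run time is to store a known 32-bit value and inspect its individual bytes; a minimal C sketch (illustrative only):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t word = 0x12345678;
    uint8_t  bytes[4];

    /* Copy the word into a byte array to see how it is laid out in memory. */
    memcpy(bytes, &word, sizeof word);

    /* Little-endian (Cortex-M default): 78 56 34 12
       Big-endian:                       12 34 56 78 */
    printf("%02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```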
Physical memory organization
- Physical memory may be organized as N bytes per addressable word
- ARM memories normally 4-bytes wide
- "Align" 32-bit data to a Word boundary (address that is a multiple of 4)
- All bytes of a word must be accessible with one memory read/write
<table>
<thead>
<tr>
<th>Byte 3</th>
<th>Byte 2</th>
<th>Byte 1</th>
<th>Byte 0</th>
</tr>
</thead>
<tbody>
<tr>
<td>103</td>
<td>102</td>
<td>101</td>
<td>100</td>
</tr>
<tr>
<td>107</td>
<td>106</td>
<td>105</td>
<td>104</td>
</tr>
<tr>
<td>10B</td>
<td>10A</td>
<td>109</td>
<td>108</td>
</tr>
<tr>
<td>10F</td>
<td>10E</td>
<td>10D</td>
<td>10C</td>
</tr>
</tbody>
</table>
ARM instructions can read/write 8/16/32-bit data values
First Assembly
```
AREA string_copy, CODE, READONLY
EXPORT __main
ALIGN
ENTRY
__main PROC
strcpy LDR r1, =srcStr ; Retrieve address of the source string
LDR r0, =dstStr ; Retrieve address of the destination string
loop LDRB r2, [r1], #1 ; Load a byte & increase src address pointer
STRB r2, [r0], #1 ; Store a byte & increase dst address pointer
CMP r2, #0 ; Check for the null terminator
BNE loop ; Copy the next byte if string is not ended
stop B stop ; Dead loop. Embedded program never exits.
ENDP
AREA myData, DATA, READWRITE
ALIGN
srcStr DCB "The source string.",0 ; Strings are null terminated
dstStr DCB "The destination string.",0 ; dststr has more space than srcstr
END
```
### Directive: AREA
AREA myData, DATA, READWRITE ; Define a data section
Array DCD 1, 2, 3, 4, 5 ; Define an array with five integers
AREA myCode, CODE, READONLY ; Define a code section
EXPORT __main ; Make __main visible to the linker
ENTRY ; Mark the entrance to the entire program
__main PROC ; PROC marks the beginning of a subroutine
... ; Assembly program starts here.
ENDP ; Mark the end of a subroutine
END ; Mark the end of a program
- The AREA directive indicates to the assembler the start of a new data or code section.
- Areas are the basic independent and indivisible units processed by the linker.
- Each area is identified by a name and areas within the same source file cannot share the same name.
- An assembly program must have at least one code area.
- By default, a code area can only be read and a data area may be read from and written to.
## Directive: END
The END directive indicates the end of a source file.
- Each assembly program must end with this directive.
AREA myData, DATA, READWRITE ; Define a data section
Array DCD 1, 2, 3, 4, 5 ; Define an array with five integers
AREA myCode, CODE, READONLY ; Define a code section
EXPORT __main ; Make __main visible to the linker
ENTRY ; Mark the entrance to the entire program
__main PROC ; PROC marks the beginning of a subroutine
... ; Assembly program starts here.
ENDP ; Mark the end of a subroutine
END ; Mark the end of a program
### Directive: ENTRY
AREA myData, DATA, READWRITE ; Define a data section
Array DCD 1, 2, 3, 4, 5 ; Define an array with five integers
AREA myCode, CODE, READONLY ; Define a code section
EXPORT __main ; Make __main visible to the linker
ENTRY ; Mark the entrance to the entire program
__main PROC ; PROC marks the beginning of a subroutine
... ; Assembly program starts here.
ENDP ; Mark the end of a subroutine
END ; Mark the end of a program
- The ENTRY directive marks the first instruction to be executed within an application.
- **There must be one and only one ENTRY directive in an application**, no matter how many source files the application has.
### Directive: PROC and ENDP
AREA myData, DATA, READWRITE ; Define a data section
Array DCD 1, 2, 3, 4, 5 ; Define an array with five integers
AREA myCode, CODE, READONLY ; Define a code section
EXPORT __main ; Make __main visible to the linker
ENTRY ; Mark the entrance to the entire program
__main PROC ; PROC marks the beginning of a subroutine
... ; Assembly program starts here.
ENDP ; Mark the end of a subroutine
END ; Mark the end of a program
- PROC and ENDP mark the start and end of a function (also called a subroutine or procedure).
- A single source file can contain multiple subroutines, with each of them defined by a pair of PROC and ENDP.
- PROC and ENDP cannot be nested. We cannot define a subroutine within another subroutine.
### Directive: EXPORT and IMPORT
AREA myData, DATA, READWRITE ; Define a data section
Array DCD 1, 2, 3, 4, 5 ; Define an array with five integers
AREA myCode, CODE, READONLY ; Define a code section
EXPORT __main ; Make __main visible to the linker
IMPORT sinx ; Function sinx defined in another file
ENTRY ; Mark the entrance to the entire program
__main PROC ; PROC marks the beginning of a subroutine
... ; Assembly program starts here.
BL sinx ; Call the sinx function
ENDP ; Mark the end of a subroutine
END ; Mark the end of a program
- The EXPORT directive declares a symbol and makes it visible to the linker.
- The IMPORT directive declares a symbol that is not defined locally in the current assembly file.
- IMPORT is similar to the “extern” keyword in C.
Directive: EQU
The EQU directive associates a symbolic name with a numeric constant. Similar to #define in a C program, EQU can be used to define a constant in assembly code.
Example:
MyConstant EQU 1234 ; Define the symbolic constant MyConstant as 1234
MOV R0, #MyConstant ; Constant 1234 placed in R0
Directive: ALIGN
AREA example, CODE, ALIGN = 3 ; Memory address begins at a multiple of 8
ADD r0, r1, r2 ; Instructions start at a multiple of 8
AREA myData, DATA, ALIGN = 2 ; Address starts at a multiple of four
a DCB 0xFF ; The first byte of a 4-byte word
ALIGN 4, 3 ; Align to the last byte of a word
b DCB 0x33 ; Set the fourth byte of a 4-byte word
c DCB 0x44 ; Add a byte to make next data misaligned
ALIGN ; Force the next data to be aligned
d DCD 12345 ; Skip three bytes and store the word
ALIGN is generally used as in this example, to align a variable to its data type.
Directive: INCLUDE or GET
- The INCLUDE or GET directive includes an assembly source file within another source file.
- It is useful to include constant symbols defined by using EQU and stored in a separate source file.
```assembly
INCLUDE constants.s ; Load Constant Definitions
AREA main, CODE, READONLY
EXPORT __main
ENTRY
__main PROC
...
ENDP
END
```
---
An IR-based Evaluation Framework for Web Search Query Segmentation
Rishiraj Saha Roy and Niloy Ganguly (IIT Kharagpur, India)
Monojit Choudhury and Srivatsan Laxman (Microsoft Research India)
Query Segmentation
- Dividing a query into individual semantic units (Bergsma and Wang, 2007)
- Example
- *history of all saints church south australia* →
- *history of| all saints church | south australia* ✔
- *history of all | saints church south | australia* ✗
- Goes beyond multiword named entity recognition (gprs config, history of, how to)
- Helps in better query understanding
- Can improve IR performance (Bendersky et al. 2009; Li et al. 2011)
- This research: Focus on evaluation, not on algorithm
Evaluation till now
- An algorithm segments each query in test set
- A segmented query is matched against the human annotated query using five metrics (Hagen et al. 2011)
Evaluation till now
- **Segment Precision** – Fraction of machine segments that match with the human segments
- **Segment Recall** – Fraction of human segments that match with the machine segments
- **Segment F-Score** – Harmonic mean of precision and recall
- **Query Accuracy** – Fraction of queries where machine and human segmentations match exactly
- **Classification Accuracy** – Fraction of boundaries and non-boundaries that match between human and machine segmentations
Problems
- Low inter-annotator agreement on most metrics (≈ 70%) (Tan and Peng 2008)
- Human A: grand theft auto | san andreas | ps2 | cheats
- Human B: grand theft auto san andreas | ps2 cheats
- Not clear what should be the guidelines
Problems
Humans may not be the best judge as to which segments are best for IR – Humans are not the end users of segmentation!!
End user of segmentation is the search engine
An IR performance based evaluation
Main challenge: how to use segmented query for retrieval
Different segments of the same query may need to be matched differently in documents for the best results.
- **Ordered** (*windows 7*)
- **Unordered** (may have linguistic constraints) (*files in word*)
- **Insertions, deletions, transpositions, substitutions** (*cannot properly view*)
- **MRF models of term dependence** (Metzler and Croft, 2005)
- **Certain segments need not be matched at all** (*view online, cheap, near*)
Current IR engines do not support these specifications
Most retrieval systems support use of double quotes (exact match)
However, simply putting double quotes around all query segments results in very poor retrieval performance!!
Hagen et al. (2011) explore an evaluation with quotes around all segments, effective only for MWEs and negatively affecting overall results
We adopt a less constrained approach
For each segmentation algorithm output, we generate all quoted versions of segmented query \( q^s \) (each segment can be quoted or unquoted)
\( 2^k \) quoted versions for a \( k \)-segment query
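A sketch of this enumeration in C; the segment strings and the bit-mask encoding are illustrative, and the framework itself is not tied to any particular implementation:

```c
#include <stdio.h>

/* Print all 2^k quoted versions of a k-segment query:
   bit i of mask decides whether segment i is wrapped in double quotes. */
static void print_quoted_versions(const char *segments[], int k)
{
    for (unsigned mask = 0; mask < (1u << k); mask++) {
        for (int i = 0; i < k; i++) {
            if (mask & (1u << i))
                printf("\"%s\"", segments[i]);
            else
                printf("%s", segments[i]);
            printf(i + 1 < k ? " " : "\n");
        }
    }
}

int main(void)
{
    /* The example segmentation used throughout the talk. */
    const char *segments[] = { "history of", "all saints church", "south australia" };
    print_quoted_versions(segments, 3);   /* prints 2^3 = 8 versions */
    return 0;
}
```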
## Proposed Evaluation Framework
<table>
<thead>
<tr>
<th>Segmented query</th>
<th>Quoted versions</th>
</tr>
</thead>
<tbody>
<tr><td rowspan="8">history of | all saints church | south australia</td><td>history of all saints church south australia</td></tr>
<tr><td>history of all saints church “south australia”</td></tr>
<tr><td>history of “all saints church” south australia</td></tr>
<tr><td>history of “all saints church” “south australia”</td></tr>
<tr><td>“history of” all saints church south australia</td></tr>
<tr><td>“history of” all saints church “south australia”</td></tr>
<tr><td>“history of” “all saints church” south australia</td></tr>
<tr><td>“history of” “all saints church” “south australia”</td></tr>
</tbody>
</table>
- Each version issued through IR engine (after query versions are deduplicated)
- IR system retrieves top $k$ pages for each quoted version of a query
- Measure performance (eg. nDCG) of each quoted version (using human relevance judgments)
### Proposed Evaluation Framework
<table>
<thead>
<tr>
<th>Segmented query</th>
<th>Quoted versions</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr><td rowspan="8">history of | all saints church | south australia</td><td>history of all saints church south australia</td><td>0.723</td></tr>
<tr><td>history of all saints church “south australia”</td><td>0.788</td></tr>
<tr><td>history of “all saints church” south australia</td><td>0.801</td></tr>
<tr><td>history of “all saints church” “south australia”</td><td></td></tr>
<tr><td>“history of” all saints church south australia</td><td>0.632</td></tr>
<tr><td>“history of” all saints church “south australia”</td><td>0.645</td></tr>
<tr><td>“history of” “all saints church” south australia</td><td>0.652</td></tr>
<tr><td>“history of” “all saints church” “south australia”</td><td>0.619</td></tr>
</tbody>
</table>
Use of **Oracle**: Highest nDCG from all quoted versions chosen as score achieved by $q^s$
- Reflects “potential” of a segmented query
- Directly correlates to goodness of segmentation algorithm
For each algorithm, compute average oracle score over all queries.
Find gold standard for IR performance: Also perform brute force exhaustive search over all possible quoted versions of a query to find the one with the highest score.
Call it the best quoted version (BQV (BF)) of a query, irrespective of any segmentation algorithm.
- $2^{n-1}$ quoted versions for an $n$-word query.
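The oracle step itself (and the BQV (BF) search, which simply ranges over all $2^{n-1}$ versions) is just a maximum over the per-version retrieval scores. A minimal C sketch, with placeholder scores standing in for the measured nDCG values:

```c
#include <stdio.h>

/* Oracle score for one segmented query: the best retrieval score
   (e.g., nDCG@10) achieved by any of its quoted versions.
   Averaging this over all test queries scores a segmentation algorithm. */
static double oracle_score(const double *scores, int n_versions)
{
    double best = scores[0];
    for (int i = 1; i < n_versions; i++)
        if (scores[i] > best)
            best = scores[i];
    return best;
}

int main(void)
{
    /* Placeholder per-version scores for a single 3-segment query. */
    double scores[8] = { 0.72, 0.79, 0.80, 0.76, 0.63, 0.65, 0.65, 0.62 };
    printf("oracle score = %.2f\n", oracle_score(scores, 8));
    return 0;
}
```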
Resources Required by Framework
- Any search engine that supports double quotes (Lucene in our experiments)
- Test set of queries
- Document pool
- Query relevance sets (*qrels*): For each query, human relevance judgments for the subset of documents in the pool possibly relevant to the query
- These resources are required for any IR-system evaluation
Dataset
- Query test set
- 500 test queries (5-8 words) sampled from Bing Australia in May 2010
- Document collection
- All possible quoted versions of a test query are issued through the Bing API 2.0
- Top 10 URLs retrieved are deduplicated and added to collection
Relevance judgments
- For each query, three sets of relevance judgments obtained for each URL retrieved for the query
- Much higher agreement on relevance judgments than human segment boundaries
Experiments
- Six segmentation strategies compared on our framework (including four state-of-the-art systems)
- Li et al. (SIGIR 2011), Hagen et al. (WWW 2011), Mishra et al. (WWW 2011), Mishra et al.+Wiki (SIGIR 2012)
- Baselines: PMI-W, PMI-Q
- Plus annotations by three human annotators A, B, C
Results
IR Performance of Strategies
(Bar chart of nDCG@10 for: BQV (BF), Human A, Human B, Mishra et al.+Wiki, PMI-Q, Hagen et al., Human C, Li et al., Mishra et al., PMI-W, Unsegmented.)
- Segmentation helps!
- No statistically significant difference!!
- Humans not the best!!
Results
IR Performance of Strategies
<table>
<thead>
<tr>
<th>Strategy</th>
<th>nDCG@10</th>
</tr>
</thead>
<tbody>
<tr>
<td>BQV (BF)</td>
<td>0.85</td>
</tr>
<tr>
<td>Human A</td>
<td>0.8</td>
</tr>
<tr>
<td>Human B</td>
<td>0.8</td>
</tr>
<tr>
<td>Mishra et al.+Wiki</td>
<td>0.8</td>
</tr>
<tr>
<td>PMI-Q</td>
<td>0.75</td>
</tr>
<tr>
<td>Hagen et al.</td>
<td>0.75</td>
</tr>
<tr>
<td>Human C</td>
<td>0.75</td>
</tr>
<tr>
<td>Li et al.</td>
<td>0.75</td>
</tr>
<tr>
<td>Mishra et al.</td>
<td>0.7</td>
</tr>
<tr>
<td>PMI-W</td>
<td>0.7</td>
</tr>
<tr>
<td>Unsegmented</td>
<td>0.6</td>
</tr>
</tbody>
</table>
Room for improvement!!
Results
- Kendall-Tau between rankings derived
- IR-performance and Matching Metrics (Humans as reference): 0.75
- Crucial rank inversions for certain pairs when performances compared (Li et al. and PMI-Q)
- IR-performance and Matching Metrics (BQV (BF) as reference): – 0.85
- Issues with metrics!
Algo. 1: history | of | all saints | church | south australia
Algo. 2: history of all | saints church south | australia
Human: history of | all saints church | south australia
IR-performance: Algo. 1 > Algo. 2
Matching metrics: Algo. 1 ≈ Algo. 2
- Sub-, super- and straddle – same penalty for all!
Multiword Segment Analysis
(Stacked bar chart: for each strategy – Hagen et al., Human B, Human A, Mishra et al.+Wiki, Human C, Li et al., PMI-Q, Mishra et al., BQV (BF), PMI-W – the number of queries out of 500 containing 0, 1, 2, 3, or 4 multiword segments.)
- Majority of queries have one multiword segment – gold standard
- Most queries have two multiword segments – best in IR
- Fewer queries have two multiword segments – IR performance lower
- Almost no multiword segments – IR performance poorest
Observations
- Human as well as all algorithmic segmentation schemes consistently outperform unsegmented queries.
- Performance of some segmentation algorithms is comparable to, and sometimes even marginally better than, that of some of the human annotators.
- Considerable scope for improving IR performance through better segmentation (all values less than BQV (BF)).
Insights
- Segmentation is helpful for IR
- Human segmentations are a good proxy, but not a true gold standard
- Matching metrics are misleading – no differential penalties
- Distribution of multiword segments across queries gives insights about effectiveness of strategy
- Vital for algorithms to detect multiword segments that are important for IR – output should allow the BQV(BF) to be generated
Final words
- Dataset used for all experiments publicly shared at
[http://cse.iitkgp.ac.in/resgrp/cnerg/qa/querysegmentation.html](http://cse.iitkgp.ac.in/resgrp/cnerg/qa/querysegmentation.html)
- Acknowledgements:
- ACM SIGIR Student Travel Support and the Donald B. Crouch Travel Grant
- Microsoft Research Ph.D. Fellowship
- Matthias Hagen (Bauhaus Universitat Weimar) for providing us with the segmentation output of his segmentation algorithm (Hagen et al., 2011)
- Kuansan Wang and Bo-June (Paul) Hsu (Microsoft Research Redmond) for sharing the code for their segmentation algorithm (Li et al., 2011)
Questions?
- 500 queries resulted in 4,476 quoted versions (approx. 9 per query)
- Fetched 14,171 unique URLs (approx. 28 per query, 3 per quoted version)
- On average, adding the 9th strategy to a group of the remaining eight resulted in about one new quoted version for every two queries
- These new versions may or may not introduce new documents to the pool
For 71.4% of the queries there is less than 50% overlap between the top ten URLs retrieved for the different quoted versions.
- BQV stands for the best quoted version. The highest value in a row (excluding the BQV column) and those with no statistically significant difference with the highest value are marked in boldface. The values for algorithms that perform better than or have no statistically significant difference with the minimum of the human segmentations are marked with *. The paired t-test was performed and the null hypothesis was rejected if the p-value was less than 0.05.
<table>
<thead>
<tr>
<th>Metric</th>
<th>Unseg</th>
<th>[13]</th>
<th>[8]</th>
<th>[16]</th>
<th>[16] + Wiki</th>
<th>PMI-W</th>
<th>PMI-Q</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>BQV</th>
</tr>
</thead>
<tbody>
<tr>
<td>nDCG@5</td>
<td>0.688</td>
<td>0.752*</td>
<td>0.763*</td>
<td>0.745</td>
<td>0.771*</td>
<td>0.691</td>
<td>0.766*</td>
<td>0.770</td>
<td>0.768</td>
<td>0.759</td>
<td>0.802*</td>
</tr>
<tr>
<td>nDCG@10</td>
<td>0.701</td>
<td>0.756*</td>
<td>0.767*</td>
<td>0.751</td>
<td>0.771*</td>
<td>0.704</td>
<td>0.767*</td>
<td>0.770</td>
<td>0.768</td>
<td>0.763</td>
<td>0.813*</td>
</tr>
<tr>
<td>MAP@5</td>
<td>0.882</td>
<td>0.930*</td>
<td>0.942*</td>
<td>0.930*</td>
<td>0.946*</td>
<td>0.884</td>
<td>0.932*</td>
<td>0.944</td>
<td>0.942</td>
<td>0.936</td>
<td>0.950*</td>
</tr>
<tr>
<td>MAP@10</td>
<td>0.865</td>
<td>0.910*</td>
<td>0.921*</td>
<td>0.910*</td>
<td>0.924*</td>
<td>0.867</td>
<td>0.912*</td>
<td>0.923</td>
<td>0.921</td>
<td>0.916</td>
<td>0.935*</td>
</tr>
<tr>
<td>MRR@5</td>
<td>0.538</td>
<td>0.632*</td>
<td>0.649*</td>
<td>0.609</td>
<td>0.657*</td>
<td>0.543</td>
<td>0.648*</td>
<td>0.656</td>
<td>0.648</td>
<td>0.632</td>
<td>0.716*</td>
</tr>
<tr>
<td>MRR@10</td>
<td>0.549</td>
<td>0.640*</td>
<td>0.658*</td>
<td>0.619</td>
<td>0.665*</td>
<td>0.555</td>
<td>0.656*</td>
<td>0.665</td>
<td>0.656</td>
<td>0.640</td>
<td>0.724*</td>
</tr>
</tbody>
</table>
The highest values in a row with no statistically significant differences between each other are marked in boldface. The values for algorithms that perform better than or have no statistically significant difference with the minimum of the values for human segmentations are marked with *. The paired t-test was performed and the null hypothesis was rejected if the p-value was less than 0.05.
**Performance** of state-of-the-art schemes against manual segmentations (Bing test set)
Crucial inversions of ranks of PMI-Q and [13]
<table>
<thead>
<tr>
<th>Metric</th>
<th>Unseg</th>
<th>[13]</th>
<th>[8]</th>
<th>[16]</th>
<th>[16] + Wiki</th>
<th>PMI-W</th>
<th>PMI-Q</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>BQV</th>
</tr>
</thead>
<tbody>
<tr>
<td>Qry-Acc</td>
<td>0.000</td>
<td>0.375</td>
<td>0.602*</td>
<td>0.167</td>
<td><strong>0.749</strong>*</td>
<td>0.000</td>
<td>0.341</td>
<td>0.631</td>
<td>0.686</td>
<td>0.589</td>
<td>0.065</td>
</tr>
<tr>
<td>Seg-Prec</td>
<td>0.043</td>
<td>0.524</td>
<td>0.697*</td>
<td>0.350</td>
<td><strong>0.803</strong>*</td>
<td>0.036</td>
<td>0.448</td>
<td>0.691</td>
<td>0.741</td>
<td>0.682</td>
<td>0.140</td>
</tr>
<tr>
<td>Seg-Rec</td>
<td>0.076</td>
<td>0.588</td>
<td>0.713*</td>
<td>0.447</td>
<td><strong>0.785</strong>*</td>
<td>0.059</td>
<td>0.487</td>
<td>0.714</td>
<td>0.766</td>
<td>0.723</td>
<td>0.170</td>
</tr>
<tr>
<td>Seg-F</td>
<td>0.055</td>
<td>0.554</td>
<td>0.705*</td>
<td>0.392</td>
<td><strong>0.794</strong>*</td>
<td>0.045</td>
<td>0.467</td>
<td>0.702</td>
<td>0.753</td>
<td>0.702</td>
<td>0.153</td>
</tr>
<tr>
<td>Seg-Acc</td>
<td>0.404</td>
<td>0.810</td>
<td>0.885</td>
<td>0.748</td>
<td><strong>0.927</strong>*</td>
<td>0.411</td>
<td>0.810</td>
<td>0.892</td>
<td>0.913</td>
<td>0.893</td>
<td>0.654</td>
</tr>
</tbody>
</table>
Table 7: IR-based evaluation using Bing API.
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>nDCG@10</td>
<td>0.882</td>
<td>0.823</td>
<td>0.989*</td>
</tr>
<tr>
<td>MAP@10</td>
<td>0.366</td>
<td>0.352</td>
<td>0.410*</td>
</tr>
<tr>
<td>MRR@10</td>
<td>0.541</td>
<td>0.515</td>
<td>0.572*</td>
</tr>
</tbody>
</table>
The highest value in a row is marked **bold**. Statistically significant ($p < 0.05$ for paired $t$-test) improvement over the unsegmented query is marked with *.
---
Expanding the Notion of Answer in Rule-Based Systems
Topic Area: Representational Formalisms
Keywords: Question Answering
Debra T. Burhans and Stuart C. Shapiro
Department of Computer Science and Engineering
and Center for Cognitive Science
State University of New York at Buffalo
226 Bell Hall
Buffalo, NY 14206-2000
[email protected], [email protected]
November 1, 1999
1 Introduction
The traditional notion of a question in AI is an open sentence $Q(x_1..x_n)$, and the traditional notion of an answer is a set of $a_1..a_n$ such that $Q(a_1..a_n)$. More recently, this notion of answer has been extended to generic answers of the form
$$\forall (x_1..x_n) [G(x_1..x_n) \Rightarrow Q(x_1..x_n)].$$
We further extend the notion of answer to include hypothetical/generic answers of the form
$$(\exists (w_1..w_{n_1})\, H(w_1..w_{n_1}))
\Rightarrow \forall (x_1..x_{n_2}, y_1..y_{n_3})\, [G(x_1..x_{n_2}, y_1..y_{n_3})
\Rightarrow \forall (z_1..z_{n_4})\, Q(x_1..x_{n_2}, z_1..z_{n_4})].$$
We formally show how every clause generated during the course of a refutation resolution procedure may be analyzed as a hypothetical/generic answer, as long as it descends from the query clause. Informally, in the above schema: $Q$, the specific part of the answer, represents literals that were part of the query; $G$, the generic part, represents literals that share variables with the $Q$ literals, or that share variables with other $G$ literals; and $H$, the hypothetical part, represents literals whose variables don’t occur in either $G$ or $Q$. Each part may also contain constants that were or weren’t part of the query.
2 Background
The role of resolution theorem proving in question answering was established by Cordell Green [6, 5] with the introduction of the answer literal. This literal contains the variables from the query and is added to the clause(s) corresponding to the negation of the query. If the resolution refutation procedure produces the empty clause, the variable bindings found along the way are captured in the answer literal.
The goal of resolution refutation in this case is the production of the empty clause: absent that, no answers will be produced. This is the approach taken by Prolog and resolution theorem provers. The type of answer produced using this approach is termed extensional, or specific, and the form of such an answer can be characterized as a set of \( a_1 \ldots a_n \) such that \( Q(a_1 \ldots a_n) \).
Cholvy and Demolombe [2, 3] expanded upon Green’s work by looking at resolution in a rule base with no ground terms. In this situation, the empty clause is never produced, yet answers in the form of rules rather than facts are discovered. Such answers are termed intensional or generic. The general form of these answers is:
\[
\forall (x_1 \ldots x_n) \left[ G(x_1 \ldots x_n) \Rightarrow Q(x_1 \ldots x_n) \right].
\]
Motro [7, 8, 9] has examined the problem of intensional answering in the context of databases. His databases contained both rules and facts and his answers could contain elements of both extensional and intensional answers, in which case he used the term *mixed*.
While generic answers have been described in the AI and database literature, applications that use resolution refutation are focused on the search for the empty clause, and hence, carry on the tradition of specific question answering. In addition, specific answering is the paradigm found in introductory AI textbooks [12, 11, 10, 4].
## 3 Recognizing Specific and Generic Answers
The criterion for recognizing specific answers is clear: when the empty clause is produced during the course of a resolution refutation proof, the variable bindings in effect at that time comprise a specific answer. If the option to continue a proof beyond the point where the empty clause is derived exists, the next resolution step that derives the empty clause produces another specific answer, and so on.
The criteria for recognizing a generic answer arise out of a concern for the relevance of an answer. If a clause contains an answer literal and one or more additional literals and the variables in the answer literal overlap completely with the variables in the other literals, it is clear that all the literals in the clause are relevant to the query. A clause with this form constitutes a generic answer. The form of the generic answer will be the conjunction of
the negation of the non-answer literals followed by an implication symbol, followed by the answer literal. Thus, the characterization of generic answers as rules. The variables in a generic answer, shared by the answer and non-answer literals, are assumed to be universally quantified.
Generic answers are generated along the way to finding specific answers. In case there are no specific answers, there may still be generics. The question then becomes: what is generated along the way to finding generic answers, and do these resolvents represent some other type of answer?
4 Hypothetical Answers
Consider the clauses that represent neither generic nor specific answers. Such clauses, provided they descend from the original query, contain at least one answer literal along with a non-empty set of non-answer literals that do not share variables with the answer literals. Such clauses are termed hypothetical answers [ref to our fall symp paper]. The interpretation by some other researchers of these clauses is that they are uninteresting because they will be subsumed by generic or specific answers. That is, additional literals containing variables not in the answer literals have been regarded as either not relevant, or not interesting because they will later be subsumed. A clause containing such additional literals can be represented as follows: the negation of the conjunction of the "extra" literals can be taken as the left hand side of an implication, where the right hand side of the implication is either a generic or a specific answer.
The following example (Example 1.) is presented in order to illustrate hypothetical answers, and how they relate to specific and generic answers. Consider this simple rule base:
all calicos are cats
fluffy is a calico
rover is a horse or rover is a dog
calicos like dogs
calicos do not like horses
and the question, *is there something that fluffy likes?*
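In clause form, using a standard clausification and the answer-literal construction described in Section 6.1 (the encoding here is only a sketch; the paper's system may represent the clauses differently), the rule base and the negated query are:

\[
\begin{array}{ll}
1. & \neg \text{CALICO}(x) \lor \text{CAT}(x) \\
2. & \text{CALICO}(\text{FLUFFY}) \\
3. & \text{HORSE}(\text{ROVER}) \lor \text{DOG}(\text{ROVER}) \\
4. & \neg \text{CALICO}(x) \lor \neg \text{DOG}(y) \lor \text{LIKES}(x, y) \\
5. & \neg \text{CALICO}(x) \lor \neg \text{HORSE}(y) \lor \neg \text{LIKES}(x, y) \\
6. & \neg \text{LIKES}(\text{FLUFFY}, z) \lor \text{ANSWER}(\text{LIKES}(\text{FLUFFY}, z)) \quad \text{(negated query with answer literal)}
\end{array}
\]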
The following comprise the answers in the order produced. Each is followed by a brief explanation.
The first answer produced can be glossed as *if fluffy is a calico, fluffy likes dogs*. The hypothetical portion of the answer contains only constants, and the generic and specific portions of the answer share a variable. The constant fluffy is also found in the specific part of the answer. It is variable sharing that structures the answer, regardless of the presence of constants.
The second answer produced is:
\[(\forall x ((\text{DOG } x) \Rightarrow (\text{LIKES FLUFFY } x))) \Rightarrow (\text{LIKES FLUFFY } ?x0)\]
A gloss of this is *fluffy likes dogs*. This answer shows what happens when the hypothetical portion of the previous answer has been “discharged”, that is, the hypothetical portion of the answer has been eliminated by the process of resolution.
The third answer is:
\[((\text{CALICO FLUFFY}) \,\&\, (\neg (\text{HORSE ROVER}))) \Rightarrow (\text{LIKES FLUFFY ROVER})\]
A gloss for this answer is *if fluffy is a calico and rover is not a horse, then fluffy likes rover*. There is no generic part to this answer, and there are no variables. The specific part of the answer is simply the answer literals, and the rest of the literals, unrelated by variable sharing, comprise the hypothetical portion.
A fourth answer is:
\[(\neg (\text{HORSE ROVER})) \Rightarrow (\text{LIKES FLUFFY ROVER})\]
A gloss for this is *if rover is not a horse, then fluffy likes rover*. This is simply the third answer with the hypothetical discharged, as described above: it is known that fluffy is a calico.
Finally, a seemingly unusual answer is produced:
\[(\exists x ((\text{CALICO } x) \,\&\, (\text{LIKES } x\ \text{ROVER}))) \Rightarrow (\text{LIKES FLUFFY ROVER})\]
It was in fact this example that led directly to our general formulation of the form of an answer. In an older system, this was rejected as not relevant. A gloss is *if there is a calico that likes rover, then fluffy likes rover*. This is a perfectly reasonable and correct answer, though it falls far from what we commonly consider as an answer.
5 A General Characterization of an Answer
Based on the foregoing discussion, it is clear that the literals appearing in a clause can be partitioned into three groups. First, the answer literals, second, the non-answer literals involved in generic answers, and third, the non-answer literals that have no variables in common with either the answer literals or the generic literals. While the first and third of these groups can be easily identified, the second must be carefully characterized. The collection of literals “involved in generic answers” can be more precisely defined as follows. Included in this set are all literals in the variable closure of the answer literals, where this closure is defined as: Literals that share any variable with some answer literal are in the set, and in addition, literals that share any variable with any literal in the set are included.
We apply the terms specific, generic, and hypothetical to the three groups of literals characterized above, where the specific portion of an answer corresponds in most respects to what has previously been termed specific, and similarly for the generic portion of an answer. We expand the generic portion of the answer by including the closure of the variables in the answer rather than simply the variables. The hypothetical portion of the answer has been described in [1]. The following form shows how the different parts of the answer fit together:
\[
(\exists (w_1 \ldots w_{n_1})\, H(w_1 \ldots w_{n_1})) \\
\implies \forall (x_1 \ldots x_{n_2}, y_1 \ldots y_{n_3})\, [G(x_1 \ldots x_{n_2}, y_1 \ldots y_{n_3}) \\
\implies \forall (z_1 \ldots z_{n_4})\, Q(x_1 \ldots x_{n_2}, z_1 \ldots z_{n_4})].
\]
In addition, each part may also contain constants, where these constants may or may not have been part of the original query.
It is clear that specific, generic and mixed answers as previously defined fit into this framework. In the case of specific answers, only the rightmost term is present (the specific portion), and it contains only constants, no variables.
Generic answers comprise the middle and rightmost terms (the generic and specific portions), and contain only variables that appear both in the generic and specific parts. Mixed answers are simply generics in which constants appear in either the generic or specific part. Hypothetical answers are those containing a hypothetical component. The only required component of an answer is the specific portion, reflecting the fact that the answer literals are central to this process of question answering. It is the specific portion that connects the products of resolution with the original query.
6 All Resolvants that Descend from the Original Query are Answers
The procedure for using resolution refutation as a question answering mechanism involves adding an answer literal to the negation of the original query (which might yield more than one clause, meaning each would contain an answer literal). Using the set of support strategy, and setting the initial set of support to the negation of the query, resolution begins by looking for a clause to resolve with the negation of the query. The only way for a clause to be added to the set of support is for it to be the resolvent of some clause from the set of support with some other clause. The answer literals are completely ignored by the resolution process and are merely "carried along" in clauses with the other literals. In this variant of resolution refutation, the empty clause is, in fact, a clause with only answer literals.
More formally:
The set of support strategy guarantees that all clauses generated during the course of a resolution refutation proof descend from the set of support. Namely, each new resolvent has one parent that is a supported clause.
The set of support initially contains the negation of the query with the added answer literal(s).
The first resolvent produced will have as a parent the negated query clause, which contains an answer literal. Since the answer literal is ignored by resolution, the resolvent will contain the intact answer literal inherited from the parent, with updated variable names as necessary to resolve the clauses.
This resolvent is placed in the set of support, which clearly still contains
only clauses containing an answer literal.
Each step of resolution repeats this process, selecting one clause from the set of support, and another from the set of clauses in the rule base.
Therefore it is impossible to produce a resolvant that does not contain an answer literal.
Therefore producing resolvants in this manner means that every resolvant can be considered an answer. The presence of the answer literal ensures that every resolvant is descended from the original query, which was initially the only clause(s) containing an answer literal. In addition, it is clear that all such resolvants are relevant to the query.
6.1 A New Form for the Answer Literal
The way in which we construct the answer literal differs from past approaches. Consider a query $P(x_1, \ldots, x_n)$. Rewrite the query as the antecedent of the answer literal as follows:
$$\text{All}(x_1, \ldots, x_n)[P(x_1, \ldots, x_n) \Rightarrow \text{ANSWER}(P(x_1, \ldots, x_n))]$$
In this form, the query is negated, and this can now be converted to clause form and added to the rule base.
7 Relationships Between Hypotheticals, Generics, and Specifics
While in one sense it is reasonable to view as hypotheticals those answers found along the way to generics or specifics, and generics as those answers found along the way to specifics, the relationship between hypotheticals and the other types of answers is fundamentally different than that between generics and specifics.
It is better to know whether something is or is not the case rather than to be left with uncertainty. Therefore, there is a sense in which the task of settling the question of whether the hypothetical portion is in fact the case is of critical importance. This will be termed “discharging the hypothetical”.
On the other hand, there is no analogous “discharging the generic” process, precisely because a generic is a desirable answer and represents information contained in the knowledge base. The only reason pursue a more
specific answer, which in a sense “discharges” the generic, is when the goal is to obtain a specific rather than a generic answer. There is no existential presupposition associated with generics, so answers such as “all floobles squonk” carries no entailment about the existence of floobles. There are clearly cases when a specific answer is desired, and in such cases generics should be discharged as quickly as possible.
A generic captures what a set of specific answers have in common, and does so often in a succinct and clear manner. The same simply can not be said for a hypothetical.
8 The “Discharging Hypotheticals” Search Strategy
The example given above expressed, in the hypothetical portion of the answer, the question of whether or not fluffy was a calico. Once a hypothetical has been discharged, the information should not simply be forgotten, only to be retrieved from scratch at a later time. A search strategy of “discharging hypotheticals” is proposed that will serve two purposes. First, information relevant to hypotheticals that have already been discharged will be cached for easy later retrieval. Note that this is analogous to the indexing of predicates performed by Prolog. Second, clauses in the set of support will be ordered so that those with top priority for resolution will be the hypotheticals.
The set of support is an ordered list of clauses, where all hypotheticals come at the beginning of the list, and within the hypothetical and non-hypothetical portions of the list any ordering desired can be implemented, such as shortest clause first, most recently generated clause first, etc.
An outline of the search strategy is as follows:
choose clause to resolve from the front of the supported clauses list
if clause is a hypothetical
check the list comprising the cached information of hypotheticals already resolved to see if the hypothetical portion of the clause can be immediately discharged
if hypothetical can be discharged, do so, and place
the resolvant back in the list of supported clauses in the appropriate position
else
try to resolve just the hypothetical portion of the clause with other clauses in the rule base
in case of success place the resolvant back in the list of supported clauses as described above for use in subsequent resolution
else
proceed with resolution as usual, placing resolvant(s) in the appropriate positions in the list of supported clauses
The motivation for developing this strategy arises from the need to provide good answers to questions, but the effect may be beneficial for other problems that use resolution refutation as a reasoning strategy.
When the hypothetical concerns attributing properties to some object, the act of creating a list of properties associated with that objects starts to acquire the flavor of description logic, only the association of properties with objects is driven by particular queries and is not fundamental to the data structure.
For the small problems on which we have tested this strategy there has not been a notable difference in performance of the system. However, we are planning more extensive experiments where this strategy can be compared with other standard search strategies.
The list of information used to discharge the hypotheticals may itself be interesting. That is, knowing that the fact that fluffy was a calico was critically important in answering a particular question might help in the formation of future queries, including possibly reformulating questions so that not as many hypotheticals are generated.
9 Information from Hypotheticals
A specific answer is a witness that proves the truth of an existential hypothesis: we have considered questions having this form. If you ask about dogs, and Fido is a known dog, Fido will be involved in your answer. A generic
answer is a rule, capturing generalities about classes or groups of objects. What, then, is the purpose or utility of a hypothetical answer?
The hallmark of the hypothetical answer is the way in which information belonging to the hypothetical portion is identified. That is, variables not shared with the answer literal, nor in common with those in the generic portion of the answer. If information to discharge the hypothetical is not available in the rule base, this indicates an information deficiency, or underspecification of the question. In some cases the hypothetical can serve as a useful tool to a rule base designer. For example, if you are trying to prove a theorem and the answer you get back is a hypothetical rather than the expected “yes” or “no”, it may be a sign that an important piece of information has been left out. In other cases you may purposely want to ask an underspecified question. For example, if you query a rule base about restaurants, and don’t specify which sort of cuisine, hypothetical answers could include information such as if you like French food, go to Cafi Boeuf.
10 What is “the answer”?
It would seem annoying and uninteresting to return as answers clauses that have been subsumed by other clauses. On the other hand, specifics subsume generics, and both might be interesting and useful answers. The notion of best answer is certainly relative to the person asking the question. If a preference for a particular type of answer, or a desire to avoid certain types of answers is expressed, this can be built into a system that is designed to produce general answers.
10.1 Most General Answer
Generality can be defined in terms of the subsumption relation. Clauses with more specific information subsume those that are more general. Thus specifics subsume generics, which subsume hypotheticals. According to this rubric, a hypothetical is the most general answer, a generic is less general and a specific is not general. The most general answer [need acronym?] can be defined as follows: the conjunction of the hypothetical answers that are neither subsumed by other hypotheticals nor subsumed by generics, the generic answers that do not subsume any other generic answers, and the specific
answers that do not subsume any other answers. The non-subsumed hypothetical answers that are also not subsumed by generics are those with two properties: first, their hypotheticals have not been discharged (they are not subsumed by generics), and second, they are the most specific hypotheticals (they are not subsumed by other hypotheticals). While this is a most general answer, it is critical that any answer reflect what is known, and the most specific hypothetical answer does this better than a more general hypothetical which it subsumes. Hypotheticals that are part of the most general answer indicate a true information deficiency as described above. Generic answers that do not subsume other generic answers are those at the "top" of the hierarchy in terms of generality. Specific answers that do not subsume any other answers are the most general answers possible given the lack of generic answers.
10.2 Most Specific Answer
Similarly, a most specific answer can be defined as the conjunction of the following: the hypothetical answers that are neither subsumed by other hypotheticals nor subsumed by generics, the generic answers that are not subsumed by any other answers, and the specific answers. The criteria for including hypothetical answers is the same as for the most general answer. This reflects the fact that such answers reflect a lack of information, and the hypothetical answers included will be the most specific characterizations of that lack. When a generic answer is subsumed by another answer it means there is more specific information available, and it should not be included. Finally, specific answers are never subsumed.
A desired answer may be neither the most nor the least specific. It is not possible to determine the most or least specific answer until all resolvants descended from the query have been generated. This process may not terminate in case function symbols are included in the clauses.
If answers are produced as they are generated, which in some circumstances can be helpful and illuminating, it must be done with the knowledge that early answers may quickly be subsumed by later answers, which may lead to misunderstandings about how the information in the rule base is related. For example, if you have as answers, Chafic would like to eat a pastry, and Chafic would like to eat a napolean, you might have no idea that napoleons are pastry.
11 Summary
We have proposed a general characterization of answers in a rule base that extends the current notion of answer, and provides a framework for understanding previous types of answers. We have shown that all clauses descended from the query clause are indeed answers, despite the fact that many of them are disregarded by current systems, particularly Prolog and most theorem provers as well as many database systems.
In recognizing the importance of hypothetical answers, a new search strategy that focuses on “discharging” the hypothetical portion of answers has been proposed. This strategy has been employed for small problems, and larger experiments comparing it to other common search strategies are planned.
We have drawn attention to the fact that “answer” is by and large still identified with the “specific answer” proposed so long ago by Cordell Green [ref], despite the broadening of the definition by other researchers.
References
|
{"Source-Url": "http://www.cse.buffalo.edu/~shapiro/Papers/bursha99b.pdf", "len_cl100k_base": 5159, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 29827, "total-output-tokens": 6356, "length": "2e12", "weborganizer": {"__label__adult": 0.0005297660827636719, "__label__art_design": 0.0009479522705078124, "__label__crime_law": 0.0008273124694824219, "__label__education_jobs": 0.0157928466796875, "__label__entertainment": 0.00029349327087402344, "__label__fashion_beauty": 0.00041294097900390625, "__label__finance_business": 0.0007572174072265625, "__label__food_dining": 0.0007987022399902344, "__label__games": 0.001868247985839844, "__label__hardware": 0.0010166168212890625, "__label__health": 0.0011606216430664062, "__label__history": 0.0007610321044921875, "__label__home_hobbies": 0.0002760887145996094, "__label__industrial": 0.0008835792541503906, "__label__literature": 0.005725860595703125, "__label__politics": 0.0005631446838378906, "__label__religion": 0.0008287429809570312, "__label__science_tech": 0.404296875, "__label__social_life": 0.0003924369812011719, "__label__software": 0.0267181396484375, "__label__software_dev": 0.53369140625, "__label__sports_fitness": 0.0004515647888183594, "__label__transportation": 0.0010328292846679688, "__label__travel": 0.0002589225769042969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26008, 0.02398]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26008, 0.7722]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26008, 0.94031]], "google_gemma-3-12b-it_contains_pii": [[0, 387, false], [387, 2109, null], [2109, 4464, null], [4464, 6478, null], [6478, 8149, null], [8149, 10530, null], [10530, 12741, null], [12741, 14730, null], [14730, 16726, null], [16726, 18555, null], [18555, 20794, null], [20794, 23196, null], [23196, 24141, null], [24141, 25811, null], [25811, 26008, null]], "google_gemma-3-12b-it_is_public_document": [[0, 387, true], [387, 2109, null], [2109, 4464, null], [4464, 6478, null], [6478, 8149, null], [8149, 10530, null], [10530, 12741, null], [12741, 14730, null], [14730, 16726, null], [16726, 18555, null], [18555, 20794, null], [20794, 23196, null], [23196, 24141, null], [24141, 25811, null], [25811, 26008, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26008, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26008, null]], "pdf_page_numbers": [[0, 387, 1], [387, 2109, 2], [2109, 4464, 3], [4464, 6478, 4], [6478, 8149, 5], [8149, 10530, 6], [10530, 12741, 7], [12741, 14730, 8], [14730, 16726, 9], [16726, 18555, 10], [18555, 20794, 11], [20794, 23196, 12], [23196, 24141, 13], [24141, 
25811, 14], [25811, 26008, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26008, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-01
|
2024-12-01
|
7bc8ff8b30cf3b244ebb7a98569ea152406e5b7e
|
Research on the Necessity of Implementing DevOps Technologies in the Training of Future Computer Science Teachers
Sarthak Srivastava
1 Visa Inc, United States
ABSTRACT
In the article, the problem of implementing DevOps technologies in the education of future computer science teachers is explored. This issue has arisen due to the development and expansion of digital technologies and increased demands from stakeholders for future computer science teachers. Through scientific analysis and the systematic categorization of academic publications, the current state of DevOps technologies and their impact on the process of digitization and digitalization of society were examined. It was determined that the professional IT community actively adopts and promotes DevOps technologies. The analysis of publications revealed that there are currently few educational and professional programs that include the study of DevOps. Educational and professional programs in the field of "Secondary Education (Computer Science)" were specifically noted, with most of these programs lacking elements of DevOps. Modern directions for improving the content of school computer science courses focus on enhancing their practical orientation, and DevOps technologies can contribute to this. The study identified several substantive components of DevOps technologies that can be integrated into the preparation of computer science teachers, including infrastructure as code, configuration management, containers, container orchestration, infrastructure security, deployment pipelines, microservices architecture, post-production considerations, and domain-specific DevOps peculiarities. The inclusion of DevOps elements in the training of future computer science teachers should be based on stakeholder needs. Computer science teachers do not need to master all technical and technological aspects of DevOps implementation and application, but they should possess a sufficient level of professional competencies for successful employment. The results of the confirmatory experiment conducted in this study highlighted the necessity of teaching DevOps technologies to future computer science teachers. Stakeholders also identified the most relevant DevOps technologies for modern computer science teachers, including infrastructure as code, containers, and container orchestration.
Keywords: Secondary education, computer science teacher, educational program, DevOps, professional training.
1. Introduction
DevOps Professional training for future computer science teachers must align with contemporary trends in information technology, programming, and network technologies. Achieving this alignment is possible through continuous refinement and updating of the curriculum. One such technology is DevOps. Researchers A. Dyck, R. Penners, and H. Lichter provided the following definition of DevOps in their study (Dyck, Penners, Lichter, 2021): "DevOps is an organizational approach grounded in collaboration and cross-functional cooperation within and between development (programming) and IT operations teams in software-producing organizations to manage stable systems and accelerate change."
According to a survey conducted in 2021 (State of DevOps Report, 2021, p. 6), 78% of software development organizations have introduced elements of DevOps culture into their work, while 18% have fully implemented DevOps. There is currently a high demand for DevOps engineers. In 2022, according to a DevOps Institute survey, a shortage of IT skills was identified as a significant global issue, with 40% of respondents indicating that the scarcity of resources and skills is one of the top three challenges today. Additional research confirms a significant skills gap in the field of technology and IT worldwide (Oehrlich & Settle, 2022, p. 7). DevOps is on its way to becoming a necessary skill for IT practitioners. Therefore, elements of DevOps technologies should be integrated into the professional activities of modern computer science teachers in the near future. The professional IT community provides numerous blogs, webinars, conferences, communities, and organizations that seek to popularize, promote, support, and enhance DevOps. However, research in this field is relatively limited. The integration of DevOps technologies into the curriculum for IT majors already has some experience, but educational and professional programs dedicated to DevOps are relatively rare. If educators have limited knowledge of DevOps, students are unlikely to acquire the necessary DevOps skills for an advanced information society. Therefore, the current state of education for higher education students will not be able to meet the demand for numerous DevOps vacancies. Educational and professional programs for higher education students in the field of “14.09 Secondary Education (Computer Science)” do not provide opportunities for studying DevOps technology, as evidenced by the lack of scientific research on this issue. Thus, there is a contradiction between society's need for future computer science teachers to be knowledgeable about DevOps technologies and the absence of DevOps elements in the curriculum for students majoring in “14.09 Secondary Education (Computer Science).”
The research's aim is to analyze the problem and provide justification for the incorporation of DevOps elements into the curriculum for future computer science teachers.
**Research Methodology:**
The study employed content analysis of scientific publications related to the identified problem, as well as the synthesis and categorization of existing approaches to teaching DevOps to higher education students. Pedagogical confirmatory experiments were conducted to explore the requirements of stakeholders regarding the acquisition of DevOps skills by future computer science teachers. Descriptive statistical methods were used to analyze the results of the confirmatory experimental research.
**Research Findings:**
Unlike other topics in the field of information technology, DevOps is much more than just a technology. IT experts D.A. Tamburri and D. Perez-Palacin argue that DevOps represents a cultural, procedural, and technological movement (Tamburri & Perez-Palacin, 2018). Therefore, studying DevOps differs from learning any other technology. To meet the existing demand for introducing future computer science teachers to DevOps technology, it is necessary to develop theoretical foundations for the training of higher education students in the field of DevOps. Currently, there are almost no educational and professional programs that include DevOps disciplines. Despite the significant benefits that DevOps brings to software development (State of DevOps Report, 2021; Oehrlích & Settle, 2022; Forsgren & Humble, 2015), researchers do not use DevOps in their research projects involving software development. This is due to the high cost of implementing DevOps, which is not justified for research purposes. Companies hiring DevOps specialists do not have well-defined competencies for these professionals, even though DevOps technology is highly popular and in great demand. Employers expect DevOps professionals to have competencies in all related fields, which complicates the development of corresponding educational programs. In conclusion, the research emphasizes the necessity of integrating DevOps elements into the education of future computer science teachers, given the cultural, procedural, and technological significance of DevOps. It also highlights the existing gap between the demand for DevOps skills in the job market and the limited availability of DevOps education in current programs. The DevOps community is evolving rapidly, making it challenging to establish a unified approach to training professionals (Vicente & Cunha, 2022). Available DevOps certification programs, courses, conferences, and communities often struggle to reach a consensus on professional preparation in this field. To understand the prerequisites and the current state of DevOps application for automating software development processes, the study aimed to identify elements that could be integrated into the curriculum for future computer science teachers. Initially, software developers often moved towards modular integration closer to the end of a project, leading to frequent breakdowns and significant developer frustration (Duvall, Matyas & Glover, 2007). This situation prompted the adoption of Continuous Integration (CI), a practice that involves integrating modules throughout the project's lifecycle to avoid complex integration challenges and numerous errors in the project's final stages.
Most IT professionals associated the concept of CI with Extreme Programming (Tamburri & Perez-Palacin, 2018). Towards the end of the 1990s, the IT industry, particularly organizations with significant resources, recognized the necessity and value of implementing CI to address the rapidly growing complexity of software development. In the book "Microsoft Secrets" (Casumano & Selby, 1998), M.A. Casumano et al. described how Microsoft adopted CI and benefited from it by "developing everything in parallel, with frequent synchronizations." Microsoft referred to the model as synchronized and stabilizing, effectively implementing CI. During that period, IT professionals also used various terms to describe the synchronized model, including "daily build," "nightly build," or "zero defects." The first comprehensive study that fully described the practice and implementation of CI was published by P.M. Duvall, S. Matas, and A. Glover (Duvall, Matyas & Glover, 2007). The evolving landscape of DevOps, the complexity of its components, and its transformative impact on software development highlight the need for a structured approach to introducing DevOps concepts into the curriculum for future computer science teachers. Understanding the historical context and the development of practices such as CI can inform this process and help bridge the gap between industry demands and educational offerings.
### 2. Research on CI and CD in DevOps
The researchers identified six tasks within CI:
- Continuous integration of code.
- Continuous integration of databases.
- Continuous testing.
- Continuous inspection.
- Continuous delivery.
- Continuous feedback.
These six tasks primarily focused on integration and programming. N. Forsgren and J. Humble, in their work, studied the second part of DevOps technologies, which is Continuous Delivery (CD) (Forsgren & Humble, 2016). The authors described the impact of continuous delivery practices in organizations on existing technological processes. They argued that the use of CD positively affects technical professionals and enhances the software
delivery productivity, reducing software failures. The study also noted that CD has an indirect impact on overall organization efficiency through IT productivity. Therefore, DevOps is not just about technologies; it encompasses a culture of software development, shared practices, and automation that align development and IT operations teams to follow a unified approach for improving customer experience, responding quickly to business needs, and balancing innovation with security and information system administration requirements. The DevOps workflow is well-known to IT professionals and includes all tasks and processes of CI and CD. The CD process consists of two parts - deployment and release. In addition to CI, DevOps includes many other continuous processes, such as continuous planning, continuous utilization, continuous trust, continuous improvement, continuous innovation, and more. Technically, DevOps does not cover all continuous software development processes. The goal of DevOps is to unite the roles of developers and administrators to streamline the software delivery process and ensure collaboration throughout the software development lifecycle. This holistic approach improves software quality, accelerates development cycles, and enhances the overall efficiency of the organization.
Usage and Research Trends in DevOps:
DevOps is a part of software engineering that has gained significant popularity in the IT industry. Its popularity has grown among IT professionals over the last decade (State of DevOps Report, 2021) and has also captured the attention of researchers in the field (Tamburri D.A. & Perez-Palacin, 2018; Azad & Hyrynalsmi, 2022; Sánchez-Cifo, Bermejo & Navarro, 2023; Amaro, Pereira & Mira da Silva, 2022). However, the number of academic publications on the subject of DevOps is relatively low, especially within the Ukrainian research segment. The popularity of DevOps in the industry can be observed in three ways: publication trends, the creation of supporting tools, and a significant number of surveys with a large number of participants. Many communities, user groups, and DevOps blogs have been created to inform IT professionals about this subject (Lennon, 2023). The State of DevOps Report (2021) suggests that DevOps practices lead to increased productivity for IT companies. This, in turn, improves business outcomes measured by profitability and market share. Therefore, DevOps has a high return on investment, and companies are willing to invest in popularizing DevOps technologies. In Ukraine, examples of such companies are SoftServ and EPAM. The popularity of DevOps technologies worldwide is evident from the large number of conferences scheduled for 2023 (Best Upcoming DevOps Conferences in 2023, 2023). Now, let's examine DevOps tools and their usage practices to determine the potential for their incorporation into the curriculum for future computer science teachers. Given the high demand for DevOps in the industry, many companies and individuals are developing tools to support the Deployment Pipeline (Gall & Pigni). One such company is Digital.ai, which develops enterprise-scale DevOps tools. The company is a leader in the field of continuous delivery and software release automation. Overall, the growing popularity of DevOps, both in terms of adoption and research, indicates its significance in the IT industry. Incorporating DevOps practices and tools into the curriculum for future computer science teachers can help them stay relevant and better prepare their students for careers in the IT field.
3. Classification of DevOps Tools:
Digital.ai has attempted to track the growing number of DevOps tools and has proposed a classification of these tools based on their licensing and functions (The Periodic Table of DevOps Tools…, 20230). Here is an overview of the different categories of DevOps tools:
- Repository Management: Tools for managing code repositories.
- Database Management: Tools for managing databases.
- Configuration Management: Tools for managing configuration or preparing it.
- Building: Tools for building applications.
- Testing: Tools for automated testing.
- Continuous Integration (CI): Tools for continuous integration practices.
- Release Management: Tools for managing software releases.
- Logging: Tools for logging and monitoring.
- Business Intelligence (BI) or Monitoring: Tools that generate statistics and analytics based on business or system data.
- Cloud, Infrastructure as a Service (IaaS), or Platform as a Service (PaaS): Tools that provide shared infrastructure for deploying applications.
- Containerization: Tools that create isolated user space instances in the operating system.
- Collaboration: Tools that facilitate collaboration among all stakeholders in the software development lifecycle.
- Security: Tools focused on security aspects.
A wide range of DevOps tools with various functional capabilities serves the broad spectrum of IT processes in DevOps. There is no single DevOps tool that supports all DevOps functions. However, some categories of tools form the core of the DevOps process. Let's take a closer look at these tools.
Source Code Management tools, which manage code revisions, are among the most crucial tools in DevOps. These tools ensure that changes to the source code are tracked, versioned, and can be easily collaborated on by multiple team members. Examples of source code management tools include Git and Mercurial. These tools play a fundamental role in enabling collaboration and version control, which are essential aspects of modern software development practices. Computer science teachers should be aware of and proficient in these tools to effectively teach their students about contemporary software development methodologies.
Travis CI: Travis CI is a distributed web-based CI tool that builds projects hosted on GitHub and executes predefined build commands from a YAML file.
Jenkins: Jenkins is a server-based CI tool written in Java, running in a servlet container. Jenkins supports various types of source code management tools and can execute different types of build commands.
Containerization: Traditional CI tools build an application and then deploy it on the operating system. Containerization tools create multiple isolated user space instances within the operating system, allowing multiple programs to be deployed in the OS to reduce resource requirements. Docker is a popular containerization tool that automates the deployment of software applications within containers.
Kubernetes: Kubernetes is a popular container orchestration tool, open-sourced by Google, that organizes the deployment, scaling, and management of containerized applications across multiple hosts.
While many different DevOps tools serve a wide range of IT processes, they are not directly related to each other. Therefore, a configuration management tool is needed to configure and manage various system resources. Puppet is an open-source configuration management utility and is one of the leading Infrastructure as Code (IaC) solutions. Puppet has several product lines. These tools are fundamental to DevOps practices and are essential for managing source code, automating build processes, containerization, orchestration, and configuration management. Teachers preparing future IT professionals should consider including these tools in their curriculum to ensure that students are well-equipped with the skills required in the DevOps field.
DevOps products, such as Puppet Discovery, Puppet Enterprise, and Puppet Pipelines, automate the modern code deployment process. We have only discussed the fundamental categories of DevOps tools, but there are many more. Each DevOps pipeline uses tools according to its needs. Therefore, there are numerous opportunities for new DevOps tool providers. Compared to its popularity in the industry, DevOps is not particularly popular in academic circles. An analysis of research in major IT-themed catalogs such as IEEE Xplore, ACM, and Google Scholar shows that DevOps only began to be addressed in research starting from 2008. In contrast, Microsoft fully embraced Continuous Integration (CI) in the development process as early as 1998. It took academic circles more than ten years to consider the concepts of CI, Continuous Delivery (CD), and DevOps after they became popular.
After 2011, the number of published articles has been steadily increasing due to the growing relevance of DevOps in the field. DevOps is a very modern term, so publications on this topic will continue to grow each year. IT professionals and researchers in academic circles have varying levels of interest in DevOps, influenced by different research interests related to DevOps. For example, V. Garousi and M. Felderer compare industrial and academic publications on software testing, which is a subtopic of DevOps (Garousi & Felderer, 2017). Industrial and corporate research tends to focus more on real practices, while academic research leans toward theory. V. Garousi and M. Felderer also argue that IT professionals consider academic publications too formal and complex to understand and implement in software development practice. Researchers note that IT professionals do not find academic research practical or useful. As a result, collaboration between real-world practical projects and academic circles is relatively rare. Without collaboration with the industry, it is challenging to conduct academic research on DevOps since DevOps encompasses all software development processes performed by IT professionals, including programmers, system administrators, operational analysts, support analysts, and more. DevOps also covers all infrastructure management for development, testing, deployment, production, scaling, virtualization, and more. It is unlikely that academic circles can replicate or model complex industrial environments for research. Therefore, collaboration between the IT industry and researchers becomes crucial.
Technologies related to DevOps are beginning to appear in educational disciplines to prepare IT engineers. Researchers R. Hobeck, I. Weber, L. Bass, in their work (Hobeck et al., 2021), investigate the content of subjects taught at engineering faculties in universities. As researchers note, the material introduces students to technical concepts such as microservices architecture, deployment pipelines, or infrastructure as code. On the other hand, practical tasks are offered with standard software tools such as Docker, Kubernetes, Jenkins, or Logstash. Researchers identify that subjects cover topics such as infrastructure as code, configuration management, virtual machines, containers, networking, cloud, container management, infrastructure security, deployment pipelines, microservices architecture, service networks, post-production, disaster recovery, secure development, DevOps specifics for specific domains, and distributed system architecture.
DevOps also encompasses all infrastructure management for development, testing, deployment, production, scaling, virtualization, and more. It is unlikely that academic circles can replicate or model complex industrial environments for research. Therefore, collaboration between the IT industry and researchers becomes crucial. To demonstrate that a specific process or tool brings improvements, academic researchers require collaboration and real data from various deployment pipeline participants.
DevOps technologies are starting to appear consistently in educational disciplines for the preparation of IT engineers. Researchers R. Hobeck, I. Weber, L. Bass, in their work (Hobeck et al., 2021), investigate the content of subjects taught at engineering faculties in universities. As researchers note, the material introduces students to technical concepts such as microservices architecture, deployment pipelines, or infrastructure as code. On the other hand, practical tasks are offered with standard software tools such as Docker, Kubernetes, Jenkins, or Logstash. Researchers identify that subjects cover topics such as infrastructure as code, configuration management, virtual machines, containers, networking, cloud, container management, infrastructure security, deployment pipelines, microservices architecture, service networks, post-production, disaster recovery, secure development, DevOps specifics for specific domains, and distributed system architecture.
domains, and distributed system architecture. Let's consider the place of DevOps technologies in the school computer science curriculum. In the curriculum for the 10th-11th grades in computer science (hereinafter referred to as the Program), it is stated that it prepares students for participation in Olympiads, competitions, tournaments, scientific-practical conferences, research competitions of various levels, and other intellectual competitions (Educational program for the profile level..., 2011). Setting such a goal requires teachers to be prepared to use cutting-edge technologies in the field of computer science, including DevOps technologies. The practical skills that the Program aims to develop include: '... skills in analyzing known methods of algorithm construction and determining the most optimal ones for solving specific tasks; skills in testing complex algorithms; skills in working with programming environments; programming technique skills' (Educational Program for the Profile Level..., 2011). Programming and software development are not possible without testing, including automated testing, which is part of modern development environments. Therefore, the ability to use modern development environments and their functionality for automated testing is part of DevOps technologies. Let's examine the content of the Program material that contains elements of DevOps technologies. The topic 'Programming Language and Data Structures' has an activity component 'Creates and executes own test suites and prepared ones,' which involves automated testing of developed algorithms, and this is an element of DevOps technologies.
The topic 'Paradigms and Programming Technologies' is a central theme in the study and application of DevOps technologies in the school computer science course. The knowledge component of this topic involves studying software development methodologies. Modern software development methodologies are based on the complete cycle of implementing DevOps technologies. The activity component of expected learning outcomes for the topic 'Paradigms and Programming Technologies' involves students mastering the complete software development cycle, from design to deployment, which directly transforms the theoretical ideas of DevOps technologies into practical implementation at the school computer science course level.
The preparation of students under the educational-professional program 'Secondary Education (Computer Science)' at Berdiansk State Pedagogical University is carried out in accordance with a list of general and professional competencies.
4. Results of the research.
The study of DevOps technology elements by future computer science teachers should be based on the needs of stakeholders. Computer science teachers do not need to possess all technical and technological aspects of implementing and using DevOps technologies, but they should have a necessary level of professional competencies for future successful employment.
According to the conducted analysis, the following topics remain unaddressed in the educational-professional program "Secondary Education (Computer Science)"
- Infrastructure as Code
- Configuration management
- Containers
- Container orchestration
- Infrastructure security
- Deployment pipelines
- Microservices architecture
- Post-production
- Domain-specific DevOps features
During the research, a descriptive experiment was planned, prepared, and conducted. The research aimed to identify DevOps technologies that are worth teaching to future computer science teachers within the framework of the educational-professional program "Secondary Education (Computer Science)".
To organize and conduct the descriptive experiment, the following tasks were defined:
- Determine the circle of respondents who will participate in the survey within the descriptive experiment.
- Formulate a list of DevOps technologies that are not yet taught to future computer science teachers.
- Identify research methods and relevant criteria to be applied in the descriptive experiment.
- Develop a questionnaire for surveying respondents participating in the descriptive experiment.
- Conduct surveys of respondents and perform a statistical analysis of the obtained results.
Identify a list of recommended DevOps technologies that are worth teaching in the educational-professional program "Secondary Education (Computer Science)" at Berdiansk State Pedagogical University.
In accordance with the first task, the circle of respondents who would participate in the research was determined. For this purpose, stakeholders of the educational program were invited: teachers and heads of general secondary education institutions, teachers and heads of extracurricular educational institutions, representatives of teacher professional development institutions, representatives of regional education departments.
The research is dedicated to DevOps technologies in the education of future computer science teachers, so the respondents should be familiar with these technologies. In this case, the respondents will be able to correctly assess the need for teaching DevOps technologies to future computer science teachers.
We used a manual method of selecting respondents for the experimental group from the general population. For this purpose, an initial survey of stakeholders was conducted to determine their level of familiarity with DevOps technologies. As a result, a group of respondents familiar with DevOps technologies was formed. The research identified and analyzed a list of DevOps technologies that have not yet been studied by future computer science teachers. The study employed a descriptive experiment using surveys and specialized software tools for data collection. Data analysis was conducted using descriptive statistics methods and the R programming language. The research aimed to understand respondents' attitudes toward DevOps technologies in the high school computer science curriculum, their views on specific DevOps technology elements for future computer science teachers, and the possible inclusion of DevOps technologies in higher education programs. To achieve these goals, a questionnaire with six questions was developed to assess the relevance of introducing DevOps technology elements into the "Secondary Education (Computer Science)" program. The survey was conducted in the autumn of 2022, targeting various educational stakeholders, including teachers, lecturers, school administrators, and extracurricular education institutions.
The survey results indicated that respondents believed collaborative work with data/documents, the use of coding in multiple fields, and cloud computing were highly relevant for the future development of society. Additionally, the research found that future computer science teachers should be proficient in software development technologies, cloud technologies/services, and managing various operating systems. Finally, respondents expressed a strong interest in adding DevOps technologies to the educational-professional program "Secondary Education (Computer Science)," with Infrastructure as Code, Containers, and Container Orchestration being the most desired DevOps technologies for inclusion.
5. Discussion:
The results of the conducted descriptive research on the issue of introducing the study of DevOps technology into the educational-professional program "Secondary Education (Computer Science)" have revealed significant interest among stakeholders.
It is pertinent to consider the survey results comprehensively. For example, the majority of respondents believe that the main directions of development in the information society are the use of coding (programming) elements in various fields and collaborative work with data/documents. These responses correlate with the answers to questions about proficiency in modern information technologies. This is evident in the selection of responses such as "software development technologies," "cloud technologies/services," and "technologies for managing various operating systems." Cloud services mostly provide users with tools for collaborative work with data and documents, and programming elements are indirectly used, even when creating and editing spreadsheets. The obtained results confirm the stated approaches to designing the educational-professional program "Secondary Education (Computer Science)" as discussed in N. Pavlova's research (Pavlova, 2022).
Analyzing the responses to the question "The preparation of a computer science teacher should meet the following requirements" allows us to conclude that the majority of stakeholders believe that a modern computer science teacher should be prepared to work with specialized computer science programs. In other words, they should teach computer science at the highest, specialized level. This is supported by 95% of the respondents.
The final question aimed to gather the opinions of respondents regarding which DevOps technologies should be taught to future computer science teachers. As noted in the research by N. Morze, T. Nanayeva, and O. Pasichnyk (Morze, Nanayeva & Pasichnyk, 2022), the high school computer science curriculum is overly theoretical, and teaching future teachers DevOps technologies would enable the introduction of a practice-oriented content into the high school computer science curriculum. The technologies selected by the respondents can be easily integrated into the educational-professional program "Secondary Education (Computer Science)" and can be used by graduates to enhance the practical orientation of computer science classes in general secondary education institutions when implementing specialized programs.
6. Conclusion:
The modern computer science teacher in a general secondary education institution must be prepared for the challenges of today, the rapid evolution of information technologies, and their deep integration into all aspects of life. Content analysis of scientific research has revealed the issue of the absence of DevOps technology education in the preparation of future computer science teachers, while elements of this technology are increasingly permeating our lives. The conducted pedagogical exploratory experiment has confirmed the necessity of teaching DevOps technology to future computer science teachers as part of networking technology education. The experiment's results have allowed us to identify key topics in the field of DevOps (Infrastructure as Code, Containers, Container Orchestration) that are advisable to incorporate into the educational-professional program "Secondary Education (Computer Science)" for the training of future computer science teachers. Future research prospects include substantiating the improvement of the structure of educational components, taking into account the integration of the identified DevOps technologies into the educational-professional program "Secondary Education (Computer Science)," as well as studying the effectiveness of enhanced teaching content.
References
[1] Profile Level Educational Program for 10–11 Grades in Computer Science. (2011). Link Освітологічний дискурс, № 2(41), 2023 ISSN 2312-5829 (online)
[16] Лілія Павленко, Максим Павленко, Євген Павленко Educological discourse, 2023, Issue 2(41)
|
{"Source-Url": "https://ijrpr.com/uploads/V4ISSUE9/IJRPR17391.pdf", "len_cl100k_base": 6203, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 20756, "total-output-tokens": 7747, "length": "2e12", "weborganizer": {"__label__adult": 0.0007252693176269531, "__label__art_design": 0.0009007453918457032, "__label__crime_law": 0.0008039474487304688, "__label__education_jobs": 0.2122802734375, "__label__entertainment": 0.00017154216766357422, "__label__fashion_beauty": 0.0004699230194091797, "__label__finance_business": 0.0010986328125, "__label__food_dining": 0.0011768341064453125, "__label__games": 0.0009975433349609375, "__label__hardware": 0.0015668869018554688, "__label__health": 0.0018444061279296875, "__label__history": 0.0006380081176757812, "__label__home_hobbies": 0.0003571510314941406, "__label__industrial": 0.0009746551513671876, "__label__literature": 0.0007357597351074219, "__label__politics": 0.0009021759033203124, "__label__religion": 0.00115203857421875, "__label__science_tech": 0.0209808349609375, "__label__social_life": 0.00043487548828125, "__label__software": 0.01430511474609375, "__label__software_dev": 0.7353515625, "__label__sports_fitness": 0.000675201416015625, "__label__transportation": 0.0010213851928710938, "__label__travel": 0.0005869865417480469}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38217, 0.01736]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38217, 0.68965]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38217, 0.91748]], "google_gemma-3-12b-it_contains_pii": [[0, 5269, false], [5269, 10785, null], [10785, 16071, null], [16071, 23361, null], [23361, 27818, null], [27818, 34424, null], [34424, 38217, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5269, true], [5269, 10785, null], [10785, 16071, null], [16071, 23361, null], [23361, 27818, null], [27818, 34424, null], [34424, 38217, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38217, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38217, null]], "pdf_page_numbers": [[0, 5269, 1], [5269, 10785, 2], [10785, 16071, 3], [16071, 23361, 4], [23361, 27818, 5], [27818, 34424, 6], [34424, 38217, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38217, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
60db6732b67b9de3a1873a971c36f43d2e8f4efc
|
An Architectural Model for Customizing the Business Logic of SaaS Applications
André Correia¹, Jorge Renato Penha¹ and António Miguel Rosado da Cruz¹
¹Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Viana do Castelo,
Av. Do Atlântico s/n, 4900-348, Viana do Castelo, Portugal
[email protected], [email protected], [email protected]
Keywords: Multi-tenancy, Software as a Service, SaaS, Business Logic Configurability, Customizability, Extensibility.
Abstract: Traditional software applications are typically customized before being delivered to a client. This customization was a paid service delivered by software development organisations. With the growing demand of applications delivered with a SaaS model, software development organisations are increasingly responding with the migration of traditional applications to a multi-tenant SaaS deployment model. This makes them face themselves with the problem of customizing a shared application, with a shared database, for each tenant that subscribes their deployed service. After overviewing existing solutions for application customizability, this paper addresses the customization of the business logic layer of multi-tenant applications by proposing a solution, which has been used in a multi-tenant WMS application deployed with a SaaS service model.
1 INTRODUCTION
Traditional software development firms commonly develop software applications that are customized, either by themselves or by affiliated companies, before being deployed in their clients’ locations. Their business is about developing software as much as customizing that software to each specific client. This supports the fact that applications need to be flexible to a certain point that allows them to accommodate variability in the response to the customer’s requirements (Gebauer et al., 2006).
These software development firms are increasingly facing the challenge of having to adapt their applications for deploying them in the cloud with a software-as-a-service (SaaS) delivery model.
The SaaS model provides a multi-tenant, ready to run, on-demand hosted application. Multi-tenancy is, indeed, the primary characteristic of SaaS applications, as it allows the service provider to run a single instance application, which supports multiple tenants on the same platform. This involves sharing unique resources, as a database and an application instance, giving the tenants’ users the impression that they are the only ones using those resources. This implies addressing many issues, in order to assure the functional and non-functional isolation of the tenants (Krebs et al., 2012).
Other desirable feature of SaaS applications is that they retain the ability to be customizable. This ability shall not, by any means, threaten the imperative of tenants’ isolation.
There are several levels of application customizability, from simple configuration at allowed application points, to tenant specific code extensions at any point of the application, passing by simple extensions to the data model. Also, this customization ability may be the application provider’s responsibility or the tenant’s responsibility. Either way, a tenant’s customization may not interfere with other tenants’ application usage experience, even when the customization is the provider’s responsibility.
After a survey of existing solutions for application customizability, this paper proposes an approach for functionality customization per tenant, by recurring to specific code extensions that may be plugged into specific points in the application. The approach is being used in a warehouse management system (WMS) application that will be deployed with an SaaS service model. The structure of the presentation is as follows: the next section discusses the customizability of software applications, explains why traditional approaches are not suitable
for multi-tenant SaaS applications, and presents related work framing it in three architectural levels, namely the data, presentation and business logic layers; section 3 presents the general view of our approach for per-tenant customization at the business logic level; section 4 details the approach; section 5 discusses our approach and compares it with existing approaches; and, section 6 concludes the paper and proposes some future research directions.
2 APPLICATION CUSTOMIZATION
Traditional applications, meaning single-tenant applications deployed on premises (web-oriented or not), are typically customized by the application provider or deploying organization: their data model is extended or adapted to the customer’s reality, and/or their code is modified or extended to meet the customer’s business rules. These customization procedures can exploit the configurability capabilities of the application, which is the common approach in Software Product Line Engineering (SPLE), where customized product variants are derived from a feature model that captures the predictable variability of features (Clemens et al., 2001). Alternatively, they can be performed by changing the application’s source code, culminating in a new, different application variant tailored to the new customer/tenant, which has been a common approach for software houses that represent and resell software from major vendors while making customized deployments of that software.
Gebauer et al. (2006) identify two types of software application flexibility: flexibility-to-use, regarding the features provided at the time of deployment, and flexibility-to-change, regarding the features that constitute an option for later system change.
The relevant type of flexibility differs depending on whether the application is single-tenant or multi-tenant (see Figure 1). Customization of single-tenant, traditional applications is typically addressed before deploying the application at the customer/tenant’s location, so such applications require flexibility-to-change, that is, flexibility for changing features to adapt the application to a specific customer’s requirements, even if it must be shut down for a period of time. This is also the kind of flexibility addressed by SPLE. These tailoring procedures are not applicable to the customization of multi-tenant SaaS applications, which, on the other hand, do not require high flexibility-to-change but do require high flexibility-to-use, meaning that the deployed features must be easily changeable without affecting application usage (Ruehl et al., 2011).
We consider application customizations at three architectural levels:
- Data level customization;
- Presentation customization;
- Business logic customization.
### 2.1 Data level customization
At the data level, a customizable application typically enables the creation of new entity attributes for the existing entities or, less often, it may even enable the creation of new entity types.
Common data extension customization approaches are (Chong et al., 2006):
- Preallocated fields;
- Name-value pairs;
- Custom columns.
Preallocated fields are a fixed set of generic, spare columns created in advance in the data tables, which each customer/tenant may use (or leave empty) depending on its needs. The number of customizable extendable fields is therefore predetermined in each data table.
Name-value pairs allow the definition of an arbitrary number of extended fields. Typically, this is enabled by providing the application with a metadata table, defining the extended field (its name or label and its data type), and an extension table, defining the field value and associating it to a field in a primary data table (see Figure 2).
Custom columns are a data extension approach where columns are arbitrarily added to specific tables by making the software dynamically use data definition language (DDL) operations in the database.
Whichever method is chosen to extend the data model, it must be combined with the necessary code adaptation, either by directly modifying the source code, or by providing a mechanism for integrating the additional fields into the application’s functionality.
In multi-tenant SaaS single instance applications, the most suitable solution seems to be name-value pairs, because it neither limits the number of extra fields per table nor requires DDL operations in the shared multi-tenant single-instance database.
In a multi-tenant name-value pairs approach, the metadata table must be bound to the tenant Id (Chong et al., 2006), and the software code that uses it must take the tenant into account without interfering with other tenants.
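To make the tenant-bound name-value pairs idea more concrete, the following sketch shows one possible way to store and query per-tenant extended fields. It is a minimal illustration using Python's standard sqlite3 module; the table names, column names and sample data (Ext_Field_Meta, Ext_Field_Value, the tenant ids, and so on) are assumptions made for this sketch, not taken from the paper.

```python
import sqlite3

# In-memory database standing in for the shared multi-tenant database.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- Metadata table: defines an extended field (name and type) per tenant.
    CREATE TABLE Ext_Field_Meta (
        Id        INTEGER PRIMARY KEY,
        Tenant_Id INTEGER NOT NULL,
        Table_Nm  TEXT    NOT NULL,   -- primary data table being extended
        Field_Nm  TEXT    NOT NULL,
        Data_Type TEXT    NOT NULL
    );
    -- Extension table: holds the value and links it to a row of the primary table.
    CREATE TABLE Ext_Field_Value (
        Meta_Id   INTEGER NOT NULL REFERENCES Ext_Field_Meta(Id),
        Record_Id INTEGER NOT NULL,   -- id of the extended row in the primary table
        Value     TEXT
    );
""")

# Hypothetical example: tenant 42 defines an extra "colour" field for its Product records.
con.execute("INSERT INTO Ext_Field_Meta VALUES (1, 42, 'Product', 'colour', 'string')")
con.execute("INSERT INTO Ext_Field_Value VALUES (1, 1001, 'red')")

def extended_fields(tenant_id: int, table: str, record_id: int) -> dict:
    """Return the extended fields of one record, filtered by tenant."""
    rows = con.execute(
        """SELECT m.Field_Nm, v.Value
             FROM Ext_Field_Meta m JOIN Ext_Field_Value v ON v.Meta_Id = m.Id
            WHERE m.Tenant_Id = ? AND m.Table_Nm = ? AND v.Record_Id = ?""",
        (tenant_id, table, record_id))
    return dict(rows.fetchall())

print(extended_fields(42, "Product", 1001))   # {'colour': 'red'}
print(extended_fields(7, "Product", 1001))    # {} -- other tenants see nothing
```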
### 2.2 Presentation customization
Another common kind of application customization is at the presentation, or user interface, level. The customer naturally wants the application to be aligned with the company’s corporate image, and its country culture/localisation (language, currency and other cultural peculiarities).
In multi-tenant applications this must also be customizable for each tenant, without interfering with the other tenants’ application usage experience.
### 2.3 Business logic customization
Once the variability incorporated into the application’s features has been exhausted, customization at the business logic level requires that the business logic code, which is typically located in an application layer or in the database layer, be adapted to the customer.
In single tenant applications this is commonly accomplished by modifying the application’s source code in order to adapt it to the customer’s specific requirements, which could not be foreseen when designing the application flexibility that addresses the variability points.
However, in multi-tenant applications this is not a suitable solution. Modifying the source code is out of the question, because it would create a jumbled mix of different tenants’ business rules in the source code. Additionally, the (multi-tenant) system would need to be shut down every time a tenant wanted a piece of customized code.
One of the first successful SaaS applications to appear in the market was Salesforce’s CRM solution. Salesforce offers two business logic customization approaches: point-and-click configuration and code-based customization. The former enables fast and easy customizations by providing a series of simple point-and-click wizards with limited customization capability. The latter is useful for deeper customizations to meet more demanding tenants’ needs and is made possible through a native programming language, called Apex, with which tenants can customize complex business logic (Salesforce, 2013; Weissman and Bobrowski, 2009; Chen et al., 2010).
Other authors have proposed customization approaches for multi-tenant SaaS applications. For instance, Yaish et al. (2012) propose a conceptual architecture design using elastic extension tables and a number of database, user interface and access control services, for customizing the data layer, the user interface layer and access control, but it does not address business logic customization.

Xiuwei et al. (2012) propose a business rule engine-based framework for customizing the business logic layer of multi-tenant SaaS applications. Their approach separates the business rules, defined in decision tables, from the software source code, enabling its customization by the tenants within the variability scope pre-determined in the decision tables.
Chen et al. (2010) propose an approach to business logic SaaS applications’ customization based on domain engineering techniques and business rules templates. Like Xiuwei’s approach, it enables the customization of business rules within the variability scope pre-determined in the rules templates.
3 BUSINESS LOGIC CUSTOMIZATION OF SAAS APPLICATIONS – GENERAL VIEW
Our approach to customization at the business logic level aims to enable the SaaS provider organisation to supply, as a paid service, the customization of the SaaS application for specific tenants. Note that all the predictable variability in requirements shall be incorporated into the application, leaving to this approach only the unforeseen, deeper customization needs.
Figure 3 depicts the architecture of the proposed solution, consisting of a customizable multi-tenant application provided with a SaaS deployment model. The approach requires that customized web services are developed for a given tenant, and that the system is configured, for that tenant, by using a configuration tool, as further explained below.
The user accesses the application’s presentation layer, which calls the shared services. These are a set of multi-tenant enabled services that, in turn, access the single instance database.
For customizing the SaaS application business logic, customized services must be made available on a customized services server, or any other web server, and the SaaS application must be configured to plug those services into the desired extension points available in the application.
Let us analyse the proposed approach through an example. Consider that a given tenant wants to modify the default behaviour of an order-registering SaaS application so that, when a user inserts a new order, the total accumulated debt of the client is verified and, if that debt is above some threshold, the system rejects the new order.
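A rough sketch of what such a customized validation service could look like is shown below. It is written as a plain Python function rather than a deployed web service; the threshold value, the get_accumulated_debt helper and the FormValResult-like return shape are illustrative assumptions, not part of the proposed system.

```python
# Hypothetical customized validation service for the order-registering example.
# In the proposed architecture this logic would live in a custom web service
# plugged "before" (or "instead of") the shared order-insertion service.

DEBT_THRESHOLD = 10_000.0  # illustrative per-tenant business rule

def get_accumulated_debt(client_id: str) -> float:
    """Placeholder for a call to the CRUD shared services of the SaaS application."""
    debts = {"C-001": 12_500.0, "C-002": 800.0}
    return debts.get(client_id, 0.0)

def validate_new_order(form_fields: dict) -> dict:
    """Return a form-validation result: a validity flag plus a message."""
    debt = get_accumulated_debt(form_fields["client_id"])
    if debt > DEBT_THRESHOLD:
        return {"is_valid": False,
                "message": f"Order rejected: accumulated debt {debt:.2f} "
                           f"exceeds the threshold {DEBT_THRESHOLD:.2f}."}
    return {"is_valid": True, "message": ""}

print(validate_new_order({"client_id": "C-001", "amount": 300.0}))  # rejected
print(validate_new_order({"client_id": "C-002", "amount": 300.0}))  # accepted
```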
4 PROPOSED APPROACH
4.1 Detailed approach
Each tenant-specific customized service must be plugged into an application extension point.
Although predefined, these extension points allow a customized service to be plugged into almost any desired point in the application. This is made possible by providing extension points before, instead of, and after every shared service associated with application forms, including CRUD operations. To support this approach, a set of metadata tables has been established (see Figure 5).
Every pluggable component, provided by an external custom-service, must be registered in table Custom_Service, and may have one of three purposes, or types (property type in table Custom_Service):
- Validate a form field (type: Validation);
- Provide data to an external application (type: Export);
- Get data from an external application (type: Import).
Besides the service type, its URL is also required, just as its result (output parameter) type, and what tenant owns it. The currently allowed result types are:
- FormValResult. Form Validation Result, which is composed of a Boolean, stating whether the form is valid, and a String with a message in case of invalid form data;
- Boolean;
- Void, or no result expected. Void and Boolean may be used, for instance, in providing data to an external application.
- JSON String. A JSON formatted string that may be used when getting data from an external application (type: Import), to show information to the user. Currently, this has the sole effect of opening a dialog box showing the “imported” data.
Note that, regardless of its type, a custom-service may access the application database through the CRUD shared services. In this way, it can, for instance, import data into the SaaS application from an external source.
Table Extendable_Page registers the extendable pages of the application, that is, pages with extension points. Each extendable page of the application’s presentation layer may have an extension point, where a custom-service may be plugged in.
A page’s extension points are defined in table Extension_Point, which also links the extension point to the, possibly Null, custom-service to be called.
An extension point is located around the load and submit operations of an extendable page and defines the moment when the custom-service is triggered. The page controller, that is, its submit operation handler, handles all the possible operations provided by that page, which may involve the creation of new information (creating one or more records in the database) or the modification of existing information (updating one or more records in the database).
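To make the metadata model more tangible, the sketch below represents the three kinds of records just described as Python data classes and registers the debt-check service of the earlier example at an extension point. The exact attribute names, the service URL and the sample values are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomService:
    id: int
    tenant_id: int
    type: str          # "Validation", "Export" or "Import"
    url: str
    result_type: str   # "FormValResult", "Boolean", "Void" or "JSON String"

@dataclass
class ExtendablePage:
    id: int
    name: str

@dataclass
class ExtensionPoint:
    id: int
    page_id: int
    trigger: str                      # "before", "after" or "instead of"
    custom_service_id: Optional[int]  # None means nothing is plugged in

# Registration corresponding to the order-registering example: the tenant's
# debt-check service runs *before* the submit operation of the order page.
custom_services = [
    CustomService(1, tenant_id=42, type="Validation",
                  url="https://services.example.com/check-debt",  # hypothetical URL
                  result_type="FormValResult"),
]
pages = [ExtendablePage(10, "RegisterOrder")]
extension_points = [
    ExtensionPoint(100, page_id=10, trigger="before", custom_service_id=1),
    ExtensionPoint(101, page_id=10, trigger="after",  custom_service_id=None),
]
```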
Figure 5: Metadata tables for supporting business logic customization by tenant.
Figure 6: Customization (metadata creation) tool example.
The customization tool, exemplified in Figure 6, allows plugging the desired service into an extension point, linking the selected form fields to the web-service’s input parameters, setting the trigger to the appropriate value (before, after or instead of), and choosing the service type and the output parameter type.
Every extendable page controller has code that looks for custom-services plugged into it and associated with the tenant accessing the page. That is, each extendable page searches, in each of the possible triggering positions, for extension points attached to it that have a non-null ID_CustomService and that belong to the tenant accessing the page.
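The lookup performed by an extendable page controller might then resemble the sketch below, which assumes records shaped like those in the previous sketch. The REST call uses the common Python requests library and a made-up payload shape; the real WMS controllers are .NET/WCF-based, so this is only an outline of the lookup-and-invoke logic, not the actual implementation.

```python
import requests  # widely used third-party HTTP client, here only to sketch the REST call

def find_plugged_service(tenant_id, page_id, trigger, extension_points, custom_services):
    """Return the custom-service plugged at (page, trigger) for this tenant, or None."""
    for ep in extension_points:
        if ep.page_id == page_id and ep.trigger == trigger and ep.custom_service_id is not None:
            svc = next(s for s in custom_services if s.id == ep.custom_service_id)
            if svc.tenant_id == tenant_id:
                return svc
    return None

def handle_submit(tenant_id, page_id, form_fields, extension_points, custom_services):
    """Simplified submit handler: run a 'before' Validation service, then the shared service."""
    svc = find_plugged_service(tenant_id, page_id, "before", extension_points, custom_services)
    if svc is not None and svc.type == "Validation":
        # The custom-service is an ordinary REST endpoint; form fields travel as JSON.
        result = requests.post(svc.url, json=form_fields, timeout=10).json()
        if not result.get("is_valid", True):
            print(result.get("message", "Form rejected by custom validation."))
            return False
    # ... here the controller would call the shared (multi-tenant) CRUD service ...
    return True
```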
4.2 Validation
The proposed approach to the customization at the business logic level has been tested, and is being used in the development of a WMS application that will be deployed with a SaaS deployment model.
In the WMS application, the user accesses the application’s presentation layer through any browser with a Silverlight plugin. The presentation layer calls the WMS domain (shared) services, which are a set of Windows Communication Foundation Rich Internet Application services (WCF RIA services, see for instance http://msdn.microsoft.com/en-us/library/ee707344(v=vs.91).aspx) exposed as SOAP/WSDL. These, in turn, access the WMS database.
For customizing the WMS application business logic, custom REST web-services must be made available on another web server, and the WMS application must be configured to plug those services into the desired extension points available in the application.
After deployment, we will further assess the utility and usability of this approach with real customers/tenants and real users in an industrial setting.
5 DISCUSSION
The proposed approach enables the tenant-based customization of SaaS applications’ business logic. It addresses customizations that could not have been foreseen in a domain engineering analysis and could not be implemented as feature variability points as advocated by SPLE (Clemens et al., 2001).
The proposed approach relies on technology that is common knowledge among developers, since the custom-services may be developed in any programming language and deployed on any web server. The only limitation, in the experiments performed and in the WMS application being developed, is that the custom services must communicate through REST and that objects must be passed to and from the services in JSON format, because this is what the SaaS application expects.
Table 1: Surveyed approaches to SaaS applications’ business logic customization.
<table>
<thead>
<tr>
<th>Approach</th>
<th>Pre-determined variability scope</th>
<th>Full business logic customization</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td>SPLE</td>
<td>✔</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Salesforce</td>
<td>✔</td>
<td>✔</td>
<td></td>
</tr>
<tr>
<td>Yaish et al. (2012)</td>
<td>✔</td>
<td></td>
<td>Addresses only data and presentation layers’ customizability</td>
</tr>
<tr>
<td>Xiuwei et al. (2012)</td>
<td>✔</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Chen et al. (2010)</td>
<td>✔</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Our Approach</td>
<td>Addressed through customization of variable features as recommended by SPLE (not in the scope of this paper)</td>
<td>✔</td>
<td></td>
</tr>
</tbody>
</table>
Table 1 compares our approach with the state-of-the-art approaches referenced in section 2.3, classifying them according to two main aspects: approaches that only address a pre-determined variability scope, and approaches that enable full business logic customization. In the first category, we find the feature variability modelling and product variants of SPLE (Clemens et al., 2001), the point-and-click customization feature of Salesforce (Salesforce, 2013), and the approaches by Xiuwei et al. (2012) and Chen et al. (2010).
The approach by Yaish et al. (2012) only addresses data layer and presentation layer customizability. It does not address business logic customization to any degree.
As said before, our approach addresses customizations that could not have been foreseen in a domain engineering analysis and so lie outside the limits of a metadata framework. It assumes that pre-determined feature variability is handled through SPLE or another appropriate approach; its focus, however, is on deep, unforeseen customizations. In this respect it is only comparable to Salesforce’s code customization feature, and the solution is the same, namely making use of open-ended development environments for the most common programming languages to create
the needed functionality. In addition, Salesforce also allows creating new functionality using its own proprietary language, Apex.
6 CONCLUSIONS AND FUTURE WORK
Multi-tenant SaaS applications’ customization is hard to address because of the requirement for high flexibility-to-use, meaning that the application’s deployed features must be easily changeable by one tenant, without affecting the application usage of other tenants.
This paper presented an approach for the tenant-based customization of SaaS applications, at the business logic architectural layer of the application.
The proposed approach reserves business logic customization to the SaaS provider organisation, which may then supply it as a paid service to the tenants that need it. The proposed architecture allows, however, the business logic customization responsibility to be given to the tenant administrator, provided he/she can develop the needed custom-services.
The approach has been tested and is being applied in a multi-tenant SaaS application.
An issue that needs mitigation is the amount of overhead code needed, in each extendable page, to check whether there is any custom-service to be called.
Other future directions include passing the customization responsibility to the tenant.
REFERENCES
---
Adopting of Agile methods in Software Development Organizations: Systematic Mapping
Samia Abdalhamid ¹, Alok Mishra ²
¹Department of Modeling & Design of Engineering Systems, Attilim University, Ankara, Turkey
²Department of Software Engineering, Attilim University, Ankara, Turkey
Abstract- Adoption of agile methods in software development organizations is considered a powerful way to deal with a quickly changing and constantly evolving business environment and with well-informed customers whose expectations keep rising, such as shorter delivery times and a high level of responsiveness and service. This study investigates the adoption of agile approaches in software development organizations through a systematic mapping. Six research questions are identified, and a number of research papers retrieved from electronic databases are reviewed to answer them. In total, 25 research papers are examined and answers to all research questions are provided.
Keywords - Agile; software development organization; methods; systematic mapping study; adoption.
1. Introduction:
Agile methodologies emerged in the mid-1990s as an alternative to traditional methods, mainly because of the limitations imposed by the strict plan-driven, task-based character of traditional models such as Waterfall and its variants [1] [2]. All agile approaches share the same basic values of agility and flexibility [3], and the most popular agile methods are: Extreme Programming (XP), Scrum, the Crystal family, Adaptive Software Development (ASD), Feature Driven Development (FDD), the Dynamic System Development Method (DSDM) and Agile Modeling [4] [5] [2] [6]. In addition, agile methods differ from traditional techniques by being iterative rather than phased: agile software development is carried out in short (2-4 week) cycles, each incrementing the software with minimal planning [3] [7] [8]. Agile techniques produce deliverables after every iteration, growing the product in small subsets of the planned features. In this way they encourage interaction, trust, and mutual understanding between on-site customers and developers [9].
Agile methodologies have become customary in software development organizations around the globe, both small and large, because agile can help solve problems related to time-to-market and inadequate initial requirements [10]. Initially, the techniques were intended for small, collocated projects [11]. Nowadays, an increasing number of software companies adopt agile methods in order to improve their current development processes [12] [13]. This means they repeatedly start over by embracing new software practices rather than enhancing their current procedures [14].
The number of companies attempting to adopt agile methodologies for software development has risen since the Agile Manifesto was released in 2001. Nevertheless, many of them have not achieved their adoption objectives, which include fast, high-quality software deliveries, customer satisfaction, and the ability of software products to accommodate requirement changes during project development. The outcomes are often limited to the adoption of a few practices, so the value of
the developed software products was low, as was the people’s potential for delivering them [15].
Adopting agile methods is, nevertheless, a very complex process that involves changes to the software development process as well as to the organizational culture and to the social patterns and behaviour of the stakeholders involved [13] [16] [17]. In fact, agile approaches were at first intended for use in small, single-team projects [18]. Nonetheless, their potential benefits have made them tempting outside this setting as well, particularly for bigger projects and in large organizations.
To find out how an organization’s culture can impact the process of adopting agile methods, what the benefits of adopting agile are, and what the results of adopting agile in large organizations have been, among other questions related to agile adoption listed below, a systematic study was carried out to guide our research. This systematic review tries to assess, synthesize, and present the existing findings.
The outline of this paper is as follows: Section 2 presents related research; Section 3 describes the systematic study; Section 4 reports the results of the study; Section 5 discusses them; Section 6 addresses threats to validity; and Section 7 presents the conclusion, limitations and future work.
**Research Questions:**
**RQ1. Why are organizations motivated to adopt agile methods in software development?**
**RQ2. Is the adoption of agile methods beneficial for the organization?**
**RQ3. What are the challenges in agile methods adoption in software organizations?**
**RQ4. Are there guidelines for agile methods adoption in organizations?**
**RQ5. How can large software development organizations scale agile methods for complex software projects?**
**RQ6. How does the organization’s culture impact agile methods adoption?**
## 2. Background
Agile software development (ASD) techniques are frequently presented as a contrast to the traditional, plan-driven approach to software development [19], and the claimed benefits are numerous. ASD techniques are said to raise software quality [20], enhance communication [21] and coordination [22], and increase productivity [23]. The Agile Manifesto [24], created in 2001, records a set of values on which ASD depends. Alongside these values, there is also a list of principles. Principles are “domain-specific guidelines for life” [25], indicating how the values can be applied in various areas. Thirdly, there are practices, which are much more specific.
Some research has examined the rate at which companies adopted agile methodology shortly after it was introduced. Two studies conducted in 2005 gave information about the adoption rate of agile methods. The first survey was administered by MethodsAndTools.com [26] and indicated that about 40% of the 232 respondents’ organizations had used agile approaches and around 20% were assessing them in pilot projects.
The second survey, conducted by Schwaber and Fichera for Forrester Research, reported that around 14% of North American and European companies were using agile techniques and another 19% intended to use them in the near future [27]. This survey also suggested that while the early adopters were typically smaller firms making high-tech products, the later adopters tended to be information technology groups inside large companies.
Meanwhile, although agile methodologies are productive in some circumstances, large and complicated software products often require systematic preparation and additional process to ensure success. Agile development is a comparatively informal process built from many small tasks intended to ensure good delivery results [28]. The suitability of agile methods for large organizations is often considered challenging [29] [30]. In large-scale projects, the issue arises because the complexity of the application domain is frequently beyond the experience or skill of some customers and engineers. There is a clear need for improved customer engagement in large-scale complex projects, and it is a fundamental key to the success of XP projects [31]. At present, organizations are gradually deploying agile methods in their product development projects.
Organizations nowadays are challenged by a fast-changing, continuously evolving business environment and by well-informed customers with constantly rising expectations, such as shorter delivery times and a high level of responsiveness and service [32][33]. Agile has produced a growing number of success stories in the software development field and for this reason has been widely adopted by various organizations because of
its advantages. As several studies report, using agile methodologies has many advantages, which makes agile methods a first option for development in any type of project [34][35].
3. Research Methodology
A systematic mapping study (SMS) is a research technique concerned with investigating the literature in a specific field of interest and surveying it to identify gaps that require further investigation [36]. For this reason, it was used to answer six research questions concerning the adoption of agile methods in software development organizations.
As Figure 1 shows, there were three main steps in the research method applied: identifying the research questions, the search strategy, and the study selection process.
2. Search strategy: Keywords and their synonyms were identified to search for relevant documents in the electronic databases: “agile”, “adopting”, “software development organizations”. The logical operator AND was used to combine the basic terms. The final search string obtained is:
(“adopting agile” AND “software development organizations”).
Five electronic databases (DB) were used in the mapping study: ACM Digital Library, IEEE Xplore, Springer, Google Scholar and the Web of Science, as shown in Table 1. To guarantee the quality of the outcomes, the identified literature was reviewed in pairs.
Table 1. Selected databases
<table>
<thead>
<tr>
<th>Source</th>
<th>Location</th>
</tr>
</thead>
<tbody>
<tr>
<td>IEEE Explore</td>
<td><a href="http://ieeexplore.ieee.org">http://ieeexplore.ieee.org</a></td>
</tr>
<tr>
<td>ACM Digital Library</td>
<td><a href="http://portal.acm.org">http://portal.acm.org</a></td>
</tr>
<tr>
<td>Springer</td>
<td><a href="https://link.springer.com">https://link.springer.com</a></td>
</tr>
<tr>
<td>Google Scholar</td>
<td><a href="https://scholar.google.com">https://scholar.google.com</a></td>
</tr>
<tr>
<td>Web of Science</td>
<td><a href="http://apps.webofknowledge.com">http://apps.webofknowledge.com</a></td>
</tr>
</tbody>
</table>
3. Study selection process: Several searches of the electronic databases were conducted using the search string. Initially, 190 preliminary studies on adopting agile in software development organizations were found. The selection criteria were applied by reading the title and abstract of these papers, and the number of papers was reduced to 25, as shown in Table 2.
The selection criteria focused on papers about adopting agile in software development organizations. Papers were excluded according to the following criteria: studies not related to the topic, papers not written in English, and studies not available in full text.
Table 2. Articles obtained and included per database
<table>
<thead>
<tr>
<th>Database</th>
<th>Obtained</th>
<th>Included</th>
</tr>
</thead>
<tbody>
<tr>
<td>IEEE explore</td>
<td>50</td>
<td>6</td>
</tr>
<tr>
<td>ACM digital</td>
<td>30</td>
<td>2</td>
</tr>
<tr>
<td>Springer link</td>
<td>20</td>
<td>4</td>
</tr>
<tr>
<td>Google Scholar</td>
<td>60</td>
<td>6</td>
</tr>
<tr>
<td>Web of Science</td>
<td>30</td>
<td>7</td>
</tr>
</tbody>
</table>
4. Results:
RQ1. Why are organizations motivated to adopt agile methods in software development?
Organizations are motivated to adopt agile methods in software development for many reasons. According to Shen et al. (2012), agile software development was the answer to how software development companies could become better organized in order to deliver quicker, better and less expensive solutions in the face of large market demand [37]. For example, IT companies try to improve the effectiveness and overall quality of their product development effort by adopting agile software development practices [38]. Another reason for adopting agile methods is that they provide fast, high-quality deliveries of software products that better fulfil clients’ needs, together with the adaptability to manage scope changes throughout the project [40]. Some researchers state that the number of software organizations adopting agile has increased because agile methods help them to improve their existing software processes [12][13]. Moreover, requirements are fundamental to the success of software projects. Producing requirements is not easy, as the hardest phase of building a software system is deciding what the system ought to do, and requirements errors are costly to fix in the later stages of the product development life cycle. To avoid such problems, agile methodologies are adopted in different stages of software development cycles [39]. On the other hand, according to Silva and Goldman (2014), the traditional culture of organizations can be viewed as their fundamental motivation to move to agile [40].
RQ2. Is the adoption of agile methods beneficial for the organization?
Yes: the adoption of agile methods has been found beneficial for organizations in different respects, such as enhancing customer value, increasing quality and developing organizational confidence [41][42]; adopting agile methods has also been shown to provide opportunities to improve product quality. In addition, the use of agile methods has a positive effect on product development efficiency and effectiveness [43], and according to Lagerberg et al. (2013) the implementation of agile principles and practices increases project visibility and coordination effectiveness, decreases the need for other sorts of coordination mechanisms, and raises productivity [44].
In general, adopting agile methods in software development organizations has brought many benefits, for example the ability to deal with requirement changes, productivity gains, and business alignment; it also helps companies handle schedule, cost, and workforce turnover efficiently [45] [46]. According to Korhonen (2013), the adoption of agile practices also has beneficial effects in large organizations [47].
RQ3. What are the challenges in agile methods adoption in software organizations?
Many challenges can be faced when adopting agile methods, and they require a fundamental organizational change to make the transition successful; they include the technical aspects, the complexity of product development, and the social aspects of software development [48]. Another challenge that should be taken into account when adopting agile is the management of agile methodologies [49], because mismanagement can lead to schedule delays, increased costs and loss of productivity [43]. Sustainability, measuring agile value, and understanding cultural change are also very complex, which makes them challenges for organizations adopting agile methods [50].
RQ4. Are there guidelines for agile methods adoption in organizations?
There are some guidelines that can be used for agile methods adoption in organizations. Nikitina and Mattsson (2011) recommended a procedure model of software method adoption and a list of situational factors for managing the deployment of software development methodologies. The model is called Software Method Adoption (SMA). It comprises a list of methodology adoption activities organized in phases and a set of situational factors that should be considered while changing software processes. The SMA model is described on two levels: the phase level and the activity level [52].
The deployment phase may be repeated or implemented only once.
Figure 2. General representation of the SMA procedure model. Adopted from [50].
Meanwhile, Pikkarainen et al. (2011) presented a framework, called the agile deployment framework, that can be used to support the systematic selection and deployment of new agile practices and to adjust them to the organizational context. This framework comprises the processes and methods required for choosing appropriate new agile practices in a company [18]. In addition, according to Pikkarainen (2011), identified barriers, strengths, and suggestions can be used as a checklist for planning as well as checking the effectiveness of deploying agile methods in software organizations [52].
RQ5. How can large software development organizations scale agile methods for complex software projects?
Recently, various frameworks for scaling agile have been created by consultants, including the Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS) [53]. According to the State of Agile Survey, the Scaled Agile Framework (SAFe) appears to be the most popular framework for scaling agile [54]. Paasivaara (2017) used the Scaled Agile Framework to scale agile methods for complex software projects [54]. SAFe claims to offer a recipe for agile adoption at company scale [55]. It includes the team, program, and portfolio levels, in addition to an optional value stream level [56].
RQ6. How does the organization’s culture impact agile methods adoption?
Advocates of agile methodologies believe that an organization’s culture has an impact on the degree to which an agile methodology is used [57]. It is also thought to be a factor affecting the successful adoption of agile [58] [59]. Much research has been conducted in an attempt to find out the impact of organizational culture on agile methods adoption.
Strode et al. (2009) conducted a study to investigate the relationship between organizational culture and agile method usage. They found a number of organizational culture factors related to the use of agile methods, such as “the organization values feedback and learning, and social interaction in the organization is trustful, collaborative, and competent”. The more pronounced these factors were in the companies studied, the higher their level of agile method usage [57].
Organizational culture also has an impact on the work in the organization, affecting routines, delivery of the work, and productivity [60]. It likewise affects staff members’ routines, hierarchy, relationships and collaboration, and it emerges when a set of assumptions is established by a team, becoming consolidated and repeated in the daily work as the “right way” of carrying out the work [61].
5. Discussion
In this study, five electronic databases were used in the search process: IEEE Xplore, ACM Digital Library, Web of Science, Google Scholar and SpringerLink; the topic of the search was the adoption of agile methods in software development organizations. Approximately 190 studies on the topic were found, but only 25 of them were related to the research subject. A number of research questions were prepared to guide the search.
Research Questions:
RQ1. Why are organizations motivated to adopt agile methods in software development?
The software organizations’ motivation for adopting agile methods can be divided into three reasons:
Dealing with the large market demand to deliver quicker, better and less expensive solutions [37].
Ensuring fast, high-quality deliveries of software products that better fulfil clients’ needs, with the adaptability to manage scope changes throughout the project [40].
The traditional culture of the organizations, which can be viewed as their fundamental motivation to move to agile [40].
**RQ2. Is the adoption of agile methods beneficial for the organization?**
Yes, the adoption of agile methods can be beneficial for the organization in different respects, such as enhancing customer value, increasing quality and developing organizational confidence [41][42].
The use of agile methods has a positive effect on product development efficiency and effectiveness [43]; it raises project visibility and coordination effectiveness, decreases the need for other sorts of coordination mechanisms, and raises productivity [44]. Thus, adopting agile methods in software development organizations has brought many benefits, for example the ability to deal with requirement changes, productivity gains, and business alignment [44]. According to Korhonen (2013), the adoption of agile practices also has beneficial effects in large organizations [47].
**RQ3. What are the challenges in agile methods adoption in software organizations?**
Many challenges can be faced when adopting agile methods, and they require a fundamental organizational change to make the transition successful. They include:
the technical aspects, the complexity of product development and the social aspects of software development [48]; the management of agile methodologies [49]; and sustainability, measuring agile value, and understanding cultural change, which are very complex [50].
**RQ4. Are there guidelines for agile methods adoption in organizations?**
There are a few guidelines that can be used for agile methods adoption in organizations. Nikitina and Mattsson (2011) recommended a procedure model of software method adoption and a list of situational factors for managing the deployment of software development methodologies; the model is called Software Method Adoption (SMA) [51].
There is also a framework, called the agile deployment framework, that can be used to support the systematic selection and deployment of new agile practices and to adjust them to the organizational context [18].
**RQ5. How can large software development organizations scale agile methods for complex software projects?**
There are several frameworks that can be used to scale agile, such as the Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS) [54], but the most popular framework for scaling agile is the Scaled Agile Framework (SAFe) [54].
**RQ6. How does the organization’s culture impact agile methods adoption?**
Much research has been conducted to find out the impact of organizational culture on agile methods adoption. The results of these studies can be summarized as follows:
The more pronounced the cultural factors in the companies studied, the higher their level of agile method usage [57]. Organizational culture has an impact on the work in the organization, affecting routines, delivery of the work, and productivity [60]. It also affects staff members’ routines, hierarchy, relationships and collaboration, and it emerges when a set of assumptions is established by a team, becoming consolidated and repeated in the daily work as the “right way” of carrying out the work [61].
6. **Threats to validity**
The validity issues lie mainly in the paper selection process, in particular the possibility of missing relevant studies. To ensure the completeness of our paper repository, the best-known scholarly search engines, including IEEE Xplore and the ACM Digital Library, were selected. Moreover, various combinations of the topic of interest and its synonyms related to agile methods in software development organizations were used.
7. **Conclusion, Limitations, and Future Work**
Conclusions: Adopting agile methods is beneficial for software organizations in different respects, such as providing fast, high-quality deliveries of software. Software organizations are motivated to adopt agile methods because they are a proper answer to some of the problems they face, such as the market demand to deliver swift, better and less expensive solutions. In addition, the process of adopting agile methods may not be easy, so there are some guidelines that can be used during adoption, for instance the agile deployment framework.
An organization’s culture has an impact on agile methods adoption because it is a significant factor affecting the successful adoption of agile. Large software development organizations can scale agile methods for complex software projects by using various frameworks such as SAFe, the most popular framework for scaling agile. As future work, further research can be done to discover whether software organizations prefer to adopt agile methods completely or to combine them with traditional methods, and to identify the reasons behind their selection.
The main limitation of this study is bias in the selection of publications: keywords and search terms were chosen to allow us to identify the relevant studies, but software engineering keywords are not standardized and can be both discipline- and language-specific. Thus, because of our selection of keywords and search strings, there is a risk that relevant studies were overlooked. There is also a possibility of missing relevant studies indexed in other databases, because only five electronic databases were used in this study.
References:
---
Applying Model Transformation and Event-B for Specifying an Industrial DSL
Ulyana Tikhonova, Maarten Manders, Mark van den Brand, Suzana Andova, and Tom Verhoeff
Technische Universiteit Eindhoven, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
{u.tikhonova,m.w.manders,m.g.j.v.d.brand,s.andova,t.verhoeff}@tue.nl
Abstract. In this paper we describe our experience in applying the Event-B formalism for specifying the dynamic semantics of a real-life industrial DSL. The main objective of this work is to enable the industrial use of the broad spectrum of specification analysis tools that support Event-B. To leverage the usage of Event-B and its analysis techniques we developed model transformations, that allowed for automatic generation of Event-B specifications of the DSL programs. The model transformations implement a modular approach for specifying the semantics of the DSL and, therefore, improve scalability of the specifications and the reuse of their verification.
Key words: domain specific language, Event-B, model transformations, verification and validation, reuse, scalability
1 Introduction
Domain-Specific Languages (DSLs) are a central concept of Model Driven Engineering (MDE). A DSL provides domain notions and notation for defining models. It implements the semantic mapping of the models by means of model transformations. A DSL bridges the gap between the domain level and an execution platform. From a semantics point of view this gap can be quite wide, i.e. the DSL implementation usually includes rather complicated design solutions and algorithms. To manage the complexity of the industrial DSL, considered in this paper, we provide an explicit definition of its semantics by means of a formal method. This allows for formal specification of the DSL semantics and for assessing correctness of the specified semantic mapping via verification and validation.
In this paper we discuss the use cases of verification and validation applied to a DSL specification in an industrial context. We identify two different roles, that use different types of analysis of the DSL specification. A DSL developer is interested in validating and checking consistency of the DSL design and implementation. A DSL user is interested in getting better understanding of the DSL semantics, for example via simulation of its specifications. Correspondingly, in...
the context of MDE a formal specification of a DSL can be given on two abstraction levels: the DSL metamodel level and the DSL model level.
There exist quite a number of formalisms for specifying behavior and tools for analyzing these specifications using different verification and validation techniques. In this research we use the Event-B formalism [2] and the Rodin platform [3], as they allow the implementation of all use cases listed above. By using Event-B and Rodin we can (1) prove consistency of the DSL semantics specifications with (automatic and interactive) provers, (2) find deadlocks and termination problems using model checkers, (3) use animators to validate the specified semantics with the help of domain experts, and (4) provide graphical visualization of the specification to help DSL users understand how their programs run. All these tools are available in Rodin for Event-B.
In this paper, we show how Event-B and Rodin can be adopted in practice and applied to the industrial use cases, through model transformations from the DSL to Event-B. Our model transformations can automatically generate an Event-B specification for each concrete DSL program. For this, we apply the techniques of composition and instantiation of Event-B specifications. Composition of Event-B specifications simplifies the creation, maintenance and verification of larger specifications, because one can handle the smaller components separately. Instantiation is a way to concretize a generic Event-B specification, defined on the DSL metamodel level, to the model level of a concrete DSL program. The instantiation technique allows for the reuse of verification results from one level to the other. As a result of applying these techniques, our model transformations improve usability of Event-B and Rodin.
In the rest of the paper, Section 2 gives the overview of the industrial DSL and defines roles and use cases; Section 3 introduces the Event-B formalism; Sections 4.1 and 4.2 describe the decomposition and instantiation techniques; Section 4.3 outlines the implementation of our approach and results of its application. Related work is discussed in Section 5. Conclusions and directions for future work are given in Section 6.
2 Case Study and Use Cases
Our case study was performed at ASML, a producer of complex lithography machines for the semiconductor industry. In our case study we specified the dynamic semantics of the LACE DSL. LACE (Logical Action Component Environment) is one of the DSLs, developed by and used within ASML for controlling lithography machines. A lithography machine consists of many physical subsystems (such as actuators, projectors, and sensors), which operate simultaneously in order to perform the required functions of the machine. LACE allows for specifying how subsystems operate in collaboration with each other by means of so-called logical actions. An example of a logical action is shown in Figure 1.
LACE has a graphical notation based on UML activity diagrams. A logical action consists of subsystem actions (rounded rectangles in Figure 1), each of
which belongs to a subsystem, that executes this subsystem action (vertical column, containing the rounded rectangle). Subsystem actions, combined together into a so-called scan (dashed rounded rectangle), are executed synchronously. Thus, in Figure 1 the Sensor subsystem starts and stops the GrabAFrame action at the same moments, as the Laser subsystem starts and stops the ProduceLight action. Subsystem actions within a logical action can be executed sequentially or concurrently (thick arrows, fork and join nodes). For example, the subsystem actions AdjustFramePosition and PositionObject are independent actions and can be executed in any order or in parallel, but the GrabAFrame action can be performed only after both these actions are finished. Finally, subsystem actions may require and produce data. The dataflow in a logical action is depicted by means of thin arrows, input and output pins. For example, in Figure 1 the GrabAFrame action produces data, which is saved in the snapshot output parameter.
The high-level description of the machine subsystems’ behavior, given in logical actions, is translated into the invocations of hardware drivers and a synchronization driver in such a way that the resulting execution matches the behavior specified in the logical actions. The semantic gap between LACE and driver functions is wide, and thus the translation is hard to develop, maintain, understand, and use. We construct a formal specification of LACE and apply different kinds of analysis to it, in order to enhance understandability, maintainability and usage of the DSL translation. We identify the following roles and use cases of specification of a DSL and its analysis in the industrial context (Figure 2).
A DSL developer designs and develops the DSL by constructing its metamodel and semantic translations. Formal specification of this DSL implementation allows for verification, such as checking that it is consistent, non-contradictory, feasible and complete. A DSL user specifies DSL programs as instances of the DSL metamodel. In the context of formal methods the instantiation of the metamodel by a DSL program needs to be verified separately. Formal specification of the DSL programs allows for the execution of specifications, and thus for model checking and simulation. The construction of the LACE specifications and the implementation of the listed use cases are described in Section 4.
Fig. 2: Roles and use cases of the DSL specification and its analysis
3 Event-B
Event-B is an evolution of the B method, both introduced by Abrial [2]. Event-B employs set theory and first-order logic for specifying software and/or hardware behavior. A big advantage of Event-B is its tool support, offered by the Rodin platform [3]. Using Rodin and its plug-ins, one can create and edit Event-B specifications, verify them using automatic or interactive provers, animate and model check Event-B specifications.
An Event-B specification consists of contexts and machines. A context describes the static part of a system: sets, constants and axioms. A machine uses (sees) the context to specify behavior of a system via a state-based formalism. Variables of the machine define the state space. Events, which change values of these variables, define transitions between the states. An event consists of guards and actions, and can have parameters. An event can occur only when its guards are true, and as a result of the event its actions are executed. The properties of the system are specified as invariants, which should hold for all reachable states. The properties are verified via proving automatically generated proof obligations and/or via model checking.
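To make the state-based reading concrete, the following small Python sketch (not Event-B notation, and not taken from the paper) models a machine as guarded events over shared variables, with an invariant checked after every transition.

```python
# Hypothetical illustration of the Event-B execution model: a machine is a set of
# guarded events over shared variables, and invariants must hold in every reachable state.
class Machine:
    def __init__(self, state, invariants, events):
        self.state = state            # dict of variables, e.g. {"x": 0}
        self.invariants = invariants  # list of predicates over the state
        self.events = events          # name -> (guard, action)

    def enabled(self):
        return [n for n, (guard, _) in self.events.items() if guard(self.state)]

    def fire(self, name):
        guard, action = self.events[name]
        if not guard(self.state):
            raise RuntimeError(f"event {name} is not enabled")
        action(self.state)
        for inv in self.invariants:
            assert inv(self.state), "invariant violated"

# A toy counter machine: the event 'inc' may occur only while x < 10.
m = Machine(
    state={"x": 0},
    invariants=[lambda s: 0 <= s["x"] <= 10],
    events={"inc": (lambda s: s["x"] < 10, lambda s: s.update(x=s["x"] + 1))},
)
m.fire("inc")  # state becomes {"x": 1}; the invariant still holds
```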
The attractive simplicity of Event-B is enhanced by techniques such as shared event composition and generic instantiation, which support scalability and reuse of Event-B specifications [4]. We discuss these techniques in detail in Section 4.
4 Model Transformations from LACE to Event-B
The Rodin platform allows for the implementation of the use cases described in Figure 2, provided that the corresponding Event-B specifications of LACE are given. This poses the following problems. First, the semantics of LACE is complex, therefore capturing it within Event-B machines is challenging and results in a big specification, which is hard to understand, maintain and verify. To tackle this problem we apply two types of composition of Event-B specifications: composition of semantic features and composition of machines (Section 4.2). The
LACE specification is composed using model transformations. Second, while a specification of LACE on the metamodel level can be created and analyzed once, specifications of the LACE programs need to be constructed and analyzed many times by DSL users. We cannot expect DSL users to create Event-B specifications of their LACE programs and to verify them themselves. Therefore, we apply model transformations from LACE to Event-B to automatically generate specifications of LACE programs and we use the generic instantiation technique to verify their conformance to the LACE specification, given on the metamodel level (Section 4.1). Moreover, we enhance simulation of Event-B specifications of LACE programs by providing a user-friendly visualization.
4.1 Instantiation of Event-B specification
Generic instantiation is a technique, proposed by Abrial and Hallerstede to reuse an existing Event-B specification by refining the data structures, specified in its constants and variables, in a new copy of this specification [4]. We apply generic instantiation as depicted in Figure 3 (on the left). The concepts of conceptual machine and of composite machine are introduced in Subsection 4.2.

The metamodel context captures the structure specified in the LACE metamodel. A conceptual machine uses this context to specify the dynamic semantics of LACE in terms of the metamodel. Based on the structural properties, specified in the axioms of the metamodel context, the conceptual machines are proved to be consistent and complete by discharging the corresponding proof obligations using the Rodin provers. Thus, the semantics is verified on the metamodel level. The metamodel context and the conceptual machines for a specific DSL are constructed manually and only once.
In the model context, values are assigned to the sets and constants, introduced in the metamodel context. The assignments are done in the axioms of the model context. Therefore, being used by a composite machine, this context specifies behavior of a concrete LACE program – on the model level. This specification can be model checked and animated, allowing for the analysis of a
particular LACE program. Model contexts are generated from LACE programs automatically by means of model transformations.
According to the generic instantiation technique, if all structural properties, defined in the metamodel context, can be derived for the structure, instantiated in the model context, then the verification of the Event-B specification can be extended straightforwardly from the metamodel level to the model level [4]. In [13] and [6] it is proposed to use theorem proving to show this derivation.
Due to the large sizes of the model contexts generated from LACE programs, the automatic provers of Rodin fail to discharge the instantiation theorems. On the other hand, we do not expect an average DSL user to prove these theorems using the Rodin interactive provers, as this requires knowledge of propositional calculus and an understanding of proof strategies. Therefore, instead of theorem proving, we employ evaluation of the structural property predicates in the ProB animator integrated in Rodin [11]. Thus, we achieve an automatic proof of instantiation in Event-B.
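The idea of replacing an instantiation proof by predicate evaluation can be illustrated with the following hypothetical sketch; the axiom names, sets and constants below are invented for illustration and are not the actual LACE metamodel context.

```python
# Illustrative sketch (not the actual LACE axioms): instead of proving an instantiation
# theorem, evaluate each structural property of the metamodel context as a predicate
# over the concrete sets and constants generated into the model context.
metamodel_axioms = {
    # every subsystem action belongs to exactly one subsystem
    "axm_owner_total": lambda m: set(m["owner"].keys()) == set(m["actions"]),
    # the execution order relates known subsystem actions only
    "axm_order_wf": lambda m: all(a in m["actions"] and b in m["actions"]
                                  for (a, b) in m["order"]),
}

model_context = {  # generated from a concrete DSL program
    "actions": {"PositionObject", "AdjustFramePosition", "GrabAFrame"},
    "owner": {"PositionObject": "Stage", "AdjustFramePosition": "Stage", "GrabAFrame": "Sensor"},
    "order": {("PositionObject", "GrabAFrame"), ("AdjustFramePosition", "GrabAFrame")},
}

failed = [name for name, pred in metamodel_axioms.items() if not pred(model_context)]
print("instantiation OK" if not failed else f"violated axioms: {failed}")
```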
4.2 Composition of Event-B specification
As we mentioned before, capturing semantics of LACE within Event-B machines is rather complicated due to their different abstraction levels. To handle this complexity we employ modularity of LACE semantics. Each module is described separately as a conceptual machine in Event-B. Composition of the conceptual machines gives a resulting Event-B machine, which specifies the LACE dynamic semantics. The modular approach facilitates development, understandability and proving the correctness of the specification. We distinguish two types of modularity in the dynamic semantics of LACE: semantic modularity and architectural modularity.
To manage the complexity of the LACE semantics we decompose it into separate semantic features (SFs): Core SF, Order SF, Scan SF and Data SF. The Core SF specification defines common concepts and interfaces: logical actions, consisting of subsystem actions, subsystems and events for requesting execution of logical actions and subsystem actions. Order SF, Scan SF and Data SF are specified independently on the basis of the Core SF machine by adding extra variables, invariants, parameters, guards and actions to the Core SF machine and by changing some of the Event-B types used in it. Order SF introduces a partial order of execution of subsystem actions within a logical action. Scan SF joins subsystem actions into scans. Data SF introduces input and output parameters and dataflow within a logical action. The composition of semantic features is implemented via weaving the Event-B code of the machines in the model transformation (the self-referential M2M arrow in Figure 3).
The LACE implementation consists of different software components, such as: logical action components (LAC), that translate logical action requests into subsystem actions, and subsystems (SS), that execute subsystem actions. This architectural modularity of LACE is implemented in Event-B using the shared event composition approach [14]. Software modules are specified in separate machines, which are then composed into one composite machine specifying the
whole system. The interaction of the modules is implemented via composition (or in fact, synchronization) of the events of the composing machines. Composition of events means conjunction of the events’ guards and composition of the events’ actions in one composite event. The composition of the LAC and SS machines is implemented using model transformation (the M2M arrow from conceptual machines to composite machines in Figure 3).
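A minimal sketch of shared event composition, assuming events are modeled as (guard, action) pairs: the composite event conjoins the guards and combines the actions. The LAC/SS event names below are illustrative only.

```python
# Sketch of shared event composition: the composite event is enabled only when both
# component events are enabled (conjunction of guards), and its effect is the combined
# effect of both component actions.
def compose(event_a, event_b):
    guard_a, action_a = event_a
    guard_b, action_b = event_b

    def guard(state):
        return guard_a(state) and guard_b(state)

    def action(state):
        action_a(state)
        action_b(state)

    return guard, action

# LAC requests a subsystem action while SS accepts it: both must agree for the
# composite "dispatch" event to occur (names are illustrative).
lac_request = (lambda s: s["pending"], lambda s: s.update(pending=False))
ss_accept = (lambda s: s["idle"], lambda s: s.update(idle=False, running=True))
dispatch_guard, dispatch_action = compose(lac_request, ss_accept)

state = {"pending": True, "idle": True, "running": False}
if dispatch_guard(state):
    dispatch_action(state)   # state: pending=False, idle=False, running=True
```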
As a result of the intersection of two types of specification modularity, eight conceptual machines and four composition schemes need to be specified: for each semantic feature we specify a conceptual machine of each software module (LAC and SS) and a scheme of the interaction of LAC with SS. An Event-B machine that specifies the LACE semantics as a whole is composed of LAC and SS machines, that include Event-B code for all four semantic features, according to the compositional schemes of all four semantic features. Two dimensions of the modularity presented above simplify creation, verification and validation of Event-B components and maintenance of the model transformations.
4.3 Implementation
The LACE-to-Event-B transformations, described in Sections 4.1 and 4.2, were implemented using the Operational QVT (Query/View/Transformation) language [1] in the Eclipse environment. The input for the transformation is provided directly by the LACE implementation software, which employs model transformation and code generation techniques in the Borland Together environment, and therefore is compatible with EMF (Eclipse Modeling Framework). As a target metamodel for the transformation we use the Event-B Ecore implementation provided by the EMF framework for Event-B [15].
The LACE-to-Event-B transformation is designed in a modular way, which follows the logic of the instantiation and composition techniques described in Sections 4.1 and 4.2. Thus, the transformation can be reused and generalized.
<table>
<thead>
<tr>
<th>Event-B components</th>
<th>Semantic features</th>
<th>core+scan+order+data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metamodel context</td>
<td>3 constants, 5 axioms, 20 constants, 7 axioms</td>
<td>4 constants, 8 axioms, 23 constants, 10 axioms</td>
</tr>
<tr>
<td>Model context</td>
<td>3 constants, 8 axioms, 21 constants, 8 axioms</td>
<td>4 constants, 9 axioms, 23 constants, 10 axioms</td>
</tr>
<tr>
<td>LAC machine</td>
<td>3 events, 21 POs, 3 events, 7 POs</td>
<td>4 events, 26 POs, 28 POs, 4 events</td>
</tr>
<tr>
<td>SS machine</td>
<td>3 events, 11 POs, 3 events</td>
<td>3 events, 7 POs, 11 POs</td>
</tr>
<tr>
<td>Composition of LAC and SS machines</td>
<td>10 events, 70 POs, 386 POs</td>
<td>10 events, 89 POs, 491 POs</td>
</tr>
</tbody>
</table>
Table 1: Characteristics of the LACE-to-Event-B transformation
Table 1 shows the representative characteristics of the transformation: sizes of the metamodel contexts vs. model contexts and sizes of the conceptual machines (LAC and SS machines for the core, scan, order and data semantic features) vs. composite machines (bottom row). The automatically generated Event-B components are shaded. As input for the transformation the LACE program depicted in Figure 1 is used. All proof obligations (POs) of the LAC and SS machines are discharged by invocation of the automatic provers in Rodin. The proof obligations of the composite machines (bottom row) can be left undischarged, as these are inherited proof obligations of the LAC and SS machines (according to the shared event composition approach [14]). The Event-B machine that specifies the LACE semantics as a whole is located in the bottom right cell of the table. One can observe that this machine is much larger and has many more proof obligations than the conceptual machines of which it is composed.
To make it convenient for a LACE user to work with Event-B we developed a graphical visualization of the LACE specification using the BMotion Studio plugin [10]. This visualization runs together with the ProB animator and provides a GUI (graphical user interface) for the machine being animated. The GUI is based on the original LACE notation. By experimenting with a LACE program specification using this GUI, a user can get a better understanding of the DSL design and improve the efficiency of her programs. Screen shots of the visualization can be found on the web page of our project.2
5 Related Work
There are a number of studies in which Event-B has been applied to a specification of the dynamic semantics of a DSL. Ait-Sadoune and Ait-Ameur employ Event-B and Rodin for proving properties and animation of BPEL processes [5]. Hoang et al. use Event-B and Rodin to automate analysis of Shadow models [9]. In both studies, DSL program descriptions are translated into Event-B specifications. The translations are implemented in the Java programming language. These works do not use generic instantiation and composition techniques, but apply refinement of Event-B machines [4] to implement modularity of the programs. Based on our experience, refinement restricts semantics definition and can be rather complicated for automatic proving. Moreover, we use model transformations to implement the generation of Event-B specifications, which increases the abstraction level of the translation and therefore enhances its reuse.
Besides Event-B, other formalisms have been used as a target formal domain for specifying semantics of DSLs. Chen et al. propose transformational specification of dynamic semantics using Abstract State Machines (ASM) as a target formalism and explore specified behavior by means of the AsmL simulator tool [8]. Moreover, they introduce semantic units as an intermediate common language for defining dynamic semantics of DSLs and explore a technique for their composition [7]. Another approach, that supports reuse of the DSL analyses via intermediate specification modules, is proposed by Ratiu et al. [12]. They
2 www.win.tue.nl/mdse/COREF
identify conceptually distinct sub-languages, shared by different DSLs, and transform these to different analysis formalisms. These works support modularity of a DSL specification by modularizing target formalisms. In this paper we describe how modularity of the DSL specification arises from the modularity of the DSL semantics, and apply model transformations to compose semantic modules.
6 Conclusion
In this paper we showed how the dynamic semantics of an industrial DSL can be defined using the Event-B formalism and model transformations. The Rodin platform and its plug-ins provide a broad spectrum of functionality and analysis tools for Event-B specifications. Our objective was to adopt Event-B for the industrial use cases of two major roles: DSL users and DSL developers. This was achieved by using MDE techniques – model transformations that define the semantic mapping from the DSL domain to Event-B.
In order to specify semantics of LACE in a modular and scalable way we introduce semantic features and specify them in conceptual Event-B machines. The conceptual machines are verified on the metamodel level using automatic provers of Rodin. The LACE-to-Event-B transformation composes the conceptual machines into the LACE specification and instantiates this specification for concrete LACE programs. The resulting Event-B specifications can be validated and model checked by DSL developers and can be simulated by DSL users in the user-friendly GUI – all in the Rodin environment.
As future work we aim to apply the demonstrated techniques to other DSLs. For this we need to generalize the LACE-to-Event-B transformation by identifying repetitive Event-B code that can be combined into fine-grained specification patterns. This will allow not only for reuse of the demonstrated techniques of instantiation and composition, but also for reuse of already verified and visualized pieces of specification.
7 Acknowledgements
We are very grateful to Marc Hamilton and Wilbert Alberts (ASML, The Netherlands) for introducing us to the LACE world and providing very useful feedback on our experiments. We would like to thank Michael Butler and Colin Snook (University of Southampton, United Kingdom) for their help with using Event-B and Rodin. We also would like to thank Anton Wijs and Alexander Serebrenik (Eindhoven University of Technology, The Netherlands) for their useful comments on this paper.
References
---
Abstract
When a Diameter server or agent becomes overloaded, it needs to be able to gracefully reduce its load, typically by informing clients to reduce sending traffic for some period of time. Multiple mechanisms have been proposed for transporting overload and load information. While these proposals differ in many ways, they share similar data requirements. This document analyzes the data requirements of each proposal with a view towards proposing a common set of Diameter Attribute-Value Pairs (AVPs).
Status of this Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 22, 2013.
Copyright Notice
Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document.
Table of Contents
1. Introduction
2. Documentation Conventions
3. Overload Control Data Usage
4. Mechanism Differences that Affect Data Structures
   4.1. Non-Adjacent Nodes
   4.2. Stateless Negotiation
   4.3. Overload Scopes
   4.4. Hard or Soft Overload State
5. Naming Conventions
6. Data Element Comparison
   6.1. Data Elements for Connection Establishment and Negotiation
        6.1.1. Supported Scope Selection
        6.1.2. Algorithm Selection
        6.1.3. Application Selection
        6.1.4. Frequency of Reports
        6.1.5. Grouping
   6.2. Data Elements for Overload and Load Reporting
        6.2.1. Scope of Report
        6.2.2. Overload Severity
        6.2.3. Report Algorithm
        6.2.4. Report Expiration
        6.2.5. Current Load
        6.2.6. Applications Covered by a Report
        6.2.7. Report Action
        6.2.8. Priority
        6.2.9. Session Groups
   6.3. Result Codes
7. IANA Considerations
8. Security Considerations
9. References
   9.1. Normative References
   9.2. Informative References
Appendix A. Contributors
Authors' Addresses
1. Introduction
When a Diameter [RFC6733] server or agent becomes overloaded, it needs to be able to gracefully reduce its load, typically by informing clients to reduce sending traffic for some period of time. The Diameter Overload Control Requirements [I-D.ietf-dime-overload-reqs] describe requirements for overload control mechanisms.
At the time of this writing, there have been two proposals for Diameter overload control mechanisms. "A Mechanism for Diameter Overload Control" (MDOC) [I-D.roach-dime-overload-ctrl] defines a mechanism that piggybacks overload and load state information over existing Diameter messages. "The Diameter Overload Control Application" (DOCA) [I-D.korhonen-dime-ovl] defines a mechanism that uses a new and distinct Diameter application to communicate similar information. While there are significant differences between the two proposals, they carry similar information. Each proposal includes its own set of Diameter AVPs.
This document is intended as a framework for discussing the data requirements of the two proposals. It includes an analysis of the differences and similarities of their respective data elements, with a view towards rationalizing the AVPs from the two proposals.
The authors expect that a follow-on effort will eventually specify a common data model for reporting Diameter overload information.
This document assumes that Diameter nodes exchange overload control information via Diameter, rather than via some out-of-band channel. This document does not address the specific differences between the mechanism proposals, except where they impact the AVP definitions.
2. Documentation Conventions
This document uses terms defined in [RFC6733] and [I-D.ietf-dime-overload-reqs].
3. Overload Control Data Usage
A Diameter overload control mechanism based on the overload control requirements [I-D.ietf-dime-overload-reqs] involves the exchange of information between two or more Diameter nodes. The exchanged information serves three distinct purposes:
Negotiation: Diameter nodes need to negotiate support for the overload control mechanism in general. Nodes that support overload control need to advertise the overload control scopes they can support. Finally, they need to select an overload control algorithm.
Communication of Overload State: Nodes need to report that an overload condition is in effect, to what degree they are overloaded, and the scope of the overload condition. We refer to such a communication as an "Overload Report".
Communication of Load: Nodes need to communicate their current load status, even when not in an overloaded state.
Overload Control information may be communicated between adjacent Diameter nodes, or it may cross one or more intervening nodes. Overload Control information can be communicated in either direction; that is, a downstream node can indicate overload to an upstream node, or vice-versa.
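As a rough illustration of the data such a mechanism carries, the following hypothetical Python container groups the kinds of information discussed in this document (scope, severity, algorithm, validity, load, origin); the field names are illustrative and are not the AVP names of either proposal.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical container for the information both proposals carry in an overload
# report; field names are illustrative and do not correspond to specific AVPs.
@dataclass
class OverloadReport:
    scopes: List[Tuple[str, object]]   # e.g. [("realm", "example.com")]
    severity: int                      # level or algorithm-specific metric
    algorithm: str                     # e.g. "loss"
    validity_seconds: int              # how long the report remains in effect
    load: Optional[int] = None         # current load, reported even without overload
    origin: Optional[str] = None       # only needed if non-adjacent reporting is allowed

report = OverloadReport(
    scopes=[("realm", "example.com")],
    severity=20,
    algorithm="loss",
    validity_seconds=30,
    load=55,
)
```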
Open Issue: There is an ongoing discussion about whether the overload control mechanism should be strictly hop-by-hop, or whether it should support communication between non-adjacent nodes. The results of this discussion may have implications for overload control data elements.
4. Mechanism Differences that Affect Data Structures
While a thorough comparison of the two proposed mechanisms is out of scope for this document, there are a few differences that directly impact the choice of data elements.
4.1. Non-Adjacent Nodes
MDOC only supports hop-by-hop communication of overload information. DOCA allows for the possibility of communication between non-adjacent nodes. For hop-by-hop communication, the originator of an overload report is always the directly connected node. If non-adjacent communication is to be allowed, the data model needs a way to express the identity of the originating node.
4.2. Stateless Negotiation
Both MDOC and DOCA allow overload control parameters to be negotiated at the beginning of a connection, and persist for the duration of the connection. DOCA also allows a "stateless" mode, where the parameters do not persist between overload reports. This requires
the sender of an overload report to restate any relevant parameters for each report. Thus, the DOCA overload report format includes the ability to express all such parameters at any time, not just during negotiation.
Note that stateless negotiation does not mean that no state may ever be saved. Nodes may use implementation-specific methods of remembering certain parameters, or out-of-band configuration methods to do the same.
### 4.3. Overload Scopes
As described in [I-D.ietf-dime-overload-reqs], it's possible for a Diameter node to experience overload that impacts some subset of potential traffic. For example, a Diameter agent might route traffic to different servers based on realm. If the server for one realm experienced an outage or overload condition, the agent can report that it is overloaded for that realm, but can process traffic for other realms normally. We use the term "overload scope", or simply "scope", to refer to the set of potential messages affected by an overload report.
MDOC includes a richer (and therefore more complex) concept of overload scopes. A node may include multiple scopes in an overload report. Each scope entry indicates both the type of scope, and the value of the scope, where the value is interpreted according to the type.
DOCA also allows a node to include multiple scopes in a report. But DOCA's current set of scope types only affect the interpretation of the originating node identity. Therefore the DOCA scope entries do not include a value.
### 4.4. Hard or Soft Overload State
MDOC assumes that overload information is soft state. That is, it expires if not refreshed within a stated interval. DOCA also treats most overload information as soft state, but there are situations where it may be treated as hard state. For example, if the OC-Level is set to "Hold", the expiration time is not honored.
### 5. Naming Conventions
MDOC and DOCA use somewhat different naming conventions for their respective AVPs. DOCA prefixes each AVP name with "OC" (for example, "OC-Scope"). MDOC prefixes AVPs that can appear in the root of messages with "Overload", and leaves those that occur inside an overload-related grouped AVP to be identified by context (for example, "Overload Info" and "Supported Scopes"). The working group should consider picking one approach or the other.
6. Data Element Comparison
6.1. Data Elements for Connection Establishment and Negotiation
The following sections describe data elements used for initial negotiation.
6.1.1. Supported Scope Selection
- **DOCA:** OC-Scope: Bitmap of scopes supported by the sender. Currently defined values are "Host scope", "Realm Scope", "Only origin realm", "Application Information", "Node Utilization Information", and "Application Priorities".
- **MDOC:** Supported-Scopes: Bitmap of scopes supported by the sender. Currently defined values are "Destination-Realm", "Application-ID", "Destination-Host", "Host", "Connection", "Session-Group", and "Session".
DOCA uses OC-Scope both to declare supported scopes, and to list the scopes associated with a particular overload report. MDOC uses separate dedicated AVPs for the two purposes. DOCA overloads OC-Scope to include indicators that load information and priority information may be included.
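The following sketch shows one way a supported-scopes bitmap could be encoded and intersected during negotiation; the bit assignments are invented for the example and are not the values defined by either draft.

```python
# Illustrative encoding of a "supported scopes" bitmap; the bit positions here are
# made up for the example and are not the values assigned by either draft.
SCOPE_BITS = {"destination-realm": 1 << 0, "application-id": 1 << 1,
              "destination-host": 1 << 2, "host": 1 << 3,
              "connection": 1 << 4, "session-group": 1 << 5, "session": 1 << 6}

def encode_scopes(names):
    return sum(SCOPE_BITS[n] for n in names)

def decode_scopes(bitmap):
    return [n for n, bit in SCOPE_BITS.items() if bitmap & bit]

advertised = encode_scopes(["destination-realm", "host", "session-group"])
common = decode_scopes(advertised & encode_scopes(["host", "session"]))  # -> ["host"]
```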
6.1.2. Algorithm Selection
- **DOCA:** OC-Algorithm: Bitmap of supported algorithms. Currently defined values are "Drop", "Throttle", and "Prioritize". Multiple values allowed.
- **MDOC:** Overload-Algorithm: Enumeration of supported algorithms. Multiple instances allowed in negotiation. Currently, there is one algorithm described, namely "loss".
Both mechanisms support algorithm extensibility. MDOC only allows Overload-Algorithm to occur in a CER or CEA message, and negotiates a single algorithm for the duration of the connection. DOCA allows the algorithm to be selected at report time. (Open Issue: what does it mean to indicate multiple algorithms in a congestion report?)
6.1.3. Application Selection
- DOCA: OC-Applications: Indications of the applications that are of interest.
- MDOC: MDOC assumes that overload reports can apply to any and all applications, and does not negotiate the list upfront. The "application" scope is used to select one or more applications on a per-report basis.
Open Issue: Are there use cases for the up front negotiation of applications of interest?
6.1.4. Frequency of Reports
- DOCA: OC-ToCl: Indicates how frequently reports shall be sent.
- MDOC: N/A
Since MDOC piggybacks overload reports in existing messages, the rate of overload reports is the same as the overall message rate. This may have the advantage of giving more rapid and precise feedback as load increases.
Open Issue: We need further discussion about the appropriate rate(s) for overload reporting, regardless of which mechanism may be selected.
6.1.5. Grouping
- DOCA: n/a - negotiation AVPs included at message root.
- MDOC: Load-Info: Grouped AVP acting as a container for the other AVPs used for negotiation.
6.2. Data Elements for Overload and Load reporting
6.2.1. Scope of Report
- DOCA: OC-Scope (See Section 6.1.1)
- MDOC: Load-Info-Scope: Octet-String giving the scope of the overload report. The string contains a type indicator and a value. One or more instances required.
MDOC has a richer and more complex concept of scopes. Multiple scopes can be combined for a given overload report. Allowable scope combinations are described in [I-D.roach-dime-overload-ctrl].
6.2.2. Overload Severity
- DOCA: OC-Level: OctetString(1): Values 1-6 define discrete overload levels of increasing severity, with 1 meaning no overload condition, and 6 meaning clients should switch to a different server.
- DOCA: OC-Sending-Rate: Float32: Used when the "throttle" algorithm is in effect to indicate the maximum desired Diameter message rate.
- MDOC: Overload-Metric (Unsigned32): A numeric representation of load. The meaning is up to the interpretation of the selected algorithm, with the exception that a value of zero always means that no overload abatement is in effect. For the "Loss" algorithm, Overload Metric is a numeric value in the range of zero through 100, indicating the percentage of traffic reduction requested.
The Overload-Metric AVP used by MDOC is more general than OC-Level, in that its interpretation is left to the algorithm. The meanings of the OC-Level values appear to be fixed regardless of algorithm choice. The OC-Level meanings could be used in MDOC by defining a new algorithm that interpreted Overload-Metric values 1-6 in the same way as defined for OC-Level.
Since MDOC does not define an algorithm similar to "throttle", it has no built-in analog to OC-Sending-Rate. However, since MDOC allows algorithm extensibility, one could define a similar algorithm, and if necessary, add an extension AVP to state the sending rate.
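As an illustration of how a reacting node might honor a "loss" style report, the sketch below sheds the requested percentage of requests; this is an example interpretation, not a normative algorithm definition.

```python
import random

# Sketch of a client honoring a "loss" style report: the metric gives the
# percentage of requests to shed (0 = no abatement, 100 = send nothing).
def should_send(overload_metric: int) -> bool:
    return random.uniform(0, 100) >= overload_metric

sent = sum(should_send(overload_metric=25) for _ in range(10_000))
print(f"sent roughly {sent / 100:.0f}% of requests")  # ~75% with a 25% reduction request
```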
6.2.3. Report Algorithm
- DOCA: OC-Algorithm (See Section 6.1.2)
- MDOC: The overload control algorithm is set during negotiation, and doesn't change for the duration of the connection.
Open Issue: DOCA's reuse of the OC-Algorithm AVP seems to allow more than one algorithm to be assigned to a single overload report. It's not clear what that would mean.
6.2.4. Report Expiration
- MDOC: Period-Of-Validity (Unsigned32)- Number of seconds until expiration.
DOCA defines expiration to be a point in time. MDOC uses a duration, i.e. number of seconds until expiration. The DOCA approach seems to require clock synchronization.
DOCA contains an open issue about whether to allow reports to expire vs. requiring explicit signaling.
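A small sketch of the duration-based approach, assuming the receiver converts the duration into a local expiry instant on receipt, which avoids any need for clock synchronization:

```python
import time

# Sketch: a validity duration (seconds until expiration) is converted into a local
# expiry instant when the report is received, so peers need no synchronized clocks.
def expiry_from_duration(period_of_validity_s, received_at=None):
    if received_at is None:
        received_at = time.monotonic()
    return received_at + period_of_validity_s

def report_expired(expiry):
    return time.monotonic() >= expiry

expiry = expiry_from_duration(30)   # the report stays in effect for 30 seconds
print(report_expired(expiry))       # False immediately after receipt
```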
6.2.5. Current Load
- DOCA: OC-Utilization: Indicates the overall load situation as a value between 0 and 100.
- MDOC: Load: The load situation in terms of 0 - 65535.
Current load indicates the existing load on an otherwise non-overloaded node. MDOC's range of 0-65535 was selected to harmonize with the DNS service location (SRV) [RFC2782] record's "Weight" field.
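The harmonization with SRV weights can be illustrated by a weighted selection sketch; whether the advertised value is used directly as a weight or first inverted into remaining capacity is an assumption of this example, not something either draft specifies.

```python
import random

# Sketch of weighted server selection in the spirit of the SRV "Weight" field: each
# candidate advertises a value in 0..65535 and is picked with probability
# proportional to it.
def pick_server(weights):
    total = sum(weights.values())
    if total == 0:
        return random.choice(list(weights))
    threshold = random.uniform(0, total)
    running = 0
    for server, weight in weights.items():
        running += weight
        if threshold <= running:
            return server

servers = {"server-a": 40000, "server-b": 20000, "server-c": 5535}
print(pick_server(servers))
```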
6.2.6. Applications covered by a Report
- DOCA: OC-Applications: Indications what applications are of interest for load reporting.
- MDOC does not use a separate AVP for this purpose. Rather, one or more applications can be indicated using the application scope type.
6.2.7. Report Action
- DOCA: OC-Action: Indicates the start, interim, and end of an overload period.
- MDOC: MDOC does not have a separate AVP to indicate the start and stop of an overload condition. Rather, a report with a non-zero Overload-Metric value starts the condition, and a report with a zero value, or the expiration of the Period-of-Validity value, indicate an end. Subsequent reports with non-zero Overload-Metric values serve the same purpose as a DOCA report with an OC-Action value of "interim".
Open Issue: Is OC-Action redundant? DOCA also has the ability to express a non-overload condition in OC-Level, so an approach similar to that of MDOC should be workable.
6.2.8. Priority
- DOCA: OC-Priority: Unsigned32: When used in an OC-Information AVP, sets the relative priority of applications listed in OC-Applications. As specified, may also be used to set the priority of a given Diameter message. [Open Issue: Is OC-Priority only in effect when the "Prioritize" algorithm is in effect?]
- MDOC: N/A
MDOC does not have an explicit priority data element. Relative priority between applications can be managed using the "Application" scope. This is not exactly the same as stating inter-application priority explicitly, but it may be possible to accomplish similar behavior.
6.2.9. Session Groups
- DOCA: N/A
- MDOC: Session-Group: UTF8String: Session-Group allows a node to assign a session to a named group. Overload Reports can refer to all sessions in a group using the Session-Group AVP.
A common application for Session-Group is when a Diameter agent load balances Diameter sessions across a set of servers. If the agent assigns all of the sessions routed to a particular server to a group, and that server later becomes overloaded, the agent can send one overload report that applies to all sessions in the group, but does not apply to sessions assigned to other, non-overloaded, servers.
DOCA may be able to do something similar by using the OC-Origin AVP to identify the overloaded server. However, the server-group approach can work even if the Diameter agent performs topology hiding.
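A sketch of the session-group idea, with invented group names: the agent tags each session with the group of the server it was balanced to, and a group-scoped report then affects only those sessions.

```python
# Sketch of the Session-Group idea: the agent tags each session with the group of the
# server it was load-balanced to; a group-scoped overload report then affects only
# those sessions, even if the server identity itself is hidden from clients.
class Agent:
    def __init__(self):
        self.group_of_session = {}
        self.throttled_groups = {}

    def assign(self, session_id, server):
        self.group_of_session[session_id] = f"group-{server}"

    def on_overload_report(self, group, overload_metric):
        self.throttled_groups[group] = overload_metric

    def reduction_for(self, session_id):
        group = self.group_of_session.get(session_id)
        return self.throttled_groups.get(group, 0)

agent = Agent()
agent.assign("s1", "server-a")
agent.assign("s2", "server-b")
agent.on_overload_report("group-server-a", 50)
print(agent.reduction_for("s1"), agent.reduction_for("s2"))  # 50 0
```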
6.3. Result Codes
DOCA defines the following Diameter result codes:
- DIAMETER_NO_COMMON_SCOPE (Permanent Failure): The Diameter peers are unable to negotiate one or more scopes in common.
- DIAMETER_NO_COMMON_ALGORITHM (Permanent Failure): The Diameter peers are unable to negotiate one or more algorithms in common.
- **DIAMETER_TOCL_TOO_SMALL** (Permanent Failure): The peer included an OC-TOCL AVP with an unacceptably low value.
- **DIAMETER_TOCL_TOO_BIG** (Permanent Failure): The peer included an OC-TOCL AVP with an unacceptably high value.
- **DIAMETER_RATE_TOO_BIG** (Permanent Failure): The peer included an OC-SENDING-RATE AVP with an unacceptably high value.
A failure to negotiate Overload Control support does not cause a connection failure in MDOC. Instead, overload control is just not invoked on the connection.
MDOC defines the following result codes:
- **DIAMETER_PEER_IN_OVERLOAD** (Transient Failure): When a Diameter node drops a request due to overload, it responds with this result code. This is primarily used when the peer does not support overload control, and therefore fails to reduce load as it would be expected to do if it supported overload control.
**DIAMETER_PEER_IN_OVERLOAD** may be of value to both mechanisms. The Overload Control Requirements [I-D.ietf-dime-overload-reqs] argues that the result codes in the Diameter base protocol are insufficient for reporting failures due to congestion.
7. **IANA Considerations**
This draft makes no requests of IANA. The authors expect that a follow-on effort will specify a common set of Overload Control AVPs. This may introduce additional IANA considerations.
8. **Security Considerations**
This document compares the data elements used by DOCA [I-D.korhonen-dime-ovl] and MDOC [I-D.roach-dime-overload-ctrl]. It introduces no security considerations beyond those in the respective documents.
The authors expect that a follow-on effort will specify a common set of Overload Control AVPs. This may introduce additional security considerations.
The authors made no attempt to analyze the security considerations in the DOCA and MDOC specifications for completeness.
9. References
9.1. Normative References
[I-D.ietf-dime-overload-reqs]
[I-D.roach-dime-overload-ctrl]
[I-D.korhonen-dime-ovl]
Korhonen, J., "Diameter Overload Control Application", draft-korhonen-dime-ovl-00 (work in progress), October 2012.
9.2. Informative References
Appendix A. Contributors
Eric McMurry made significant contributions to the analysis in this draft.
Authors' Addresses
Ben Campbell
Tekelec
17210 Campbell Rd.
Suite 250
Dallas, TX 75252
US
Email: [email protected]
---
Release Notes
3-Heights™ and Classic PDF Tools
Version 4.4
Contact: [email protected]
Owner: PDF Tools AG
Kasernenstrasse 1
8184 Bachenbüelach
Switzerland
www.pdf-tools.com
© 2014 PDF Tools AG – Premium PDF Technology
# Table of Contents
1. **Overview**
   1.1 How to Download the Trial Software
   1.2 Technical Support
   1.3 System Requirements
   1.4 Compiler Versions
2. **New Products**
   2.1 3-Heights™ PDF Merge Split Shell
   2.2 3-Heights™ Digital Signature Service
   2.3 New implementation of 3-Heights™ PDF to Image Converter R2
3. **New Features to all Products**
   3.1 Enhancements to all 3-Heights™ Products
   3.2 Enhancements to all Classic Products
4. **New Features to Specific Products**
   4.1 3-Heights™ AFP to TIFF Conversion Utility
   4.2 3-Heights™ CrypTokl Certificate Utility
   4.3 3-Heights™ Document Assembler
   4.4 3-Heights™ Document Converter
   4.5 3-Heights™ Font to PDF Utility
   4.6 3-Heights™ Image Compare Utility
   4.7 3-Heights™ Image to PDF Converter
   4.8 3-Heights™ JPM to PDF Converter
   4.9 3-Heights™ OCR Enterprise Add-on
   4.10 3-Heights™ PDF Analysis & Repair
   4.11 3-Heights™ PDF Annotation API
   4.12 3-Heights™ PDF Compare Utility
   4.13 3-Heights™ PDF Creator Library
   4.14 3-Heights™ PDF Extract
   4.15 3-Heights™ PDF Merge Split
   4.16 3-Heights™ PDF Optimization
   4.17 3-Heights™ PDF Page Split Tool
   4.18 3-Heights™ PDF Printer
   4.19 3-Heights™ PDF Producer
   4.20 3-Heights™ PDF Security
   4.21 3-Heights™ PDF Studio Utility
   4.22 3-Heights™ PDF Thumbnail Utility
   4.23 3-Heights™ PDF to EMF Converter
   4.24 3-Heights™ PDF to Image Converter
   4.25 3-Heights™ PDF to Image Converter R2
   4.26 3-Heights™ PDF to PDF/A Converter
   4.27 3-Heights™ PDF Uncompress Utility
   4.28 3-Heights™ PDF Validator
   4.29 3-Heights™ PDF Viewer
   4.30 3-Heights™ Text to PDF Converter
   4.31 3-Heights™ TIFF Tool Suite
   4.32 3-Heights™ XMP Generator
   4.33 Classic Command Line Suite
   4.34 Classic PDF Prep Tool Suite
5. **About PDF Tools AG**
1 Overview
The 3-Heights™ PDF tools represent the most recent product line from PDF Tools AG. The 3-Heights™ PDF tools are available as programming libraries (APIs), command line tools and Windows services. The tools allow for a wide variety of manipulation of PDF files including viewing, printing, extracting of information, conversion, digitally signing, validation, repairing, and optimization.
The Classic PDF Tools (formerly known as GLANCE Tools) represent the original product line from PDF Tools AG. Similar to the 3-Heights™ product line the Classic PDF Tools are available as programming libraries (APIs) and command line shell versions but not Windows services. The tools are packaged in two products, the Command Line Suite and the Prep Tool Suite.
1.1 How to Download the Trial Software
The 3-Heights™ PDF tools can be downloaded from the product description pages on our website. There is no charge for downloading evaluation versions (valid for a 30 day period).
1.2 Technical Support
Please report problems by contacting our support team by email:
[email protected]
1.3 System Requirements
All of the tools are available for:
- Windows XP
- Windows Vista
- Windows 7
- Windows 8
- Windows Server 2003
- Windows Server 2003 R2
- Windows Server 2008
- Windows Server 2008 R2
- Windows Server 2012
Note: Windows 9.x, Windows NT 4 and Windows 2000 are no longer supported by this version.
Some tools are only available on Windows platforms. These are:
- PDF Producer
- Document Converter
- The components which are based on the rendering engine: PDF Viewer, PDF Printer, PDF to Image Converter, PDF to EMF Converter
Most of the tools, however, are also available for the following platforms and will also run on newer versions of the same OS family:
- IBM AIX 5.1
- Sun Solaris / SPARC 5.8
- HP-UX 11i incl. IA64
- Sun Solaris / Intel 5.10
- Linux 2.4 (RedHat)
- Mac OS X 10.4 x86 / x64
- Linux 2.6 (SuSE)
Other platforms are available on request. Please refer to the individual product pages to obtain information on supported operating system platforms.
### 1.4 Compiler Versions
The release kits are generated with the following compiler versions for C/C++:
<table>
<thead>
<tr>
<th>Platform</th>
<th>C/C++</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>MSVC 10.0</td>
</tr>
<tr>
<td>MAC OS/X 10.5 x86</td>
<td>gcc 4.0.1</td>
</tr>
<tr>
<td>Intel/x86 Linux</td>
<td>gcc 4.1.2</td>
</tr>
<tr>
<td>Intel/x64 Linux</td>
<td>gcc 4.3.4</td>
</tr>
<tr>
<td>FreeBSD</td>
<td>gcc 3.4.2</td>
</tr>
<tr>
<td>Sun Solaris 2.8/SPARC</td>
<td>gcc 3.4.6</td>
</tr>
<tr>
<td>Sun Solaris 2.8/Intel</td>
<td>gcc 3.4.3</td>
</tr>
<tr>
<td>IBM AIX 5.1</td>
<td>gcc 4.2.4</td>
</tr>
<tr>
<td>HP UX 11.i (11.23)</td>
<td>gcc 4.1.2</td>
</tr>
<tr>
<td>HP UX 11.23 IA64</td>
<td>gcc 4.6.0</td>
</tr>
</tbody>
</table>
For Java, version 1.5 is required as minimum runtime version.
2 New Products
2.1 3-Heights™ PDF Merge Split Shell
The purpose of the 3-Heights™ PDF Merge Split Shell is to either merge several PDF files into one output file or to split one or several PDF files into several output files. A special feature is the component's ability to process and create PDF/A-compliant files.
The shell version offers options as extensive as those of the API version. For instance, resources can be optimized on the fly, the output file can be optimized for the web, annotations, form fields and signature appearances can be flattened, and metadata can be copied or set.
The PDF Merge Split Shell is a new implementation built on top of the 3-Heights™ kernel. The Command Line Suite offers a similar function in the pdcat tool which is based on the kernel of the Classic product line.
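For orientation only, the following minimal Python sketch shows what a basic merge and split of PDF files looks like using the open-source pypdf library. It is not the 3-Heights™ API and does not reproduce features such as PDF/A compliance, linearization or flattening; the file names are placeholders.

```python
# Generic merge/split illustration using the open-source pypdf library.
# This is NOT the 3-Heights(TM) PDF Merge Split API; file names are placeholders.
from pypdf import PdfReader, PdfWriter

# Merge: append two input files into one output file.
merger = PdfWriter()
for name in ["part1.pdf", "part2.pdf"]:
    merger.append(name)                  # copies all pages of the input
with open("merged.pdf", "wb") as f:
    merger.write(f)

# Split: write every page of an input file to its own output file.
reader = PdfReader("merged.pdf")
for index, page in enumerate(reader.pages, start=1):
    writer = PdfWriter()
    writer.add_page(page)
    with open(f"page_{index:03d}.pdf", "wb") as f:
        writer.write(f)
```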
2.2 3-Heights™ Digital Signature Service
The main purpose of a signature service is to create the full signature data in response to a signing request from the signature client. The signing request is derived from both the document to be signed and the client's authentication. The service sends the signature data back to the signature client, where it is embedded in the document.
The document itself is not sent to the service; only a hash value of it (similar to a fingerprint) is transmitted. The content of the document cannot be reconstructed from the hash value, so the confidentiality of the document remains guaranteed in all conceivable applications such as patient files, banking data, design drawings, etc. The client and server authenticate each other mutually, and the transaction runs over a secure connection (TLS) protected by a client certificate and a server certificate. With these measures the service can unambiguously attribute each signature to a client.
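To make the hash-based flow concrete, here is a minimal Python sketch of what a signature client conceptually does: it hashes the document locally and sends only the hash over a mutually authenticated TLS connection. The endpoint URL, JSON field names and certificate paths are invented for illustration; the real client uses the OASIS/DSS protocol, which is not reproduced here.

```python
# Conceptual sketch of hash-based remote signing.
# The service URL, JSON fields and certificate paths below are hypothetical;
# the actual client speaks OASIS/DSS, which is not shown here.
import base64
import hashlib
import requests

def request_signature(pdf_path: str) -> bytes:
    # Only the hash of the document leaves the client machine.
    digest = hashlib.sha256(open(pdf_path, "rb").read()).digest()

    response = requests.post(
        "https://signing.example.com/sign",          # hypothetical endpoint
        json={"hash": base64.b64encode(digest).decode(),
              "hashAlgorithm": "SHA-256"},
        cert=("client-cert.pem", "client-key.pem"),  # client TLS authentication
        verify="service-ca.pem",                     # server certificate check
        timeout=30,
    )
    response.raise_for_status()
    # The returned signature data would then be embedded into the PDF
    # by the signature client (e.g. 3-Heights(TM) PDF Security).
    return base64.b64decode(response.json()["signature"])
```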
The service manages the necessary private keys and certificates in a secure and trusted environment for every client. Based on this data the service creates the individual signatures. Advanced certificates are supported according to ZertES and ElDI-V and also qualified certificates conforming to SuisseID.
As an option the service can also generate signatures with long-term validity (LTV) and also integrate a time stamp.
High availability is guaranteed by the redundant design of the service. The service is run by an accredited issuer of certificates which guarantees compliance with all relevant regulations.
The signature client is a software component which can sign data and documents with the help of the signature service from the cloud. The signature client communicates with the service via the OASIS/DSS protocol. The following of our products support the Digital Signature Service:
- 3-Heights™ PDF Security
- 3-Heights™ PDF to PDF/A Converter
- 3-Heights™ Document Converter
**2.3 New implementation of 3-Heights™ PDF to Image Converter R2**
The major improvement of this release is the re-development of the rendering engine. The engine is not available as a separate product; it is a library that forms an essential part of the 3-Heights™ PDF to Image Converter R2 in this release and will be used in other products, such as the 3-Heights™ PDF Printer and 3-Heights™ PDF Viewer, in future releases.
The highlights of the new rendering engine are:
- Full coverage of the upcoming PDF 2.0 specification including transparency groups, patterns, shadings etc.
- High-quality 256 level anti-aliasing
- High-performance ICC color management engine including all device specific, special and calibrated color spaces
- Bi-linear interpolation and low-pass Gauss image filtering
- Specialized glyph rendering based on a combination of hinting and anti-aliasing for optimal legibility
- Improved font replacement algorithm for non-embedded fonts including the on-the-fly generation of multiple master font instances
The complex and unique graphics model of PDF, together with the upcoming ISO standard for PDF 2.0, which contains many clarifications and improvements regarding the rendering of PDF documents, made this development investment necessary.
Whereas the former implementations were all based on existing graphics libraries (GDI, GDI+, etc.), the new implementation has been developed from scratch. This allows for more reliable high-quality rendering, finer performance tuning, and better maintenance and customer support.
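As background on one of the techniques listed above, the following sketch shows textbook bilinear interpolation for sampling a grayscale image at a fractional coordinate. It is purely illustrative and says nothing about how the 3-Heights™ rendering engine implements image resampling internally.

```python
# Textbook bilinear interpolation of a grayscale image at a fractional
# coordinate (x, y). Illustration only; not the 3-Heights(TM) implementation.
import math

def bilinear_sample(image, x, y):
    """image: 2-D list of gray values, indexed image[row][col]."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(image[0]) - 1)
    y1 = min(y0 + 1, len(image) - 1)
    fx, fy = x - x0, y - y0

    top    = (1 - fx) * image[y0][x0] + fx * image[y0][x1]
    bottom = (1 - fx) * image[y1][x0] + fx * image[y1][x1]
    return (1 - fy) * top + fy * bottom

# Example: sample halfway between four pixels.
img = [[0, 100],
       [100, 200]]
print(bilinear_sample(img, 0.5, 0.5))   # -> 100.0
```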
3 New Features to all Products
The following enhancements affect all components and solutions unless otherwise noted.
3.1 Enhancements to all 3-Heights™ Products
- The major improvement of this release is the implementation of a new rendering engine. For a description please read the chapter about new products.
- The font replacement algorithm for non-embedded fonts has been significantly improved by the on-the-fly generation of multiple master font instances.
- The embedding of invoice data conforming to the ZUGFeRD XML format has been adapted to the latest standard. Affected Products: 3-Heights™ PDF to PDF/A Converter, 3-Heights™ Document Converter.
- Support for signature creation using the SuisseID Signing Service of Swiss Post, the Digital Signing Service of Swiss Post and the Swisscom All-in Signing Service.
- Improved performance and reliability of OCSP, CRL and Timestamp caches.
- The Windows development environment has been migrated from Visual Studio 8.0 to Visual Studio 10.0. Some of the tools and libraries now require the C runtime libraries version 10.
- The lcms software library (www.littlecms.com) has been updated to version 2.6.
- The jpeg software library (www.ijg.org) has been updated to version 9a.
- The libpng software library (www.libpng.org) has been updated to version 1.6.14.
- The libxml2 software library (www.xmlsoft.org) has been updated to version 2.9.2.
- Improved thread safety on all Unix platforms.
- Improved on-the-fly repair when opening corrupt PDF documents.
3.2 Enhancements to all Classic Products
- The Windows development environment has been migrated from Visual Studio 8.0 to Visual Studio 10.0. Some of the tools and libraries now require the C runtime libraries version 10.
4 New Features to Specific Products
4.1 3-Heights™ AFP to TIFF Conversion Utility
- No functional changes.
4.2 3-Heights™ CrypTokI Certificate Utility
- Improved switch 'i': The import of X.509 certificates has been enhanced to read PEM or DER encoded data streams.
- Improved switch 'l': The listed certificates are now verified for validity.
4.3 3-Heights™ Document Assembler
- Improved preview pane now uses the 3-Heights™ PDF Viewer OCX for display.
- Annotations can now be added to the output document (Sticky notes, text highlight).
- The output documents can now be stored in the TIFF format in addition to the PDF and PDF/A format.
- The user interface received numerous improvements.
4.4 3-Heights™ Document Converter
- The Document Converter is now entirely based on the Microsoft .NET Framework 4.0. As a consequence, the .NET 2.0 framework no longer needs to be installed. If the web service feature is used, make sure to adjust the IIS application pool settings.
- New options:
- SIGPROFILE to support signature profiles
- SKIPFILES = SIZE<12x34 to suppress the conversion of small images in mails
- PDF.Info to allow setting document attributes
- LockFiles to freeze MS Word fields that would otherwise be automatically updated
- PDFA.OCRMODE to control the behavior of OCR processing
- FlattenSignatures to preserve the visual appearance of signature fields when the underlying digital signature is removed
- DOCM added to MS Word file extensions
- ZIPX added to ZIP file extensions with 7z provider
- “Needs rendering” functionality using Adobe Reader:
  - to support the conversion of XFA to PDF/A when Adobe Reader is installed
  - to reject XFA forms or other PDF documents marked as “needing rendering” when Adobe Reader is not installed
- Support for the EMF format (enhanced metafiles)
- Support to configure certain PDF and TIFF printer settings via a configuration file, and automatic resetting of printer user settings to system settings on start
- Support to select the email header template on a per-job basis
- Periodic cleanup of the Office “recently opened documents” folder
### 4.5 3-Heights™ Font to PDF Utility
• No functional changes.
### 4.6 3-Heights™ Image Compare Utility
• No functional changes.
### 4.7 3-Heights™ Image to PDF Converter
• New switch ‘ocb’: Convert images to bitonal before OCR recognition.
• Extensions to interface 'IPDFCodec':
  • New property 'ErrorCode': Returns the error code of the last operation.
  • New property 'ErrorMessage': Returns the error message of the last operation.
  • New property 'Decode': Indicates whether compressed samples need to be decoded.
• Extensions to interface 'IPDFImg2Pdf':
  • New property 'OCRBitonalRecognition': Indicates whether the images should be converted to bi-tonal before passing to the OCR engine.
  • New property 'ErrorMessage': Returns the error message of the last operation.
### 4.8 3-Heights™ JPM to PDF Converter
- No functional changes.
### 4.9 3-Heights™ OCR Enterprise Add-on
- ABBYY 10: Improved license selection if more than one license is available.
- Improved error reporting.
### 4.10 3-Heights™ PDF Analysis & Repair
- No functional changes.
### 4.11 3-Heights™ PDF Annotation API
- No functional changes.
### 4.12 3-Heights™ PDF Compare Utility
- New switch ‘ia’: Ignore volatile data from annotation (CreationDate).
- New switch ‘c’: Compare content streams syntactically (tokens).
- More object types are now supported.
### 4.13 3-Heights™ PDF Creator Library
- No functional changes.
### 4.14 3-Heights™ PDF Extract
- Improved support for collections (aka PDF Portfolios) with new property ‘IsCollection’ and modified property ‘PageCount’.
- Improved treatment of white space for text extraction.
4.15 3-Heights™ PDF Merge Split
- New property ‘FlattenSigAppearances’: Flatten the appearance of all signed signature fields.
- Introduced automatic detection of compliance of output file based on compliances of input files.
- Greatly improved linearization performance for large files.
- Detection of XFA forms that contain no rendered PDF content.
4.16 3-Heights™ PDF Optimization
- New switch ‘cms’: Set the color management engine.
4.17 3-Heights™ PDF Page Split Tool
- No functional changes.
4.18 3-Heights™ PDF Printer
- New switch 'sl': Set the list of available paper sizes.
- New method 'SetPaperList': Set the list of available paper sizes.
- The exception handling in the print service has been improved.
- Improved pre-rendering of images to work around broken printer drivers.
- Improved spool file size when printing images.
- Applying watermarks is now supported for printing both PDF and image files.
4.19 3-Heights™ PDF Producer
- PDF Producer:
  - The printer settings dialog now supports AES encryption.
  - The standard sRGB output intent profile has been replaced by a calibrated color space, except where stamps are used. This simplifies the merging of such files with files having CMYK output intents.
- TIFF Producer:
  - No functional changes.
- Application Runner:
  - The 'SaveAs' dialog can be configured to open in a specific directory.
  - The cancel button now has an image.
- Installer:
  - Product registration is now for all users.
4.20 3-Heights™ PDF Security
- New switch 'cps': Set provider-specific property string.
- New switch 'cpf': Set provider-specific property from file.
- New switch 'fs': Force signature to allow DocMDP and timestamp signatures on PDF/A-1 documents.
- New property 'ForceSignature': Force signature to allow DocMDP and timestamp signatures on PDF/A-1 documents.
- Support Proxy for communication to all supported signature services. Proxy is supported for SSL connections as well.
- Greatly improved stamping functionality.
- Improved signature validation.
- Validation of signatures of Subtype 'adbe.pkcs7.sha1'.
- Ability to extract more signature properties, such as the signing time as specified by PAdES.
- Ability to validate signatures according either to PAdES or to Adobe Acrobat compatibility (PKCS#7 CMS).
- Improved error messages.
4.21 3-Heights™ PDF Studio Utility
- GUI: The tree view has been enhanced with tool tips.
4.22 3-Heights™ PDF Thumbnail Utility
- No functional changes.
4.23 3-Heights™ PDF to EMF Converter
- No functional changes.
4.24 3-Heights™ PDF to Image Converter
- New switch 't': Specifies the threshold for the conversion from gray to bi-tonal.
- New property 'BilevelThreshold': Specifies the threshold for the conversion from gray to bi-tonal.
- The dither methods have been extended by the 'Atkinson' method.
- Improved image quality and accuracy for very small images and hairlines.
- Support for OpenType fonts in Fast Mode.
- Detection of PDF Portfolios with no initial PDF document.
- Improved rendering of degenerate paths.
4.25 3-Heights™ PDF to Image Converter R2
- New switch 'pi': Print page information such as the width and height in pixels, the resolution in dpi, and the width and height in user units.
- New switch 'so': Specify the page offset in user units. Together with the 's' switch the user can specify the source rectangle which has to be rendered as part of the page.
- New properties 'PageOffs', 'PageXOffs', 'PageYOffs': These properties have the same function as the 'so' switch (see above).
4.26 3-Heights™ PDF to PDF/A Converter
- New switch 'cps': Set provider-specific property string.
- New switch 'cpf': Set provider-specific property from file.
- Enhanced switch 'ocm': The OCR mode has been extended by the function 'OCR if input contains no text'.
- New switches 'abg', 'af1', 'af2', 'at1' and 'at2' allow creation of a customized signature appearance.
- New property 'SignatureLocation': The physical location of the signing.
- New method 'SetSessionPropertyString': Set provider-specific session property.
- New method 'SetSessionPropertyBytes': Set provider-specific session property.
- New method 'TestSession': Check whether the session is still alive.
- New method 'AddZUGFeRDXml' and 'AddZUGFeRDXmlMem': Add a ZUGFeRD XML invoice file.
- Improved conversion of corrupt documents:
  - Repair of invalid page references.
  - Improved handling of corrupt interactive form fields.
  - Improved conversion of corrupt actions.
  - Improved repair of images with invalid sample streams.
  - Improved repair of embedded font programs that are corrupt.
  - Improved repair of invalid color spaces with ICC profiles.
  - Improved conversion of corrupt OCR fonts.
- Improved logging, e.g. the exact message of a signature creation error.
### 4.27 3-Heights™ PDF Uncompress Utility
• No functional changes.
### 4.28 3-Heights™ PDF Validator
• New switch `-p`: Set custom profile to validate compliance with corporate directives.
• New method 'SetProfile': Set custom profile to validate compliance with corporate directives.
• New property 'ErrorMessage': Returns an error message text of the last operation.
• Improved performance of Type 3 font validation.
### 4.29 3-Heights™ PDF Viewer
• Viewer Shell & Viewer OCX:
  • GUI: The user can now display, open and save embedded files, e.g. of PDF Portfolios.
  • New method 'CreateAnnotation': Create and display a new annotation.
  • New method 'GoBack': Open the parent file of the currently displayed embedded file.
  • New method 'SetCursor': Set a custom cursor for a cursor mode.
  • New event 'OnGotoE': Indicates when an embedded file is opened.
  • New ability to switch between difference view and individual file view when comparing files.
• Java Document Viewer:
  • New methods 'getEmbeddedFilesCount' and 'getEmbeddedFileInfo': Return information about embedded files in the open document.
  • New methods 'storeEmbeddedFile' and 'getEmbeddedFile': Save an embedded file to disk or return it as a byte array.
  • Swing sample application: The user can now display and open embedded files.
• PDF Viewer WPF:
  • Enhanced method 'Close': The method now waits until the document is closed.
  • Enhanced support for touch input: multi-touch zoom and scrolling inertia.
4.30 3-Heights™ Text to PDF Converter
• No functional changes.
4.31 3-Heights™ TIFF Tool Suite
• All tools now support large TIFF files up to 4 GB.
• Tiffimp:
  • New switch ‘u’: Use JPEG compression according to Technical Note #2.
4.32 3-Heights™ XMP Generator
• No functional changes.
4.33 Classic Command Line Suite
• pdform:
  • New listing of the values list of radio button fields.
### 4.34 Classic PDF Prep Tool Suite
- New method 'SetResolveDestNames' to disable the resolution of named destinations (used by PDF Batch Stamp Tool to increase performance).
- New method 'IDocGetFormBoxW' to support the passing of Unicode string values.
- Enhanced interoperability of the method 'CreateImageEx' with the 3-Heights™ PDF Codec. This allows supporting a wider variety of image types. Please refer to the appropriate code samples for details.
- Awareness for rich text forms: convert to normal text forms when setting a new value; preserve appearance when flattening without changing the value.
- Support to retrieve the radio button field values list.
### 5 About PDF Tools AG
PDF Tools AG (www.pdf-tools.com) is a world leader in PDF (Portable Document Format) software, delivering reliable PDF products to international customers in all market segments.
PDF Tools AG provides server-based software products designed specifically for developers, integrators, consultants, customizing specialists and IT-departments. Thousands of companies worldwide use our products directly and hundreds of thousands of users benefit from the technology indirectly via a global network of OEM partners. The tools can be easily embedded into application programs and are available for a multitude of operating system platforms.
We code the front page of newspapers for our sample.
1. **Newspapers**
**First Tier**
Code 2 out of these 4 every weekday; 1 every Sunday (January 1, 2011-present)
Code 2 out of these 4 every weekday and Sunday (January 1, 2010-December 31, 2010)
January 1, 2010 – present
The New York Times
Los Angeles Times
USA Today
Wall Street Journal
Code 2 out of these 4 every weekday and Sunday (January 1, 2007-December 31, 2009)
January 1, 2007 – December 31, 2009
The Washington Post
Los Angeles Times
USA Today
The Wall Street Journal
Code every day except Saturday (January 1, 2007 – December 31, 2009)
The New York Times
**Second Tier**
Code 2 out of 4 every weekday; 1 every Sunday (January 1, 2011-present)
Code 2 out of these 4 every weekday and Sunday (January 1, 2007-December 31, 2010)
January 1, 2010 – present
The Washington Post (was Tier 1 from January 1, 2007 – December 31, 2009)
January 1, 2012 – present
The Denver Post
Houston Chronicle
Orlando Sentinel
January 1, 2011 – December 31, 2011
Toledo Blade
The Arizona Republic
Atlanta Journal Constitution
January 1, 2010 – December 31, 2010
Columbus Dispatch
Tampa Tribune
Seattle Times
January 1, 2009-December 31, 2009
Kansas City Star
Pittsburgh Post-Gazette
San Antonio Express-News
San Jose Mercury News
March 31, 2008- December 31, 2008
The Philadelphia Inquirer
Chicago Tribune
Arkansas Democrat-Gazette
San Francisco Chronicle
The Boston Globe
Star Tribune
Austin American-Statesman
Albuquerque Journal
Third Tier
Rotate between 1 out of 3 every weekday and Sunday (January 1, 2011 – present)
January 1, 2012– present
Traverse City Record-Eagle (MI)
The Daily Herald (WA)
The Eagle-Tribune (MA)
January 1, 2011 – December 31, 2011
The Hour
St. Augustine Record (January 1-October 9, 2011)
Spokesman-Review (October 10-December 31, 2011)
Joplin Globe
Code 1 or 2 out of 3 every weekday and Sunday (January 1, 2010-December 31, 2010)
January 1, 2010 – December 31, 2010
The Day
Rome News Tribune
Ventura News
Code 2 out of 4 every weekday and Sunday (January 1, 2007 – December 31, 2009)
January 1, 2009-December 31, 2009
Herald News
Anniston Star
Spokesman-Review
Meadville Tribune (January 5, 2009-December 31, 2009)
East Valley Tribune (January 1 – 4, 2009)
March 31, 2008- December 31, 2008
New Hampshire Union Leader
The Gazette, Colorado Springs
MetroWest Daily News (March 10 – December 31, 2008)
The Modesto Bee
The Bakersfield Californian
The Sun Chronicle
Star Beacon (January 1, 2007 – February 29, 2008)
Chattanooga Times Free Press
2. Online
For websites we code the top 5 stories on the home page at the time of capture.
On April 28, 2008, we began rotating the time at which we captured websites. We now capture websites between 9-10 am EST and 4-5 pm EST. Prior to that we only captured websites between 9 and 10 am Eastern time.
On January 1, 2009, we added 7 websites into our universe.
On that day, we started rotating our sample so that we code 6 out of the total 12 on any given day, either in the morning or in the afternoon. Previously, we had coded all five original websites every day. For more precise data, refer to our methodology.
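Purely as an illustration of what such a deterministic rotation could look like in code (this is not PEJ's actual scheduling procedure, and the split of the twelve sites into two halves below is invented), a simple round-robin might be written as:

```python
# Illustrative round-robin rotation: pick 6 of the 12 websites per weekday
# and alternate the capture window. Not PEJ's actual procedure; the split
# into two fixed halves is an invented example.
from itertools import cycle

websites = ["CNN.com", "Yahoo News", "MSNBC.com", "Google News",
            "NYTimes.com", "WashingtonPost.com", "FoxNews.com",
            "USAToday.com", "ABCNews.com", "BBC News", "Reuters.com",
            "AOL News"]

halves = cycle([websites[:6], websites[6:]])     # 6 sites per coding day
windows = cycle(["9-10 am EST", "4-5 pm EST"])   # alternating capture window

for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
    print(day, next(windows), next(halves))
```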
Websites are coded Monday – Friday.
CNN.com
Yahoo News
MSNBC.com
Google News
LATimes.com (May 24, 2011 – present)
NYTimes.com (January 1, 2009 – present)
WashingtonPost.com (January 1, 2009 – present)
FoxNews.com (January 1, 2009 – present)
USAToday.com (January 1, 2009 – present)
ABCNews.com (January 1, 2009 – present)
WSJonline.com (January 1, 2010 – present)
HuffingtonPost.com (January 1, 2010 – present)
BBC News (international version) (January 1, 2009 – December 31, 2009)
Reuters.com (January 1, 2009 – December 31, 2009)
AOL News (January 1, 2007 – April 28, 2011)
3. Network TV
For Network, Cable and Radio programming, we code the first 30 minutes of a show.
The exceptions here are Newshour and NPR’s Morning Edition:
For Newshour, we alternate between coding the first and the second 30 minutes.
For Morning Edition, we alternate between the first half hour of the 5 am programming and first half hour of the 6 am programming.
Network TV is coded Monday – Friday.
January 1, 2007 – Present
Morning shows
Code 1 or 2 out of 3 every weekday (May 30, 2011 – present)
Code 2 out of 3 every weekday (January 1, 2010 – May 27, 2011)
ABC: Good Morning America
CBS: The Early Show
NBC: Today Show
Evening news
Code 2 out of 3 every weekday (January 1, 2010 – present)
ABC: World News Tonight
CBS: CBS Evening News
NBC: NBC Nightly News
Rotate to code 1st half hour one day, 2nd half hour next day, do not code on third day (January 1, 2010 – present)
PBS – Newshour
We started rotating between the first and second half hours of Newshour on March 31, 2008. Before that date, we only coded the first half hour of this show.
We started rotating the days in which we code network news beginning on January 1, 2010. Prior to this, we had coded the first half hour of each newscast every weekday. (With the exception of Newshour, in which we rotated between the first and second half hours each weekday).
4. **Cable TV**
Cable TV is coded Monday – Friday.
**Daytime**
*March 19, 2007 – Present*
*Code 2 out of these 3 every weekday*
2 – 2:30pm Cable programming for:
- CNN
- Fox News Channel
- MSNBC
*January 1, 2007 – March 16, 2007*
*Code 2 out of these 3 every weekday*
1 – 1:30 pm Cable programming for:
- CNN
- Fox News Channel
- MSNBC
**Early Evening and Prime Time**
When we changed the shows we coded, it was only to reflect programming changes made by the channel for the given time slot.
**CNN**
*Code 1 or 2 out of these 4 every weekday October 3, 2011-present*
*Code 1 or 2 out of these 3 every weekday August 8, 2011 – October 2, 2011*
*Code 1 or 2 out of these 4 every weekday May 30, 2011 – August 5, 2011*
*Code 2 out of these 4 every weekday January 1, 2009 – May 27, 2011*
*Code 3 out of these 4 every weekday January 1, 2007– December 31, 2008*
**5 – 6pm:** The Situation Room *(From October 3, 2011)*
**6 – 7pm:** John King, USA *(From October 3, 2011)*
*November 5, 2007-October 2, 2011:* The Situation Room
*Before November 5, 2007:* Lou Dobbs Tonight
**7 – 8pm:** Erin Burnett OutFront *(From October 3, 2011)*
*March 22, 2010 – October 2, 2011:* John King, USA
*January 11 – March 19, 2010:* CNN Unspecified Show
*November 12, 2009 – January 10, 2010:* CNN Tonight
*November 5, 2007 – November 11, 2009:* Lou Dobbs Tonight
*Before November 5, 2007:* The Situation Room
**8 – 9pm:** Anderson Cooper 360* *(From August 8, 2011)*
Before August 8, 2011, Anderson Cooper 360 airing at its 10pm position was included in the rotation.
10 – 12pm: Anderson Cooper 360 (January 1, 2007-August 5, 2011)
Fox News Channel
Code 2 out of these 4 every weekday January 1, 2009 – present
Code 3 out of these 4 every weekday January 1, 2007– December 31, 2008
6 – 7pm: Special Report with Bret Baier (From January 12, 2009)
7 – 8pm: Fox Report with Shepard Smith
8 – 9pm: The O'Reilly Factor
9 – 10pm: Hannity (From January 12, 2009)
MSNBC
Code 1 or 2 out of these 4 every weekday May 30, 2011 – present
Code 2 out of these 4 every weekday January 1, 2009 – May 27, 2011
6 – 7pm: PoliticsNation (From October 24, 2011)
7 – 8pm: Hardball
8 – 9pm: The Ed Show (From October 24, 2011)
January 24, 2011-October 23, 2011: The Last Word w/ Lawrence O’Donnell
January 1, 2007-January 21, 2011: Countdown w/ Keith Olbermann
9 – 10pm: Rachel Maddow (From September 8, 2008)
Between August 21 and September 8, 2008, this time slot had Convention coverage
July 2, 2007 – August 21, 2008: Dan Abrams
Before October 24, 2011, 10-11pm slot was included in the rotation.
10 – 11pm:
April 6, 2009-October 23, 2011: The Ed Show
November 5, 2008 – April 3, 2009: 1600 Pennsylvania Ave
March 17, 2008 – November 4, 2008: Race for the White House
January 1, 2007 – March 14, 2008: Tucker
5. Radio
This sector is coded Monday – Friday.
**News Radio**
*January 1, 2007 – Present*
ABC Radio headlines at 9 am and at 5 pm
CBS Radio headlines at 9 am and at 5 pm
*Starting January 1, 2010, we started rotating ABC and CBS headlines so that we code one set of 9 am and one set of 5 pm headlines. If we code ABC 9 am one day, we will also code CBS 5 pm that evening, and vice versa.*
**NPR**
Morning Edition** (January 1, 2007 – present)
All Things Considered (January 1, 2010 – present)
* On January 1, 2010, we started rotating between Morning Edition and All Things Considered. At present we code one of the following on any given day:
5 – 5:30 am Morning Edition
6 – 6:30 am Morning Edition
4 – 4:30 pm All Things Considered
5 – 5:30 pm All Things Considered
For a sample rotation schedule see the Methodology.
** Prior to January 14, 2008, we coded the first half hour of Morning Edition only. On January 14, 2008 we started rotating the times we coded. We either code 5 – 5:30 am or 6 – 6:30 am.
**Talk Radio**
**Conservative Talk Shows**
Code one out of 2 every weekday (January 1, 2010 – present)
Rush Limbaugh
Sean Hannity
**Liberal Talk Shows**
Code every other weekday (January 1, 2010 – present)
Ed Schultz
Before January 1, 2010
Conservative Talk Shows
We started rotating Rush Limbaugh’s program to code it every other day on July 1, 2008. Until that date, we coded this show every weekday.
Rush Limbaugh
*Code one of these 2 every weekday*
Sean Hannity
Glenn Beck (October 5, 2009 – December 31, 2009)
January 1, 2007 – October 4, 2009
Michael Savage
Liberal Talk Shows
*Code one of these 2 every weekday*
Ed Schultz
Thom Hartmann (October 5, 2009 – December 31, 2009)
January 1, 2007 – February 27, 2009 and May 11, 2009 – October 4, 2009
Randi Rhodes
March 2, 2009 – May 8, 2009
Stephanie Miller*
* This show was only coded while Randi Rhodes was off the air.
Total outlets coded:
2012: 25-28 outlets each weekday and 3 newspapers are included only on Sundays.
2010: 28 or 29 outlets each weekday, and 5 or 6 newspapers are included only on Sundays.
2009: 33 or 34 outlets each weekday, and seven newspapers are included only on Sundays.
2008: 34 or 35 outlets each weekday, and seven newspapers were included only on Sundays.
2007: 35 outlets each weekday, and seven newspapers were included only on Sundays.
Note:
All dates are approximate due to the fact that we rotate most outlets during the course of the week. Outlets are generally rotated into our sample on a Monday. But due to rotation, the first day an outlet is coded may not be the day it was introduced into the sample.
Changes to the NCI methodology:
2012
1. At the start of 2012, PEJ updated the weights given to each media sector for 2012 NCI: newspapers (0.19); online (0.30); network TV (0.15); cable TV (0.23); radio (0.12).
2. At the start of 2012, we changed our sample of Tier 2 and Tier 3 newspapers (see above).
2011
1. On May 30, 2011, we made changes to the rotation for CNN and MSNBC samples and the network morning TV sample (see above).
2. At the start of 2011, PEJ updated the weights given to each media sector for 2011 NCI: newspapers (0.20); online (0.30); network TV (0.14); cable TV (0.24); radio (0.12).
3. On January 2, 2011, we changed the way we rotate newspapers. We code ONE paper from each Tier on Sunday. During the rest of the week, we code 2 out of 4 papers from Tiers 1 and 2, and 1 out of 3 from Tier 3.
2010
1. At the start of 2010, PEJ updated the weights given to each media sector for 2010 NCI: newspapers (0.22); online (0.26); network TV (0.15); cable TV (0.24); radio (0.13).
2. On January 1, 2010, we introduced a few changes to our sample:
In the newspaper sector, we switched out our sample for all tiers. (See above).
Note that the Washington Post is no longer a Tier 1 newspaper due to low circulation numbers.
In the online sector, we changed our sample of websites we code. (See above).
3. On January 1, 2010, we started rotating our coding for the New York Times so that we no longer code it every day.
4. On January 1, 2010, we started rotating our Network TV sample (For details about how to rotate now, see above)
5. On January 1, 2010, we reduced the number of radio talk shows we code. Starting on that date, we started coding 2 conservative talkers and 1 liberal talker.
6. On January 1, 2010, we changed the way we rotate radio talk shows: now we code one conservative and one liberal on one day and only a conservative the next.
7. At the start of 2010, we started rotating our sample of radio headlines. Now we code one set of morning headlines and one set of evening headlines every weekday.
8. On January 1, 2010, we introduced All Things Considered into our sample and started rotating its 4 – 4:30pm and its 5 – 5:30pm coverage. For details on how we rotate NPR shows, see the Methodology.
2009
1. At the start of 2009, PEJ updated the weights given to each media sector for 2009 NCI: newspapers (0.25), online (0.23), network TV (0.16), cable TV (0.25), and radio (0.11).
2. On January 1, 2009, we introduced a few changes to our sample:
In the newspaper sector, we switched out our Tier 2 and Tier 3 sample. (See above).
In the online sector, we expanded the list of websites we code. (See above).
3. On January 1, 2009, we started rotating our website sample. On that date, we also changed the rotation schedule for prime time slots on CNN and the Fox News Channel. (For details about website rotation, refer to our methodology. For cable rotation, see above).
4. On October 5, 2009, we changed our talk radio sample based on the Talkers Magazine ratings. (See above). For a link to Talkers’ ratings, refer to the Methodology.
5. Beginning on January 1, 2009, PEJ started including audio and video stories for the online sector.
2008
1. On June 16, 2008, PEJ updated the weights given to each media sector for 2008 NCI: newspapers (0.26), online (0.20), network TV (0.18), cable TV (0.24), and radio (0.12). As PEJ launched the Index, the 2007 NCI weights are: newspaper (0.28), online (0.16), network (0.18), cable (0.26), and radio (0.12).
2. On March 31, 2008, we changed our sample of Tier 2 and Tier 3 newspapers. (See above).
3. On July 1, 2008, PEJ started rotating Rush Limbaugh’s radio program to code him every other day. Prior to that, we had been coding Rush Limbaugh every weekday.
4. On April 28, 2008, we began rotating the time at which we capture websites. We capture websites between 9-10am Eastern time and 4-5pm Eastern time. Prior to that we only captured websites between 9 and 10am Eastern time.
5. On March 31, 2008, PEJ began rotating between the first and second half hour of the PBS Newshour. Prior to that we coded only the first half hour of Newshour.
6. Beginning on January 14, 2008, coding for NPR’s Morning Edition alternated between the first 30 minutes of the first hour and the first 30 minutes of the second hour. Prior to that, we coded the first half hour of Morning Edition only.
7. Beginning the week of Jan 14, 2008, the cycle of NCI/CCI changed from Sunday-Friday to Monday-Sunday.
2007
1. The Lead Newsmaker variable was added to the coding scheme on July 1, 2007.
2. On March 19, 2007 we started coding day time cable shows from 2-2:30 p.m. EST. Prior to this we coded them from 1-1:30 p.m. EST (from Jan 1, 2007-March 16, 2007).
Using the Caché ActiveX Gateway
Version 2018.1
2019-12-20
Copyright © 2019 InterSystems Corporation
All rights reserved.
InterSystems, InterSystems Caché, InterSystems Ensemble, InterSystems HealthShare, HealthShare, InterSystems TrakCare, TrakCare, InterSystems DeepSee, and DeepSee are registered trademarks of InterSystems Corporation.
InterSystems IRIS Data Platform, InterSystems IRIS, InterSystems iKnow, Zen, and Caché Server Pages are trademarks of InterSystems Corporation.
All other brand or product names used herein are trademarks or registered trademarks of their respective companies or organizations.
This document contains trade secret and confidential information which is the property of InterSystems Corporation, One Memorial Drive, Cambridge, MA 02142, or its affiliates, and is furnished for the sole purpose of the operation and maintenance of the products of InterSystems Corporation. No part of this publication is to be used for any other purpose, and this publication is not to be reproduced, copied, disclosed, transmitted, stored in a retrieval system or translated into any human or computer language, in any form, by any means, in whole or in part, without the express prior written consent of InterSystems Corporation.
The copying, use and disposition of this document and the software programs described herein is prohibited except to the limited extent set forth in the standard software license agreement(s) of InterSystems Corporation covering such programs and related documentation. InterSystems Corporation makes no representations and warranties concerning such software programs other than those set forth in such standard software license agreement(s). In addition, the liability of InterSystems Corporation for any losses or damages relating to or arising out of the use of such software programs is limited in the manner set forth in such standard software license agreement(s).
THE FOREGOING IS A GENERAL SUMMARY OF THE RESTRICTIONS AND LIMITATIONS IMPOSED BY INTERSYSTEMS CORPORATION ON THE USE OF, AND LIABILITY ARISING FROM, ITS COMPUTER SOFTWARE. FOR COMPLETE INFORMATION REFERENCE SHOULD BE MADE TO THE STANDARD SOFTWARE LICENSE AGREEMENT(S) OF INTERSYSTEMS CORPORATION, COPIES OF WHICH WILL BE MADE AVAILABLE UPON REQUEST.
InterSystems Corporation disclaims responsibility for errors which may appear in this document, and it reserves the right, in its sole discretion and without notice, to make substitutions and modifications in the products and practices described in this document.
For Support questions about any InterSystems products, contact:
InterSystems Worldwide Response Center (WRC)
Tel: +1-617-621-0700
Tel: +44 (0) 844 854 2917
Email: [email protected]
Table of Contents
About This Book .................................................................................................................................... 1
1 Introduction ........................................................................................................................................ 3
1.1 Architecture ................................................................................................................................ 3
1.2 Overview of ActiveX / COM ...................................................................................................... 4
1.2.1 What Is a COM Object? ................................................................................................... 4
1.2.2 COM Interfaces ................................................................................................................ 4
1.2.3 The IDispatch Interface .................................................................................................... 4
1.2.4 Type Libraries .................................................................................................................. 5
2 Using Caché Activate .......................................................................................................................... 7
2.1 The Caché Activate Wizard ........................................................................................................ 7
2.2 Using the Generated Wrapper Classes ....................................................................................... 9
2.2.1 Example: Accessing a Property ........................................................................................ 9
2.2.2 Example: Enumerating COM Interfaces .......................................................................... 9
2.2.3 Special Considerations for Properties ............................................................................ 10
2.3 Exception Handling .................................................................................................................. 11
2.3.1 Example: Exception Handling ....................................................................................... 11
2.4 %Activate.IDispatch and %Activate.GenericObject ................................................................ 11
2.4.1 Example: Using CreateObject ........................................................................................ 11
2.5 Monikers ................................................................................................................................... 12
2.5.1 Example: Using GetObject ............................................................................................ 12
2.6 The Become Method ................................................................................................................ 12
2.7 Events ....................................................................................................................................... 12
2.7.1 Example: Using COM Events ........................................................................................ 12
About This Book
This book is a guide to using Caché Activate to manipulate an external ActiveX object as if it were a native Caché object.
This book contains the following sections:
- Introduction
- Using Caché Activate
There is also a detailed Table of Contents.
For general information, see *Using InterSystems Documentation.*
Caché Activate gives Caché applications an easy way to interoperate with ActiveX (also known as COM) components from within a Caché server. By means of wrapper classes, ActiveX components are made available as instances of Caché object classes and can be used in the same manner as any other class. Caché Activate provides the ability to instantiate an external COM object and manipulate it as if it were a native Caché object.
**Note:** The terms “ActiveX” and “COM” are used interchangeably within this document.
Caché Activate is available only on platforms that support ActiveX (both 32-bit and 64-bit versions of Microsoft Windows). Caché Activate works as follows:
1. Using the Caché Activate Wizard, you can create one or more wrapper classes. These are Caché classes that provide methods that correspond to the interface of an ActiveX component.
2. Within a Caché application, you can create an instance of an ActiveX wrapper class. Caché Activate transparently creates an instance of the appropriate ActiveX component within the same process. When you invoke the methods of the wrapper class, it automatically dispatches them to a method of the appropriate ActiveX interface.
You must exercise caution when using ActiveX components within Caché. Caché is designed to provide a safe environment for running application code. Every Caché server process runs an instance of the Caché virtual machine, is isolated from other service processes, and can handle application errors quite safely. ActiveX, unfortunately, is not a safe technology. Using ActiveX incorrectly or using poorly implemented ActiveX components can lead to memory leaks or unexpected application crashes. If you are using ActiveX components within a critical application, you should take extra care to ensure that you are using the component's interfaces correctly and that the components have been thoroughly tested. It is a good idea to test any components using a tool such as Visual Basic before using them within your application.
### 1.1 Architecture
Caché Activate consists of the following components:
- **The Caché Activate Wizard:** This provides a simple graphical interface that lets you choose from the ActiveX components on your Caché server and automatically creates Caché wrapper classes for the components you select. The Caché Activate Wizard is accessible from the **Add-Ins** item on the **Tools** menu of the Atelier development environment. The Activate Wizard is available only on Windows systems.
- **The Caché Activate Class Hierarchy:** These are helper classes used by the generated wrapper classes in order to communicate with ActiveX.
The Caché ActiveX Gateway: This is a shared library (DLL) loaded by and used by a Caché process to perform operations (loading, invoking methods, and releasing) on ActiveX components.
1.2 Overview of ActiveX / COM
The following is a simple overview of ActiveX / COM component architecture as it relates to Caché. If you intend to make use of ActiveX within your application, you should consult one of the many published works on the subject.
1.2.1 What Is a COM Object?
A COM object is a piece of code that conforms to the COM specification and provides one or more services that may be consumed by client programs. A certain class of COM objects, those which support the notion of Automation, are specially designed to be easily accessible from high-level programming languages such as Visual Basic, Delphi and now Caché. Such Automation objects may be implemented as a dynamic link library and provide a simple function, such as encryption of a text string, or they may be full-blown executable applications, such as Microsoft Excel or Microsoft Word, which provide dozens of different services.
1.2.2 COM Interfaces
COM objects expose their functionality as interfaces. An interface is simply a collection of methods and properties that encapsulate some particular functionality. For example, a word processing object may provide a spell checking interface as well as a printing interface. Each implementation of a COM object is given a unique identifier in the form of a class id, and each interface which it exposes also has a unique identifier referred to as an interface id. Once the class id of a particular object and the interface id of the required interface are known, it is possible for a client application to instantiate the COM object and avail itself of the services provided by the requested interface. By convention, when the name of an interface is written it is preceded by a capital “I”, so the SpellCheck interface becomes ISpellCheck.
### 1.2.3 The IDispatch Interface
Different programming languages have different internal data types which are incompatible at a binary level. For example, a Caché local variable has a completely different implementation from that of a Visual Basic string or a C++ string. This makes it difficult to call an object written in one language from another, because conversion has to be done from, say, a C++ data type to a Caché variable and vice versa. To solve this problem and enable different programming languages to communicate, the VARIANT data type and the IDispatch interface were developed.
At its simplest, IDispatch provides the ability to call a method or access a property in an external COM object by specifying the name of the method or parameter and passing the appropriate arguments. Arguments are represented by a VARIANT type, which is a standardized data type that the operating system supports. This standardized type is “understood” by all programming languages that support the use of COM automation.
By creating a COM object and requesting its IDispatch interface, a client program or language, such as Caché, can easily access the functionality exposed by the object.
Although IDispatch provides a generic means to access a COM Automation object, it is really intended as a technique that a programming language uses internally to provide COM object services via the particular constructs of that language. In other words, the high-level language should abstract the details of calling IDispatch and provide programming language constructs to ease use of external objects. Ideally, such COM objects should act as if they are native objects within a programming environment. In Caché Activate, the key to this is to exploit the information contained in a COM object's type library.
### 1.2.4 Type Libraries
Most, if not all, COM Automation objects expose their metadata, i.e., a description of the types, methods and properties, in the form of a type library. The type library may be bound into a .DLL (dynamic link library), within an executable file as a binary resource, or it may exist in a separate file with an extension such as .tlb. Within the type library, each object is identified by a class id and is known as a CoClass. A CoClass may expose at most one IDispatch-derived interface known as the default interface. (Another interface known as the source interface may or may not be present. However it is not directly callable, and can safely be ignored for now). Some objects do not implement an IDispatch-derived interface at all and consequently are not callable via the IDispatch based mechanisms.
Caché Activate exploits the metadata contained in the type library by reading and decoding the information and creating Caché classes that expose the methods and properties defined therein. A type library may contain one or more CoClass objects and potentially many IDispatch-derived interface definitions. There may be many interfaces because, although a CoClass may not expose more than a single IDispatch-derived interface as its default interface, it is free to define methods and properties that either return or are typed as interfaces. In fact, this situation is common where a single CoClass (object) defines a rich object model. Consider a word processor, for instance. It may provide a default interface of IApplication, which has methods such as AboutBox, Exit, etc. It also may provide a collection of documents (IDocuments) as a property called Documents.
**Note:** Many COM interfaces are quite complex; they may contain hundreds of methods and may use many additional COM objects as parameters. If your application needs to use only a small subset of a specific interface, you should consider building a wrapper COM component (for example using Visual Basic) to expose only the interfaces you actually need and to pass any requests to these interfaces to the original COM component.
This chapter describes how to create a Caché wrapper class for an ActiveX component and how to use this wrapper class within an application.
## 2.1 The Caché Activate Wizard
The Caché Activate Wizard automatically creates one or more Caché wrapper classes for a given set of ActiveX interfaces.
To use the Wizard:
1. Start Atelier.
2. Select a project for your application.
3. Select Tools > Add-Ins... from the main menu and press the Next button.
4. Expand the item Standard Add-Ins and select Activate Wizard.
5. Press the Finish button to start the Activate Wizard:

Enter the package name you wish to use for the generated classes, and press the Next button.
6. The Wizard displays a list of available COM interfaces (these are interfaces available on the Caché server, not the machine on which Atelier is running):
![Activate Wizard Image]
Choose one or more interfaces and press the **Next** button.
7. The Wizard automatically generates wrapper classes within the selected package and compiles them:
![Activate Wizard Image]
## 2.2 Using the Generated Wrapper Classes
The classes that are generated in Caché are proxy classes for the COM objects. Once the classes have been generated and compiled, you can then use them in Caché applications.
For example, using the Activate Wizard, you can generate wrapper classes for the Microsoft SysInfo Control, which provides some information regarding system resources.
The Caché Activate Wizard creates the following classes for the SysInfo COM object:
- `Activate.SysInfoLib.ISysInfo` — An abstract interface class that defines the methods and properties which the `ISysInfo` interface provides. It cannot be instantiated. Among others it has a calculated property called `BatteryLifePercent` along with corresponding get and set methods for that property.
- `Activate.SysInfoLib.SysInfo` — This is a concrete class that inherits from the `ISysInfo` class. It contains the code that finds and instantiates the external COM object and maintains a “connection” to that object. You use this concrete class to manipulate the external object. When the object is closed, the external COM object is closed (released) also.
### 2.2.1 Example: Accessing a Property
Here is an example that uses the SysInfo wrapper object to obtain the remaining battery life percentage for a laptop computer:
```caché
Set obj = ##Class(Activate.SysInfoLib.SysInfo).%New()
Write obj.BatteryLifePercent,!
Set obj = ""
```
The object is created in the same manner as any other within Caché. The `BatteryLifePercent` property is written out and finally the object is closed.
### 2.2.2 Example: Enumerating COM Interfaces
The Caché Activate Wizard enumerates the type libraries on a Caché server by using a COM object called TL.dll (or TL64.dll on 64-bit systems); the file is placed in the `<CacheRoot>/Bin` directory and is automatically registered during Caché installation. The Caché classes that are generated from this object are preloaded into the `%Activate.TLLib` package.
These classes consist of:
- `%Activate.TLLib.IUtils` — an abstract interface class that has a single property, `Libraries`, of type `ILibraries`. Use this property to retrieve the `ILibraries` interface for enumerating the type libraries on the system.
- `%Activate.TLLib.ILibraries` — an abstract interface class that exposes the `Count` and `Item` properties. Use these properties to enumerate the type libraries on the system.
- `%Activate.TLLib.Utils` — a concrete subclass that implements the `IUtils` interface. Instantiate this class to access the `Libraries` property.
Here is an example ObjectScript method that enumerates the type libraries on the system by using these classes. A concrete instance of the `Utils` class is created and its `Libraries` property is retrieved into `objLibs`. Notice that the `Item` property is called via the `ItemGet` method, because Caché does not currently support calculated, indexed properties:
```objectscript
Class MyApp.ActivateTest
{

// ...

/// Demonstrate COM object access by enumerating the type libraries on the system
ClassMethod ListTypeLibs()
{
    Set objUtils = ""
    Set objLibs = ""
    // On error, jump to the tlerr label below
    Set $ZT = "tlerr"
    // Instantiate the concrete Utils class and get its Libraries collection
    Set objUtils = ##class(%Activate.TLLib.Utils).%New()
    Set objLibs = objUtils.Libraries
    For i = 1:1:objLibs.Count {
        // Item is an indexed property, so it is read via its ItemGet method
        Set tld = objLibs.ItemGet(i)
        // tld is a | delimited string
        Write !, $Piece(tld,"|"), !, $Piece(tld,"|",2), !, $Piece(tld,"|",3), !!
    }
xit ; Exit point: release the COM objects
    If objLibs'="" Set objLibs = ""
    If objUtils'="" Set objUtils = ""
    Quit
tlerr ; Exception handler
    Set $ZT = ""
    Goto xit
}

}
```
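Assuming the class above has been compiled, a minimal usage sketch is to run `Do ##class(MyApp.ActivateTest).ListTypeLibs()` from the Terminal; it writes out three fields of the |-delimited description string for each type library registered on the server.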
### 2.2.3 Special Considerations for Properties
As shown in the previous example, in COM, some properties have parameters. Furthermore, some objects have what is known as a “default property,” which means you can reference that property without specifying its name explicitly.
For example, collections (as in the previous example) always have the Count and Item properties. You will note that the Item property is (obviously) not a method but that it does take an argument. An Item property is often the default property of a collection. Consider an example with Microsoft Excel. If we have a collection of workbooks, then in Visual Basic, we can access a specific workbook by name in this manner:
```vbnet
Application.Workbooks("Sheet1")
```
Although we are accessing the Item called “Sheet1”, Item is not explicitly referenced. What the code is really doing is calling:
```vbnet
Application.Workbooks.Item("Sheet1")
```
Caché distinguishes between a method call and a property reference by the presence or absence of parentheses. This means that it interprets “person.Name” as a property and “person.RaiseSalary()” as a method. This makes default properties awkward because, unlike Visual Basic, Caché does not have the ability to define a default property nor the ability to do a property reference while passing parameters. For example, Caché cannot support the following Visual Basic syntax that has an implicit reference to a property:
```vbnet
Workbooks("Sheet1") ' Implicit reference to Item property
```
Neither can Caché support the following syntax, where Item is a property:
```vbnet
Workbooks.Item("Sheet1") ' Item is a property!
```
This does not work, because the Caché Interpreter considers Item to be a method. To work around this difference in the languages, use the following syntax:
```objectscript
Workbooks.ItemGet("Sheet1")
```
This works because ItemGet is the method that retrieves the Item property.
## 2.3 Exception Handling
Any COM object may raise an exception as the result of some operation, be it a method call or a property set/get. When an exception is raised, the exception is propagated into Caché via the $ZTRAP mechanism. The calling code will receive an error with the error code <ZACTX>, and the local variable %objlasterror will contain a complete textual description of the error. Programmers should plan for this error and take action accordingly.
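As a minimal sketch of this pattern (the wrapper class Activate.SomeLibrary.SomeObject and its Value property are hypothetical), an application might trap such an exception and report it like this:

```objectscript
ClassMethod CallComSafely()
{
    // On a COM exception (<ZACTX>), control transfers to the comerr label
    Set $ZT = "comerr"
    Set obj = ##class(Activate.SomeLibrary.SomeObject).%New()
    Write obj.Value,!
    Set obj = ""
    Quit
comerr ; Exception handler
    Set $ZT = ""
    // %objlasterror holds the full text of the COM error
    If $Data(%objlasterror) Do $system.OBJ.DisplayError(%objlasterror)
    // Release the wrapper (and with it the external COM object) if it was created
    If $Data(obj) Set obj = ""
    Quit
}
```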
### 2.3.1 Example: Exception Handling
Here is an example of using a COM object which retrieves files by FTP. The object is created and the CurrentDirectory property is queried. The COM object throws an exception because it is not valid to try to determine the current directory until the FTP connection has been made. We will try this from a Caché command line (terminal session):
```plaintext
Set obj = ##Class(Activate.RETRIEVERLib.FtpRetriever).%New()
Write obj.CurrentDirectory
```
In this case, this will throw an error:
```plaintext
<ZACTX>CurrentDirectoryGet+4^Activate.RETRIEVERLib.FtpRetriever.1
```
The error code associated with the <ZACTX> error should be in the local variable %objlasterror. We can retrieve the complete text of the error message by calling $system.OBJ.DisplayError:
```plaintext
Do $system.OBJ.DisplayError(%objlasterror)
```
Which will result in the following output:
```
ERROR #1101: Com Exception: '-2147220888 Ftp Retriever Connection must be established before attempting this operation'
```
## 2.4 %Activate.IDispatch and %Activate.GenericObject
Some COM objects do not come with a type library or you may find that the return type of a method or a property type of a COM object is just an IDispatch interface. How do you call methods and access properties for such objects?
Caché Activate provides two classes which assist with this problem, %Activate.IDispatch and %Activate.GenericObject. Many COM objects are identified by what is called a “ProgId”, a string usually consisting of a library/object name which can be used to identify an object. In Visual Basic there is a CreateObject call which takes a ProgId and returns an object reference which can be used to manipulate the object. Caché provides a CreateObject method too, as a class method of the %Activate.GenericObject class. Here is how it is used:
### 2.4.1 Example: Using CreateObject
Using the same Microsoft SysInfo object as above, we instantiate the object via its ProgId. Because the object is generic, that is, we have no type information for this object when instantiated in this manner, we must call the generic methods from the IDispatch interface which get and set properties and invoke methods by name:
```plaintext
Set obj = ##Class(%Activate.GenericObject).CreateObject("SYSINFO.SysInfo")
Write obj.GetProperty("BatteryLifePercent")
Set obj = ""
```
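Along the same lines, properties can be set and methods invoked by name on a generic object. The following is an illustrative sketch only: the ProgId "MYLIB.MyObject", the "Caption" property, and the "Refresh" method are hypothetical, and the sketch assumes the generic SetProperty and InvokeMethod calls that correspond to the GetProperty call shown above:

```objectscript
Set obj = ##Class(%Activate.GenericObject).CreateObject("MYLIB.MyObject")
// Set a property by name (assumed generic SetProperty method)
Do obj.SetProperty("Caption","Hello from Caché")
// Invoke a method by name (assumed generic InvokeMethod method)
Do obj.InvokeMethod("Refresh")
// Read the property back by name
Write obj.GetProperty("Caption"),!
Set obj = ""
```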
## 2.5 Monikers
COM provides an alternative way of instantiating an object indirectly by using what is known as a moniker as a substitute for the ProgId. Visual Basic provides the **GetObject** call which takes a moniker and returns an object reference which can be used to manipulate the object. Caché provides a **GetObject** method as a Class Method in the `%Activate.GenericObject` class. Here is how it is used:
### 2.5.1 Example: Using GetObject
Here, a moniker that accesses the LDAP protocol of the Active Directory service is used to return a reference to a collection of nodes which represents the users in the current domain. The count of users is written out and the object is closed:
```vba
Set obj = ##Class(%Activate.GenericObject).GetObject("LDAP://CN=USERS")
Write obj.Count()
Set obj = ""
```
## 2.6 The Become Method
Sometimes a type library specifies a method or a property which has a return type of the generic IDispatch interface. This can be very inconvenient because what you get is, in effect, an instance of `%Activate.IDispatch` on which you are forced to use generic methods (such as **GetProperty**) in order to get and set properties and invoke methods. If you know the interface that it really should be (from documentation or otherwise), then you can call the **Become** method on the `%Activate.IDispatch` instance and retrieve the new (now typed) interface. The **Become** method takes the name of a class as its argument. Effectively, the `%Activate.IDispatch` instance becomes an instance of the class name you pass to the method. **Become** throws an exception if the object you call it on does not support the new typed interface.
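As an illustrative sketch (the object variable, the Documents property, and the interface class name below are all hypothetical), suppose a wrapper exposes a Documents property that is typed only as IDispatch, but the component's documentation says that it really returns the IDocuments interface:

```objectscript
// app is an instance of a previously generated wrapper class (hypothetical)
// The Documents property returns a generic %Activate.IDispatch instance
Set generic = app.Documents
// Become takes a class name and returns the new, typed interface;
// it throws an exception if the object does not support that interface
Set docs = generic.Become("Activate.SomeLibrary.IDocuments")
Write docs.Count,!
```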
## 2.7 Events
Some COM components have the ability to fire events during the processing of a method. The events are grouped into an event or “source” interface, which is given a name. For example, given a COM object called MyClass, the interface may be called “MyClassEvents” or, in the case of a COM object created with Visual Basic, “__MyClass”.
Caché Activate provides event handling via two classes: `%Activate.RegisterEvents` and `%Activate.HandleEvents`. If a COM object generates events, the generated Caché class will inherit from the `%Activate.RegisterEvents` interface class. This adds two methods, `%RegisterHandler` and `%UnRegisterHandler`. In addition to the regular COM object proxy class, another class is generated which represents the event interface. This class will inherit from `%Activate.HandleEvents` and implements the `%Advise` and `%UnAdvise` methods as well as methods to handle the specific events defined by the event interface.
### 2.7.1 Example: Using COM Events
An example may make things clearer. Suppose we have a hypothetical COM object which does an FTP transfer. As well as implementing methods such as **Connect**, **Close**, and **Download**, the object implements an event interface which exposes a single method, **BytesTransferred**. Following a successful connection and initiation of a download, the FTP object will fire the “BytesTransferred” event after each kilobyte of data that it has downloaded. The event will be represented by a **BytesTransferred** method which has two parameters: an integer, `Bytes`, and a boolean, `Cancel`, which is passed by reference. When the event fires, the **BytesTransferred** method will be called, passing the current values of the arguments `Bytes` and `Cancel`. These values are then available for processing. Typically the `Bytes` argument will be displayed via the user interface.
Because the Cancel argument has been passed by reference, its value may be set and returned to the COM object which fired the event. In this instance setting Cancel to True (-1 for COM) will indicate to the COM object that the current operation should be interrupted and the call to Download should return immediately. If the download completes normally, the call to Download will return control to the caller and no more events will be fired. In Caché, the FTP COM object would be represented by a generated class such as Activate.SomeLibrary.FTP and the event interface by the class Activate.SomeLibrary.FTPEvents.
This example would look something like this. First an instance of the FTP object would be created:
```
Set FTP = ##Class(Activate.SomeLibrary.FTP).%New()
```
We want to handle events so we create an instance of an event handler:
```
Set FTPHandler = ##Class(Activate.SomeLibrary.FTPEvents).%New()
```
Before events can be handled the event handler must be registered with the object that actually fires the events, so we call:
```
Do FTP.%RegisterHandler(FTPHandler)
```
Now we connect and do a download:
```
Do FTP.Connect("ftp.intersys.com")
Do FTP.Download("/public/somefile.txt")
```
During the download the following method would be called on the Activate.SomeLibrary.FTPEvents class:
```
Class Activate.SomeLibrary.FTPEvents
{
//...
Method BytesTransferred(Bytes As %Integer, ByRef Cancel As %Boolean)
{
//...
}
}
```
**Note:** It is up to the developer to actually implement the BytesTransferred method by editing the Activate.SomeLibrary.FTPEvents class directly or preferably by subclassing the class and providing the implementation in the subclass.
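A minimal sketch of such a subclass follows (the class names extend the hypothetical FTP example above, and the progress output and cancellation threshold are illustrative only):

```objectscript
Class MyApp.FTPEventHandler Extends Activate.SomeLibrary.FTPEvents
{

/// Called by the COM object after each kilobyte of data transferred
Method BytesTransferred(Bytes As %Integer, ByRef Cancel As %Boolean)
{
    // Report progress; a real application might update a user interface instead
    Write "Transferred: ", Bytes, " bytes", !
    // Setting Cancel to -1 (COM True) asks the object to abort the download
    If Bytes > 1000000 Set Cancel = -1
    Quit
}

}
```

An instance of this subclass would then be passed to %RegisterHandler in place of the FTPEvents instance shown above.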
Following the download, we do not want events to be handled anymore so we unregister the handler:
```
Do FTP.%UnRegisterHandler(FTPHandler)
```
and tidy up:
```
Set FTPHandler = ""
Set FTP = ""
```
Abstract—This paper presents an approach for developing Linux interface standards aimed to improve portability of applications among different Linux distributions. The approach is based on usage of database-driven informational system that simplifies creation and maintenance of interface standards by standardization committees and their usage by application and distribution developers. A logical model of interfaces between Linux applications and distributions is described which is used to design schema of the informational system’s database.
Keywords—Software requirements and specifications, Software standards, Data management.
I. INTRODUCTION
The Linux operating system is becoming more and more popular. Nowadays it is used not only by enthusiasts, but by many commercial companies, corporations and government organizations. Nevertheless, the market share of Linux in some areas (in particular, on desktops) is still relatively small. One of the main reasons that prevents the growth of Linux popularity in these market segments is the lack of applications for this operating system that satisfy all the needs of the target audience.
This lack of applications arises, in particular, from the huge variety of existing operating systems based on the Linux kernel, GNU libraries and utilities and other common components. Such systems are called Linux distributions; there are several hundred distributions at the moment [1] and the situation is constantly changing – as time goes by, new distributions appear while others become obsolete and unsupported, but the total number of distributions keeps increasing.
Most components that form a distribution are maintained not by distribution vendors themselves, but by different third-party developers. This saves a lot of resources and effort, but leads to another kind of problem. Many developers in the Open Source Software (OSS) world follow the “Release early, release often” policy [2], and it is not uncommon for software updates to appear several times a month. Such frequent releases lead to situations where many different versions of the same component exist which, in general, provide different functionality. Moreover, distribution vendors often modify software taken from upstream, sometimes slightly, but sometimes significantly – for example, they can add new unique functionality that gives their system an advantage over the others. As a result, the functionality of the same component in different distributions can vary significantly.
A large variety of distributions provides users with a wide choice of Linux implementations, but such a variety makes it difficult to develop portable software that would be able to run in every Linux distribution without any additional actions on the user's side. The approaches used by software vendors to increase the number of supported distributions depend on the kind of license under which their programs are delivered. From a licensing point of view, we should distinguish open software, whose source code can be obtained by interested parties for investigation and modification, from closed, or proprietary, software, whose license forbids code modifications.
Developers of open source programs usually leave the task of adapting the software to those distribution vendors who want to include their programs. In this case it is distribution engineers who test applications inside particular systems and modify their source code, if necessary. Finally, users themselves can build the program from source (and rely on tools like GNU Autotools that can take care of differences in build environments [3]).
Developers of proprietary software cannot take this route. Instead, they have to provide binary executable files and shared libraries for their applications that are ready to use “as is”, without recompilation or other actions. But it can be very expensive and time consuming to test an application in every existing distribution. That is why many proprietary vendors declare that they only support a few selected systems – usually those that have a significant market share, such as SUSE Enterprise Linux or Red Hat Enterprise Linux (for example, IBM XL Fortran supports only these two distributions [4]; Intel Fortran Compiler supports seven systems [5], but this is also not a large number). However, end users normally expect to buy products “for Linux”, not “for SUSE” or “for Red Hat”.
A promising approach to simplifying the creation of portable applications is standardization – the development of requirements that should be satisfied by all standard-compliant systems. In our case, interface standards are required that guarantee that every compliant operating system provides certain interfaces (in particular, libraries and functions) that can be used by applications.
Standards are useful not only for proprietary vendors, but
also for developers of open source programs. The more modifications are required to adapt a particular application for some distribution, the more likely the modified program is to differ significantly from its origin and no longer be exactly what the original developer intended. In addition, if several programs exist providing the same functionality, distribution vendors are likely to choose those that require less effort for maintenance and adaptation. Following standards gives developers a guarantee that their product will suit any standard-compliant system and is unlikely to be subjected to significant modifications.
Modern Linux distributions are large and provide millions of interfaces of different kinds. For standardization committees, it is important to investigate which interfaces are most required and useful; due to the huge number of existing interfaces, some automation of this analysis is desirable. But even with careful selection of standardized interfaces, standards can, in turn, become huge, so their size will cause problems both for standardization committees (responsible for standard maintenance and further development) and for developers, who will have to investigate thousands of pages of specification text. Thus, an approach is required to organize the development process of an interface standard that simplifies both standard maintenance and development by the appropriate committees and standard usage by its target audience – primarily, application and distribution developers.
The remainder of the paper is structured as follows: Section 2 reviews the most valuable interface standards in the OSS world and analyzes the approaches and techniques used during their development. Section 3 introduces an approach to organizing the interface standard development process which is based on the use of a database-driven informational system. Section 4 describes the application of the approach to the Linux Standard Base development process. Finally, Section 5 summarizes the main ideas.
II. STANDARDS IN THE OPEN SOURCE WORLD
The portability problem is not a new one for open software, and standardization is declared to be one of the key principles of Open Systems that should solve this problem (at least partially). However, even with such a principle, practice shows that it is not always easy to achieve full compatibility between different products. Problems arise in two areas – standard development and maintenance by standardization groups and committees, and standard usage by its target audience – developers of applications and OS components.
Roots of the first problem lie in the huge number of existing libraries and functions – a modern Linux distribution delivered on a single DVD provides several hundred libraries which, in turn, export hundreds of thousands of functions. Not all of these functions can be considered stable, safe, backward compatible, etc. – that is, not all functions can be characterized as “best practice” and recommended to everyone. One of the main tasks of a standardization committee is to select those interfaces that have proved to be useful, and probably to help improve those interfaces which are not mature yet. That is why it is important to estimate the real needs of applications, the capabilities of existing Linux implementations and the common practices used to solve particular problems, in order to standardize the most requested and important interfaces first. Moreover, besides such interface-importance analysis, the standardization process involves development and maintenance of specification text, tests and other accompanying products and informational resources – that is, standardization is actually an expensive and time-consuming task, so it is not desirable to waste resources.
Another effect of the large number of existing libraries and interfaces is that standards can become very large, too. This leads to the second problem – large specifications are hard for their target audience to use, since it is not easy to investigate a dozen volumes of specification, several hundred pages each. In order to make developers' lives easier, some standards are accompanied by auxiliary tools, informational resources and other additional products. A common example of such a product is a test suite that can be used to check if an application meets all standard requirements. A more sophisticated example is a specialized development environment whose usage during the application compilation and build processes guarantees compliance of the resulting program with the standard.
Such auxiliary components form a standard environment. All parts of this environment should be kept in sync with each other and with the specification text. For example, if it is decided to remove some interface from the specification, then the test suite for applications should be updated to forbid usage of this interface, the application development tools should be modified to avoid usage of this interface, and so on. Thus, while a complicated and feature-rich standard environment is useful for its target audience, it can significantly complicate development and maintenance of the standard and its accompanying tools.
One more issue of standardization we would like to mention is that standards are not always fully suitable for every particular area. It is not uncommon for several standards to exist that cover some area, or for a small subset of a standard to be enough for some class of systems. In such cases, standard profiles are developed – unions of existing standards or their subsets aimed at creating a specification covering a certain class of systems. As for interface standardization, profiles are called for when developing highly tailored products – for example, those intended to be used only on high-loaded servers or inside mobile devices. Developers of such applications only consider operating systems that can work on their target platforms, and it would be useful for them to have a standard that describes only that particular class of systems. To be sure, the existence of specifications that already cover (at least partially) the target area can simplify development of a new document, and profile development is usually cheaper than development of a standard from scratch. However, it can introduce its own problems – when selecting subsets of existing standards and then joining these subsets into a single document, it is important to keep the internal consistency of the resulting specification.
In addition, it can be useful to reuse existing auxiliary tools, and these tools should also be adapted for a new profile – superfluous tests should be dropped, informational resources from different specifications that form the profile should be somehow combined, and so on. Thus, profile development is not as cheap as it can seem to be.
All the problems mentioned above are not new, and they have been faced by different standardization workgroups. Let us consider the approaches used to solve them by some well-known interface standards that are in use in the Linux world.
A. POSIX and SUS
The most famous and mature open standards for operating system interfaces are POSIX and the Single UNIX Specification (SUS). Initially, these specifications were developed to achieve portability of applications among different UNIX implementations at the source level. This approach supposes standardization of the system Application Programming Interface (API), the core part of which is the functions provided by system libraries and declared in the appropriate header files. It is guaranteed that any application that meets the requirements of some API standard can be compiled from its sources on any operating system compatible with that standard.
Roots of the Single UNIX Specification lie in the Common API Specification, developed in the early 1990s by the COSE alliance formed by all leading UNIX vendors of that time. The main purpose of this alliance was to investigate existing UNIX implementations and create a list of functions that were present in all UNIX systems. The resulting list contained 1170 functions and for this reason it is also known as Spec 1170. In 1992-1993, during the SUS development, an additional study of 50 leading UNIX applications was performed and an additional list of 130 functions was created and suggested for standardization [7].
Application and distribution analysis during SUS and POSIX development was primarily performed manually and involved deep source code investigation by analysts. In the early 1990s, this approach was suitable and allowed a high-quality and complete analysis to be performed.
A problem with the initial versions of POSIX and SUS was that these standards considered only some relatively low-level functions and calls, but this was not enough for many applications even at that time – such popular areas as graphical user interfaces or multimedia were completely out of the standardization scope. The need for more areas was understood by the standardization committees, and it was decided to develop several SUS profiles – specifications that were based on POSIX but extended it with interfaces specific to particular areas. The SUS version 2 specification presented three profiles – the Base Specification (predecessor of POSIX 2001), UNIX98 Workstation (with GUI requirements based on the Common Desktop Environment – CDE – and the Motif library) and UNIX98 Server (specifying additional network services and the Java Runtime Environment).
Unlike the base specification, the extended profiles were suitable for UNIX-based systems only – for example, there were no free Motif and CDE implementations for Linux. Moreover, there were no competing implementations of CDE or Motif at all; competing implementations of some other standardized items were allowed, but they had to follow other existing specifications (like the Java RE). Thus, during extended UNIX profile development, the standardization workgroups did not have to analyze alternative implementations; they only had to choose some top-level standardization directions – for example, once it was decided that CDE would be the standard desktop environment, there was no need to investigate different (and partially incompatible) implementations of CDE, since there was only one implementation of it in the wild.
On the other side, POSIX itself was divided into several subsets that also formed a set of profiles – such as the POSIX.1b real-time extensions. However, these profiles were even smaller than POSIX and their creation did not require the investigation of new standardization techniques.
B. LSB
An alternative approach to API standardization is to standardize the Application Binary Interface (ABI), giving developers the opportunity to use the same executable files and shared libraries in all compliant systems, without a need for recompilation. The core part of such ABI standards is the shared libraries that should be provided by the operating system and the binary symbols exported by them (a binary symbol is a binary-level entity corresponding to either a function or a global variable exported by a library). For application developers, ABI standardization is preferable to API standardization, since it does not require any actions (neither from developers nor from users) in order to port a program to any standard-compliant system. However, ABI standards impose many more limitations on the OS – in particular, it is clear that all target systems should use the same format for binary executables and shared libraries. That is why ABI standards often cover fewer systems than API ones.
Nowadays this approach is used by the Linux Standard Base specification (LSB), which is intended to be applied to Linux-based systems only [12]. Roots of LSB lie in POSIX and SUS, and the standardization process is also similar in many ways. In particular, LSB developers constantly perform analysis of existing distributions and applications in order to select the most important and useful interfaces. Initially, the analysis process was also performed manually; but by now the amount of data that should be analyzed has increased dramatically, and manual analysis no longer works well. In particular, during LSB 3.0 development, only interfaces provided by the RHEL and SLES distributions were taken into account, while there were several hundred different Linux distributions in the world.
LSB has a rich environment, consisting of test suites, a development environment for application vendors, online informational resources and other products. All these items are, on the one hand, independent products; on the other hand, they all represent the LSB in some way and should be kept consistent with it. The size of all these products makes it hard to perform such synchronization manually; in order to automate this task, a specification database was designed to store information about standardized elements, accompanied by a set of tools that were used to synchronize LSB environment components with each other and with the LSB itself.
After LSB 3.0 was released and development of the next version was started, it became clear that the current infrastructure required too much manual work and could not satisfy all the needs of the LSB workgroup. In December 2006, Ian Murdock (CIO of the Free Standards Group, which was responsible for LSB development at that moment) formulated the following problems of the LSB Infrastructure at the LSB Face-to-Face meeting [12]:
- lack of means for analyzing the Linux ecosystem that would allow further development directions to be selected effectively;
- complexity of supporting several LSB versions at once, caused by the absence of information about standard evolution in the database;
- high complexity of adding new interfaces to LSB – though the database solved the problem of synchronizing the specification text and environment components, populating the database with data was not a trivial task;
- lack of auxiliary tools that would help distribution and application vendors to use LSB in the development process.
Summarizing the POSIX and LSB experience, we can conclude that as the size of operating systems (measured in the number of interfaces) grows, the amount of work to be performed by standardization committees increases dramatically, and the approaches to standard development that proved useful a decade ago nowadays fail to satisfy all the needs of both standardization committees and the developers who use the standards. New approaches are required that would help both standardization workgroups and standard users to perform their work effectively.
III. AN APPROACH FOR LINUX INTERFACE STANDARDS DEVELOPMENT
In this paper, we present an approach to Linux interface standards development. The approach includes the following stages:
1) Analysis of the Linux ecosystem:
- selection of popular and most important applications and analysis of their requirements for system libraries and functions;
- collection of information about existing distributions – in particular, about provided libraries and exported functions.
The set of applications and distributions is constantly evolving, so it is necessary to have data not only with respect to some fixed time point, but to collect information about the evolution of the Linux ecosystem over the last several years. It is important to perform constant monitoring of the ecosystem, and the results of this monitoring at certain time points can be used to create the next version of a standard, as demonstrated in Fig. 1.
2) Preparation of a new standard version. This stage includes selection of the interfaces which are most needed by applications, have proved to be stable and are provided by all modern distributions. Then, on the basis of this set, a consistent set of interfaces is constructed which will be included in the specification.
3) Addition of semantic information (in particular, descriptions of the functionality that should be provided by interfaces), development of tests, adaptation of the standard certification system to support the certification process for the new version, and other tasks that should finalize the release of a new standard version.
In order to support this method, we suggest building an informational system which could be used to automate (at least partially) the most time-consuming tasks. The suggested informational system is based on a logical model of interfaces in the Linux ecosystem.
A. Logical Model of Application Interfaces with the Linux OS
In this paper, we concentrate on Application Binary Interface (ABI) – that is, we consider interfaces between binary executables and libraries of applications and shared libraries of distributions. Thus, we consider applications as a set of compiled files (executables and shared objects). In Linux, the main format used for such files is ELF (“Executable and Linking Format”). In our model, we’ll include some items related to the ELF format; the general ELF description is provided by the System V ABI Specification [6]; some Linux specific extensions are described in the appropriate LSB sections [8].
All properties of any item which is a part of the system ABI or API can be divided into two groups:
- **structural** properties, that can be checked statically – for example, names of functions exported by library or signature of any function from a given header file;
- **semantic** properties, whose analysis usually requires runtime testing – for example, function behavior.
The model described in this paper includes structural interface properties only, abstracting away from semantic aspects.
As elements represented in the model, we use interfaces involved in the process of **dynamic loading** of application files [10]. Compatibility between an application and a distribution with respect to such interfaces guarantees that the application can be successfully **launched** in the distribution – that is, the dynamic loader will be able to resolve all external dependencies of the application, form the executable image in memory and pass control to the application's main entry point.
The following interfaces are considered:
- **libraries** – a special kind of ELF files that can export interfaces;
- **binary symbols** exported by libraries – these are binary level entities corresponding to functions and global variables;
- **structure and size of types** used as function parameters and return values;
- **ELF file attributes** – class (32bit or 64bit), target architecture of a file and types of sections that exist in file.
Concentrating on application launching process, the model leaves out of account the following ways of interaction between Linux applications and distributions:
- dynamic loading of shared libraries and dynamic invocation of symbols exported by them at runtime (for example, using the **libdl** library capabilities);
- invocation of external commands and utilities at runtime (for example, using the **system** or **exec** functions).
However, modern recommendations on developing portable applications forbid the use of such mechanisms, unless all files involved in the interaction are part of the application. An indirect dependency on a system library or command cannot be checked by means of the operating system itself (e.g., by the dynamic loader), so it is the application developer who should check that the necessary files exist and provide all required interfaces. However, such checks add complexity to any program, and improperly performed checks can lead to program crashes or unexpected behavior [9].
B. Informational System to Support Development and Usage of Linux Interface Standards
In order to support the approach to interface standard development described above, we use an informational system providing the following possibilities:
- planning of further standard evolution;
- creation of new versions of standard and its profiles;
- ensuring consistency of standard environment components;
- checking of how different Linux distributions and applications are compliant with the standard.
The informational system is aimed at automating the most time-consuming tasks that arise during the processes described above.
The main components of the system are the following:
- a **database** with information about both standardized interfaces and interfaces used by existing applications and provided by distributions. The database schema is based on the logical model of interfaces described above;
- automated **data collection tools** used to gather information to populate the database with data;
- automated **generators** that use the database to create components of standard environment.
The database should store information about all interfaces, with their characteristics described in the specification, which are used by at least one component of the standard environment. If any component during its work requires some information about standardized interfaces which is described in the specification, this information should be either directly queried from the database when such a need occurs, or embedded in the component code at compilation time by appropriate automated generators. In particular, if some component needs to know the list of included interfaces, this list should always be taken from the database. This approach guarantees that all components are kept synchronized with each other and with the specification text. To be sure, it is required for the specification text itself to be synchronized with the database; one way to achieve this is to generate those parts of the text that are represented in the database – that is, the database should be the only source of information about standardized items.
Besides the information about standardized items, the database should also contain all the data which is used by several components of the standard environment, even if this data does not concern the standard itself. This allows different components to be kept synchronized with respect to their common data.
Due to the large number of interfaces that exist in the Linux world and should be subjected to analysis, the data collection tools should be as automated as possible. Collection of data about the interfaces included in our logical model can be almost fully automated, as demonstrated in the authors' work [15]. Moreover, collection of additional information (e.g., header files) which is not used during Linux ecosystem analysis but is required for development of different LSB environment components can also be automated to a significant degree [14].
A data workflow diagram of our informational system is shown in Fig. 2.
These two tasks can require knowledge about different characteristics of the same interfaces. In particular, due to the large number of existing interfaces that should be subjected to analysis, it can be reasonable to store only those ecosystem data that can be collected automatically; however, standards can be more descriptive and include more characteristics in addition to the collected ones, so the automatically collected data can be insufficient for the environment generators.
In order to store information about several versions of a standard (that is, to store standard history), the database schema should be extended with attributes containing temporal data. Different approaches exist for introducing such extensions; in our work, we use the Temporal Relationship Model (TRM) [11], which is based on the relational model but adds new temporal attributes to every relation. With this model, there is no need to use a specialized temporal DBMS; the database can be served by any relational DBMS – the most popular and widespread kind of DBMS at the moment.
The two obligatory attributes added by the temporal model are the beginning and the end of the entity's life period – a time interval during which the entity preserves its characteristics. In our case, such interval boundaries are standard versions – that is, the time interval for some standardized item indicates the set of standard versions in which this item was included with the same characteristics. A special NULL value is used to indicate unbounded intervals, which correspond to items that exist in the last standard version (that is, that have never been excluded from the specification).
Temporal attributes are added only to those entities that correspond to standardized items; these attributes are not required for entities that represent interfaces existing in the Linux ecosystem. More details about using temporal databases for tracking standard evolution can be found in another work by the authors [13].
IV. THE LSB INFRASTRUCTURE PROGRAM
One of the largest standards that specify interfaces of the Linux OS is the Linux Standard Base (LSB). The standard is developed by an international consortium, The Linux Foundation, formed by leaders of the Linux market. The primary content of the standard consists of lists of libraries that should be present in any compliant Linux distribution, accompanied by lists of binary symbols that should be exported by these libraries. The standard is constantly evolving, and more and more interfaces are added – the latest version, LSB 4.0, describes more than 38,000 functions from 57 libraries. Notably, in the four years since the LSB 3.0 release, more than 30,000 functions were added.
Such swift growth of the specification exposed significant problems in its development process and surrounding infrastructure. Among the most important issues were the lack of support for Linux ecosystem analysis and the difficulty application developers had in using the specification text – even LSB 3.0 consisted of several thousand pages and contained references to several dozen other specifications [12].
In 2006, a joint program of The Linux Foundation and the Institute for System Programming of RAS was started to improve the LSB Infrastructure. The main purpose of the Program was to resolve existing issues that complicated standard maintenance; it was decided to create an informational system that would both simplify further LSB development and simplify its usage by the target audience – Linux application developers and distribution vendors.
By the beginning of the Program, the LSB infrastructure already contained a central database with information about standardized interfaces. That database was used to generate parts of the specification text (lists of libraries, binary symbols, etc.), to create header files and stub libraries for the LSB Development Environment, and to generate primitive tests checking the presence of certain objects (libraries, commands, etc.) in distributions.
During the LSB Infrastructure Program, the following tasks were performed:
- an extension of the LSB database, called the Community Database, was developed to store information about interfaces provided by existing Linux distributions and used by Linux applications; automated tools were developed to collect such data and populate the database with it. Nowadays that database contains information about 250 Linux distributions and 1,200 applications;
- during the development of the LSB Navigator, automated tools were created to support analysis of data about existing Linux distributions and applications during the LSB development process. These tools allow the workgroup to discover potential candidates for standardization and to check the formal rules that candidates should meet in order to be included in the specification;
- a temporal extension of the LSB database was developed to store information about all existing LSB versions. All tools that use information from the database were modified to be able to extract data corresponding to any given specification version. Moreover, some products created using the database now support several LSB versions at once – in particular, the LSB Development Environment can be used to build applications compliant with any given LSB version.
Work is currently in progress on improving profile support in the LSB Infrastructure, driven by the need to develop a profile for mobile devices.
The current structure of the LSB Environment is shown in Fig. 3.
The tools developed during the Program made it possible to reorganize the LSB development process – automation of many time-consuming tasks allowed LSB workgroup members to concentrate on their primary objective: selecting interfaces that should be included in the specification and elaborating descriptions of their behavior. Moreover, the decision-making process itself was significantly improved – the new infrastructure made it possible to perform deeper analysis of the Linux ecosystem and to better understand the current needs and evolution tendencies of applications and distributions. For example, during the LSB 3.0 development only two distributions were subjected to deep analysis (RHEL and SLES), and information about application needs was limited to direct requests from application developers (expressed in either the LSB Bugzilla or mailing lists). With the new infrastructure, during the LSB 4.0 development the workgroup analyzed all versions of 12 distributions released during the last three years and more than 1,000 applications.
This, in turn, made it possible to increase the number of standardized interfaces from 6,000 in LSB 3.0 to 38,000 in LSB 4.0. Nowadays we can say that the most significant problem with standardization of new interfaces is the development of runtime tests; all other tasks (collecting data for the LSB database, keeping components of the LSB Development Environment synchronized, etc.) are highly automated and do not require much engineering effort.
V. CONCLUSION
This paper has suggested an approach to developing Linux interface standards aimed at improving portability of applications among different Linux distributions. The approach is based on a database-driven informational system that simplifies the creation and maintenance of interface standards and their environment by standardization committees, as well as their usage by application and distribution developers. A logical model of the interfaces between Linux applications and distributions is described, which is used to design the schema of the informational system's database.
Usage of a central database to create the different components of the standard environment keeps these components synchronized with each other and with the specification text automatically – every change in the database is automatically reflected in all components by means of appropriate generators. Temporal extensions of the database make it possible to store the standard's evolution history, which, in turn, allows several standard versions to be supported by the same database and accompanying tools.
Though in this paper we have considered ABI standards, the suggested approach is suitable for developing API standards, too. In order to support an API specification, the model of interfaces between Linux applications and distributions should be modified – binary-only elements (e.g., ELF attributes) should be dropped, while entities that are present only at the source level (e.g., constants and macros) should be added. Actually, the LSB database described in this paper already stores some source-level entities, and tools exist to automate the collection of such information.
The LSB Infrastructure project has demonstrated the practical strength of the method of Linux interface standard development suggested in this paper. The informational system created during the project automated analysis of the Linux ecosystem and significantly increased the speed of the decision-making process. The automated data collection tools and database-driven generators eliminated the technical complexity of adding new interfaces to the LSB. Finally, the new LSB Infrastructure supports the development of profiles based on the LSB specification.
REFERENCES
Beyond the Password
Ricardo Almeida
Instituto Superior Técnico
Lisbon, Portugal
ABSTRACT
Nowadays, most of our devices are used to store our personal information, and the conventional mechanism for protecting this data is the password. This report explains the problems associated with storing personal information when passwords are used as the data security mechanism. The workaround for this problem is to use biometrics, rather than passwords, to protect the data within devices. The purpose of this work is to implement a system, called BioAuth, to perform second-level authentication requests using biometrics. To achieve this objective, an investigation of existing works that use biometrics is first carried out, followed by an analysis of these works, from which it is concluded that the biometrics most suitable for mobile devices are fingerprint, facial recognition and speech recognition. Finally, the system architecture and implementation are explained, along with how the system tests were performed and what can be added to the system in the future.
Author Keywords
Biometrics; Authentication; FIDO UAF; Security; Cryptography; Xamarin;
ACM Classification Keywords
• Security and privacy~Biometrics • Security and privacy~Web protocol security • Security and privacy~Digital signatures • Security and privacy~Mobile platform security
INTRODUCTION
Nowadays, with the evolution of technology, it is possible to access all kinds of services, such as e-commerce, online banking and e-payment, using our personal devices, like smartphones.
Such services require user authentication to allow users to access their accounts or to perform web transactions. Almost all these services use passwords as the authentication and authorization mechanism.
However, problems emerge for users when passwords are used as a security mechanism. For example, if the user wants strong protection, he must memorize long passwords, which generally should not contain dictionary words and should combine numbers, special characters, and lowercase and uppercase letters [1]. It becomes hard for the user to remember such passwords.
However, even with strong passwords, the information is not completely secure, since over the years criminals have learned to crack every kind of password [2].
Security and memorization are not the only problems associated with long passwords. It is important to notice that nowadays people use smartphones for everything, so user-friendly authentication mechanisms are important too [1], since mobile devices are intended for quick and frequent access. Strong passwords are not appropriate for mobile devices due to the length of time required for their input, which can also lead to password disclosure.
These two problems need to be solved: improving (i) the security and (ii) the usability of authentication and authorization mechanisms.
To achieve a higher level of security, there exist alternative authentication mechanisms apart from knowledge (passwords), such as possession factors and biometrics.
Authentication based on a possession factor, although more secure than the knowledge factor, is still not a perfect solution, since the user needs to possess and carry an additional object, such as a smart card or a USB pen, in order to authenticate. It is also the opposite of usability.
Biometrics are the answer to these problems. Biometrics are described as automatic recognition of individuals through their unique physiological (fingerprint, face, iris, etc.) or behavioral (voice, gait, signature, etc.) attributes.
Biometrics offer several advantages over knowledge (passwords) since they are present on users all the time, can be inputted as quickly as a glance or a touch and they cannot be forgotten.
With biometrics, the two problems mentioned are solved: even if a smartphone is stolen, the information can only be accessed by the legitimate user, and to access the information the legitimate user does not need to remember long and complicated strings – he only needs to present one of his biometric attributes to the biometric system.
Most recent smartphones contain many useful sensors, such as GPS, cameras, touchscreens, fingerprint scanners, gyroscopes and microphones. Such sensors allow the use of biometrics as an authentication mechanism.
Additionally, biometrics are the best choice to study and implement, since mobile devices are typically dedicated to a single individual.
Objectives of the Work
The objective of the proposed work is to study the current technology regarding biometric systems and authentication protocols in the literature, in order to implement a second-factor-authentication framework as an alternative method for business applications available on the web, such as BankOnBox and eDocLink (both systems belong to the Link Consulting company). The framework must use the hardware capabilities of current mobile devices to perform biometric authentication.
RELATED WORK
In this chapter, the following are studied:
- Biometrics, which biometrics exist and which present the best results in terms of accuracy to recognize the correct person;
- Authentication protocols, to compare different biometric authentication protocols and implement the system using the best protocol.
Biometrics
Biometrics can be divided into sub-categories:
- Hand Region: fingerprint, palmprint, hand geometry, hand vein pattern and finger knuckle print;
- Facial region: face, ear shape, teeth, tongue print;
- Ocular region: retina, iris, sclera vasculature;
- Medico-chemical: Body odor, DNA, heart sound, electrocardiogram;
- Behavioral: Keystroke dynamics, voice, signature, gait;
Fig. 1 illustrates the current accuracy of each biometric modality [3]. The accuracy of a biometric system is the ability of the system to correctly recognize the user.
Due to the hardware limitations of current mobile devices, most of these biometrics cannot be used as authentication mechanisms, while others, like gait, are not suitable for authentication purposes [4].
Currently, the best biometrics to use on mobile devices for authentication are:
- **Fingerprint**, widely implemented in current smartphones, by a fingerprint sensor.
- **Voice**. Every smartphone comes with a built-in microphone.
- **Face**. Like voice, every smartphone comes with a built-in camera and the image quality of these cameras is constantly increasing.
<table>
<thead>
<tr>
<th>Biometric modality</th>
<th>Accuracy level</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fingerprint</td>
<td>99.9%</td>
</tr>
<tr>
<td>Palmprint</td>
<td>> 95%</td>
</tr>
<tr>
<td>Hand geometry</td>
<td>> 95%</td>
</tr>
<tr>
<td>Vein pattern</td>
<td>99%</td>
</tr>
<tr>
<td>Finger knuckle print</td>
<td>Not available</td>
</tr>
<tr>
<td>Face</td>
<td>95%</td>
</tr>
<tr>
<td>Ear</td>
<td>> 95%</td>
</tr>
<tr>
<td>Tongue print</td>
<td>Not available</td>
</tr>
<tr>
<td>Iris</td>
<td>99.9%</td>
</tr>
<tr>
<td>Retina</td>
<td>99%</td>
</tr>
<tr>
<td>Sclera</td>
<td>Not available</td>
</tr>
<tr>
<td>Voice</td>
<td>> 90%</td>
</tr>
<tr>
<td>Keystroke dynamics</td>
<td>> 90%</td>
</tr>
<tr>
<td>Gait</td>
<td>> 90%</td>
</tr>
<tr>
<td>Signature</td>
<td>> 90%</td>
</tr>
</tbody>
</table>
Figure 1 - Biometrics' accuracy
Authentication Protocols
To integrate biometrics in the system to perform remote authentication, the main emerging protocol standards are Fast IDentity Online (FIDO) Alliance and Biometric Open Protocol Standard (BOPS) [5].
The core ideas of FIDO Alliance are (i) ease of use, (ii) security and privacy, and (iii) standardization.
FIDO Universal Authentication Framework (UAF) is a protocol that provides password-less and multifactor security for online services. The users can register their devices in the system by selecting a local authentication mechanism such as fingerprint, voice or face. The UAF protocol allows the service to select which mechanisms are presented to the user. Once registered, the users simply need to repeat the local authentication mechanism, chosen at registration phase, whenever they need to authenticate to the server.
Similar to FIDO, there exists BOPS, developed by Hoyos Labs and recently published as a standard by IEEE. Both protocols authenticate locally (on the user device) and the result is then sent to the specific server. The main difference between FIDO and BOPS lies in the biometric requirements: BOPS specifies several minimum requirements for the biometric recognition system, such as thresholds for the equal error rate (EER) and liveness detection, among others, while FIDO is more abstract, in that it imposes no requirements on the biometric recognition system.
FIDO Alliance and BOPS are standard protocols for secure biometric authentication. However, more companies are adopting the FIDO Universal Authentication Framework (UAF) protocol than BOPS, which makes FIDO UAF the best option for implementing the system.
SOFTWARE ARCHITECTURE
The proposed BioAuth's software architecture (Fig. 1) will be based on FIDO UAF Protocol with fingerprint biometric authentication.
The system is composed by the following components:
1. **User Device** This module represents the user’s mobile device (Android and iOS). For a regular user the mobile device is the way to perform the registration, authentication and deregistration in BioAuth system. The protocol is divided into 4 submodules:
1.1. **Authentication Application** - user interface, where the user interacts with the application.
1.2. **FIDO Client** - responsible for processing all the information received and information to be sent. It processes the information stored inside the mobile device.
1.3. The **Authenticator-Specific Module (ASM)** is a software interface on top of FIDO Authenticators.
2. **The Authentication System** processes registration, authentication and deregistration requests. It contains four submodules:
2.1. **Communication Service** is the entry point of all communication with the Authentication System. Every message is first analyzed by the Communication Service that then forwards it to the FIDO Server.
2.2. **Integration Service** is responsible for the integration between the Business Web Application and the Authentication Module. Messages from the Business Web Application are received by the Integration Service and retransmitted to the Communication Service to be analyzed. The responses are sent to the Business Web Application by the Integration Service too.
2.3. **FIDO Server** is responsible for processing all the requests. It processes the registration, authentication and deregistration requests and acts in conformity with the operation result. It communicates with the database to store and access user data.
2.4. **FIDO Database** is responsible for storing information related to the users registered in the BioAuth system.
3. **Business Web Application** represents the system with which BioAuth is integrated.
3.1. **Web App Engine** represents the business system BankOnBox, the online home banking system.
3.2. **Authentication Provider Plugin** is responsible for handling the communication between the Authentication System and the Business Web Application.
3.3. **Credential Verification Endpoint** used by the Communication Service to verify the user credentials before proceeding to registration.
4. **OneSignal** service is used to send push notifications.
**Data Flow**
BioAuth’s communication protocol is based on the FIDO UAF protocol. The communication is divided into three steps: registration, authentication and deregistration.
**Registration**
Before the registration in the BioAuth system, there is a pre-registration step that starts at BankOnBox (the Business Web Application). It is BankOnBox's internal policy to manage client registrations in the BioAuth system. If a user wants to register in BioAuth to perform second-factor authentication using biometrics, he needs to communicate that decision to a BankOnBox bank operator. The bank operator sends a pre-registration request to the FIDO Server, meaning that, from that moment, the user is able to register in the BioAuth system.
The user then accesses the mobile application and fills in a form with his BankOnBox credentials, which are used to verify that the user is valid and that a pre-registration already exists. This is verified by the FIDO Server. If the user is valid, the FIDO Server creates a “registration request” message that contains a random challenge it created itself.
The FIDO Client receives the “registration request” and starts the local user registration process. The user is prompted to use his fingerprint to create a cryptographic RSA key pair. The RSA private key is stored inside the mobile device and is used to create a signature of the challenge received. The FIDO Client sends a “registration response” message containing the signature along with the public key.
When the FIDO Server receives the “registration response” message, it verifies the signature using the client’s public key and sends the result of the signature verification to the FIDO Client. After this process, if the verification was valid, the user is registered in BioAuth system and is able to perform second-factor-authentications using the fingerprint.
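The cryptographic core of this exchange can be sketched in Python with the cryptography package; this is only an illustration of the protocol logic, since in the real implementation the private key is created and used inside the platform key stores described later and never leaves the device.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# FIDO Server side: the registration request carries a random challenge.
challenge = os.urandom(32)

# FIDO Client side: create the 2048-bit RSA key pair and sign the challenge
# (SHA-256 with PKCS#1 padding, matching the attributes listed later).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
signature = private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# FIDO Server side: verify the signature with the received public key.
try:
    public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    registered = True   # store the public key for this user
except InvalidSignature:
    registered = False
```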
**Authentication**
The authentication starts with the user in his BankOnBox account submitting a transaction request.
The BankOnBox, using the Authentication Provider Plugin sends a message to the Authentication System, received by the Integration Service, informing that a transaction needs to be verified and the user needs to be authenticated.
The FIDO Server sends a request to OneSignal asking it to deliver a push notification to the user's device. That request carries an “authentication request” message, created by the FIDO Server and containing a random challenge, which the User Device receives together with the push notification.
When the FIDO Client (User Device) receives the push notification, it extracts the challenge from the “authentication request” message. To sign the challenge the mobile device Operating System (OS) needs special user permissions to access the user’s RSA private key, created in the registration phase. The permission to access and use the private key is granted by the user authenticating himself to the system using the fingerprint registered in registration phase. After a valid user fingerprint authentication, the FIDO Client signs the challenge and creates an “authentication response” message containing the signature that sends to the Authentication System.
The “authentication response” is verified by the FIDO Server.
After the authentication, the BankOnBox will send a request to the Authentication System to verify if the authentication was valid. If the authentication was valid, the transaction is performed.
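On the server side, the essential check is that the returned signature matches the challenge that was actually issued for this transaction. A rough sketch of this verification, with an in-memory challenge store and hypothetical function names, is shown below.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

pending_challenges = {}  # transaction_id -> challenge issued but not yet answered

def issue_challenge(transaction_id: str) -> bytes:
    """Create the random challenge sent to the device inside the push notification."""
    challenge = os.urandom(32)
    pending_challenges[transaction_id] = challenge
    return challenge

def verify_response(transaction_id: str, signature: bytes, user_public_key) -> bool:
    """Check the signed challenge; each challenge can be used only once (anti-replay)."""
    challenge = pending_challenges.pop(transaction_id, None)
    if challenge is None:
        return False
    try:
        user_public_key.verify(signature, challenge,
                               padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```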
**Deregistration**
Deregistration process is similar to registration. BankOnBox manages the user entries on BioAuth so a pre-deregistration phase must take place before the deregistration. The deregistration request is sent by the bank operator to the Authentication System. After this first step, the deregistration takes place.
Deregistration starts in the user’s mobile device with the user pressing the “Deregister” button on the Authentication Application. The FIDO Client sends a message, containing the user identifier, to the Authentication System informing that a deregistration is requested.
The FIDO Server creates a “deregistration request” message, containing a random challenge.
When the FIDO Client receives the “deregistration request”, it extracts the challenge and signs it. A “deregistration response” message, containing the signature, is sent to the Authentication System.
The FIDO Server verifies the signature and if the result of verification is successful the user is set as invalid in the system (the user entry is never deleted from database). The verification result is sent to the FIDO Client.
If the verification result was successful, the FIDO Client permanently deletes the user RSA cryptographic key pair.
IMPLEMENTATION
Development Process
The system is implemented following a waterfall model.
Fig. 3 shows the schedule for the BioAuth system.
Development Environment
The system is implemented using Microsoft technology, for mobile and for server implementation.
The mobile device application is implemented using Xamarin Forms. Xamarin is cross-platform software from Microsoft that makes the programmer's life easier, since a single application can be built for multiple operating systems (OSs).
This way there is no need to learn another language, like Swift for iOS. In Xamarin Forms, using the C# programming language, an application is written once for both OSs, Android and iOS.
The Authentication System, where the FIDO Server is located, was implemented using ASP.NET Web API technology. ASP.NET Web API is a framework for building web APIs on top of the .NET Framework.
User Device
The user device module is responsible for:
- Communication with the Authentication System;
- The creation of the RSA cryptographic key pair;
- RSA cryptographic key pair storage;
- Use of fingerprint to authorize the use of the private key.
These tasks are performed using OS-specific APIs for Android and iOS.
Microsoft provides wrapper classes around the native APIs so that native methods of each OS can be used.
Communication with the Authentication System
The communication between the User Device and the Authentication System uses REST web services. The messages are sent as JSON objects, as specified in FIDO UAF Specification documentation.
The communication on the User Device is handled by the FIDO Client. To implement the communication, a RESTful library called Refit is used.
With Refit, it is only necessary to specify the communication endpoints in an interface class and call these methods, passing the objects to be sent. Refit automatically serializes these objects to JSON.
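The wire format itself is plain JSON over HTTPS. For illustration only, the same kind of exchange could be reproduced in Python with the requests library; the endpoint path and field names below are hypothetical and are not taken from the FIDO UAF specification.

```python
import requests

SERVER = "https://auth.example.com"  # hypothetical Authentication System address

def send_registration_response(user_id: str, signature_b64: str, public_key_b64: str) -> bool:
    """POST a JSON message to the server and read the verification result."""
    message = {
        "userId": user_id,
        "signature": signature_b64,
        "publicKey": public_key_b64,
    }
    reply = requests.post(f"{SERVER}/uaf/register", json=message, timeout=10)
    reply.raise_for_status()
    return reply.json().get("verified", False)
```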
RSA cryptographic key pair creation, key pair storage and fingerprint
These tasks were implemented in the OS-specific projects, since native APIs are used.
On Android, the RSA key pair is created using the "KeyPairGenerator" class; the key pair attributes are set through method calls.
On iOS, the "SecKey" class is used, with the method "CreateRandomKey", which receives a dictionary containing the key pair attributes.
The RSA attributes used on both OS are:
- 2048-bit key length
- SHA-256 message digest
- PKCS#1 padding
The key pair is stored inside secure hardware on both OSs. On Android it is stored inside a Secure Element (SE) or a Trusted Execution Environment (TEE), depending on the hardware specifications. On iOS it is stored inside the Secure Enclave, a co-processor.
To manage the key storage, the key storage provider of each OS is used: the Android Keystore on Android and the Keychain on iOS.
The private keys can only be accessed and used by user authorization. The authorization is given by the user by proving his identity using his fingerprint.
In this project only the fingerprint authenticator is implemented, since Android and Apple do not provide APIs that allow face or voice authentication to control the storage and access of keys.
On Android, fingerprint authentication is implemented using the Fingerprint API, while on iOS the TouchID API is used.
On Android, fingerprint authentication is requested by the application developer using the method "CreateConfirmDeviceCredentialIntent()", while on iOS it is the OS that automatically asks for the fingerprint each time it needs to access the private key.
**Authentication System**
This module is responsible for:
- Communication with external modules (User Device, Business Web Application and OneSignal)
- Signature verification
- Storage of user’s information, like user identifier, and RSA public key.
**Communication with external modules**
The Authentication System communicates with external modules in two ways: through the Communication Service and through the Integration Service.
The Communication Service is implemented using Microsoft ASP .NET Web APIs. It communicates with the User Device via HTTPS using RESTful web services. Each method of the Communication Service class has an attribute specifying the endpoint.
The Integration Service is responsible for communication between the Authentication System and the Business Authentication Application. The communication may use RESTful or SOAP Web Services. In this system the Integration Service communicates with BankOnBox system using SOAP. The Integration Service is a SOAP Web Service created using Windows Communication Foundation (WCF) framework (a framework to build service-oriented applications) from .NET.
Communication with OneSignal is performed via HTTP using REST web services. The communication is implemented using the OneSignal API, and the construction of requests to send to OneSignal is performed using "OneSignal.CSharp", which facilitates the creation of all required message fields.
**Signature Verification**
Signature verification is performed by the FIDO Server module.
The BouncyCastle library was used to perform signature verification with the user's public key. Since Android and iOS send the user public keys with different encodings, the method to reconstruct the public key and verify the signature differs for each OS.
Android sends the public key encoded as ASN.1 X.509 DER, while iOS encodes it as ASN.1 PKCS#1 DER.
To recreate the public key from Android devices, the class "PublicKeyFactory" is used, and then the class "RSACng" verifies the signature.
With iOS, the public key is recreated using the class "PKCS1Asn1SequenceToPublicKey" and the signature is verified using the class "SignerUtilities".
**Storage of user’s information**
To integrate the FIDO Server with a database, Entity Framework is used. Entity Framework is an object-relational mapper that lets me work with the database using .NET objects. I followed a code-first approach, in which tables are created and managed only through code.
**Business Web Application**
The Business Web Application represents a system such as a home banking system or a file management system. The current BioAuth implementation is only integrated with the BankOnBox system, an online home banking system.
**Integration Provider Plugin**
BankOnBox uses the concept of providers to authenticate with external systems. I developed my own provider to authenticate with BioAuth framework.
The Authentication Provider Plugin handles the requests (pre-registration, authentication, authentication verification and pre-deregistration) from the Business Web Application to the Authentication System.
The current implementation uses SOAP web services, since all of BankOnBox's communication structure uses WCF with SOAP. It consumes the SOAP web services of the Integration Service (Authentication System) using the WCF framework.
**EVALUATION**
The following types of tests were performed on the system:
- Usability tests;
- Interoperability tests;
- Security tests;
- Performance tests;
**Usability Tests**
Usability tests were performed without the BankOnBox component fully integrated (Business Web Application).
The BankOnBox webpage where the user performs the transactions was missing, as was the webpage where the bank operator sends the pre-registration and pre-deregistration requests. Those messages were sent by me using SoapUI, to simulate BankOnBox's behavior.
The tests were carried out by me with eight users (seven using Android devices, one using an iOS device).
To test the system usability, I asked the users to perform the registration, authentication and deregistration tasks by themselves using their mobile devices with the BioAuth application installed.
Three users did not know what to do when the fingerprint authorization screen appeared while registering, and I explained that it was for using the fingerprint sensor of the mobile device. After this, the users had no more doubts, and when the fingerprint authentication screen appeared during the authentication and deregistration processes, all users successfully used the fingerprint.
At the end of the test, I asked for each user's personal opinion of the experience. All users appreciated the application and found its behavior intuitive; however, all of them pointed out that the graphical user interface should be improved.
**Interoperability Tests**
Since the system was implemented following a waterfall model, it was crucial to perform interoperability tests to ensure that each module would communicate with other modules without any compatibility issues.
The following interoperability tests were performed:
- Verifying that the User Device was sending messages to the Authentication System using RESTful Web Services communication.
- Verifying that the Authentication System and the Business Web Application were successfully sending messages to each other.
The result was that the integration of all modules was easy and no errors occurred.
**Security Tests**
The security tests in the BioAuth system can be divided into three parts:
- The security of the cryptographic key storage in mobile devices;
- The security of the communication; and
- The security of the FIDO Server.
**Cryptographic Key Storage Security**
While investigating ways of testing the security of key storage on iOS and Android [6][7], I found that the only way to retrieve information from the key storage providers is by rooting (Android) or jailbreaking (iOS) the mobile device.
The only thing that could be done in the BioAuth system is to prevent the application from being installed on a rooted or jailbroken device, by adding extra code to the application. Although malicious users keep finding new ways of rooting devices, this check is still something to add to the code in the future.
**Communication Security**
The entire system communication is based on the HTTPS protocol, so the messages are encrypted.
To the extent of my knowledge after some online investigation, I could not do more than try to sniff the message content exchanged between the User Device and the Authentication System using the Wireshark tool. Wireshark is a network packet analyzer that captures network packets and displays the packet data in as much detail as possible.
I could see the HTTP messages being exchanged; however, I could not extract any useful data from them, since the data was encrypted. The verification of the message exchange between the Authentication System and the Business Web Application was performed too, and the result was the same: no useful data could be read.
However, this was only a security verification, since I only checked that the content of the messages was unreadable. I did not attack the message content to decipher the data, nor did I perform any automated test with a testing tool.
**FIDO Server Security**
To test FIDO Server security, I investigated online for an automated tool to perform automated tests. However, I could not find any free automated test tool. Even so, I performed a static analysis of the FIDO Server, by analyzing the code and functionality, based on a STRIDE threat model [8].
- **Spoofing Identity** - since the authentication is performed through biometrics, this threat is less likely to occur than in a password-based system, because counterfeiting a fingerprint is more difficult than cracking a password [4] (depending on the password). The FIDO Specification leaves the biometric implementation to the developer, but it also mentions this threat as a problem that cannot be avoided, only mitigated by the use of biometrics.
- **Tampering with Data** - communication is performed over HTTPS, so the information cannot be read directly. However, a malicious user may randomly manipulate the information in a message to create entropy in the system. To mitigate this problem, every message sent by the server contains a field named "ServerData", specified in the FIDO Specification, which is a random value created by the FIDO Server that the client cannot change and must send back exactly as it was received. This value is used to identify the communication session and to detect whether the data in the message was manipulated, since it is encrypted (HTTPS). If so, the FIDO Server discards that message.
• **Repudiation** - In this system all transactions are logged and all messages contain a signature, signed by the user private key, that only the user has access to, by using his fingerprint to unlock the private key usage. This way non-repudiation is granted in this system.
• **Information Disclosure** - In BioAuth’s system all message exchange is performed through HTTPS so all HTTP message data, even the headers, are encrypted. The attacker cannot access any privilege information about the user.
• **Denial of Service** - The FIDO Server verifies the HTTP header of every message received and discards all messages that come with an incorrect "Content-Type" HTTP header, without even processing the message (a minimal sketch of such a filter is shown after this list). This measure is not the most secure and does not completely prevent this threat, but at least it prevents some malicious messages from being accepted and slowing the system. According to [9], there are not many effective ways to prevent Denial of Service (DoS) attacks. However, some additional methods can be implemented in the future, such as deploying a firewall or third-party services that block some DoS attacks [10].
• **Elevation of Privilege** - In BioAuth system all users have the same account type, so there is no elevation of privilege threat possible since there are no special permissions for any particular users.
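For illustration only, the Content-Type filter mentioned in the Denial of Service item above could be sketched as follows (the expected media type is an assumption):

```python
ALLOWED_CONTENT_TYPE = "application/json"  # assumed media type of valid requests

def accept_message(headers: dict) -> bool:
    """Cheap first-line filter: drop requests with an unexpected Content-Type
    before doing any further processing. This is not full DoS protection."""
    return headers.get("Content-Type", "").startswith(ALLOWED_CONTENT_TYPE)
```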
**Performance Tests**
Performance tests were carried out to check how the Authentication System behaves and performs under different numbers of concurrent virtual users performing transactions over a certain period, using the SoapUI tool.
The following variables were used to simulate different user behaviors:
- Number of virtual users (2, 10, 100 and 1000);
- Test time duration (seconds) (always 60 seconds);
- Time delay between consecutive messages (milliseconds) (1000 ms and 100 ms);
The following statistics were collected:
- Total number of messages sent;
- The number of bytes processed;
- Requests per second;
- Error rate (percentage of requests that failed);
The results demonstrate that, for a constant time delay between messages, increasing the number of users increases the error rate. Also, for the same number of users, the number of errors increases when the time delay between messages is decreased from 1000 ms to 100 ms.
The system can handle an average of 6.69 requests per second without any errors, and 25.53 requests per second with an error rate of 1%.
In a real-life scenario the system hardware specification would be better than the one used to perform this test, which can result in better performance.
**CONCLUSION**
BioAuth is a framework developed for second-factor authentication, such as authorizing an online banking transaction. The authentication is performed using biometrics; at the moment only the fingerprint biometric is used.
The framework is composed of an application for mobile devices and a server. The mobile application is used to perform second-factor authentications using biometrics and was implemented using Xamarin Forms. The server was implemented using ASP.NET Web API technology.
The communication between the mobile application and the server is based on the FIDO UAF Specification.
User authentication is performed by means of asymmetric cryptographic keys. The communication is performed using REST web services over HTTPS. The BioAuth framework is integrated with BankOnBox, an online home banking system developed by the Link Consulting company.
The framework communicates with BankOnBox using SOAP web services over HTTPS. The framework was evaluated in terms of usability, performance, interoperability and security. The evaluation results were satisfactory, since no negative result was obtained.
The technical contributions of this project were:
- The use of the FIDO UAF protocol for remote authentication using biometrics;
- Using Xamarin Forms to implement the mobile application for Android and iOS;
- Using native secure APIs, from Xamarin, to implement biometric authentication and cryptography operations on Android and iOS;
- Implementation of a server that securely communicates with outside modules using REST and SOAP web services over HTTPS;
- Implementation of a server that stores the client’s cryptographic public key and verifies signatures from the client using that same cryptographic public key with BouncyCastle cryptography library;
This framework comes in an era where mobile devices are being used more than laptops or desktops.
It is a different idea that lets users rely on biometrics and on innovative technology introduced in recent smartphones, which also provides a high level of security.
Biometrics make this system different from other authentication systems, since alternative systems base authentication on passwords or on an external authentication device – authentication methods that can easily be forgotten or misplaced.
**System Limitations and Future Work**
Currently, BioAuth is only integrated with BankOnBox. In the future, BioAuth can be integrated with other business systems.
The current framework only uses fingerprint as biometric authentication mechanism. In the future the system can be improved by adding new biometric authentication mechanisms, such as face and voice recognition.
BioAuth is only used as a second-factor authentication mechanism on BankOnBox. Another improvement is to use BioAuth as a first-factor authentication mechanism, for example to perform logins. As mentioned by the users in the usability tests, the GUI of BioAuth's mobile application can also be improved in the future. To improve the security between the User Device and the Authentication System, the communication protocol can be strengthened by adding an extra layer of security, like the one used with BankOnBox (WS-Security using X.509 certificates).
**REFERENCES**
A Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization
Abraham Kiran Joseph\textsuperscript{a}, Dr. G. Radhamani\textsuperscript{b} \textsuperscript{*}
\textsuperscript{a}Research Scholar, Dr.G.R Damodaran College of Science, Affiliated to Bharathiar University, Tamilnadu. E-mail: [email protected]
\textsuperscript{b}Professor/Director - Department of Computer Science, Dr.G.R Damodaran College of Science, Affiliated to Bharathiar University
Abstract
In this paper a hybrid model called the Particle Swarm Artificial Bee Colony (PSABC) algorithm is proposed. The PSABC algorithm is a combination of Particle Swarm Optimization (PSO) and the Artificial Bee Colony (ABC) algorithm. PSABC is used in this context to optimize the fitness value of a population in the ABC algorithm using Particle Swarm Optimization. PSABC can be used to optimize the test suite, integrating both ABC and PSO for a better result compared to their individual performance in terms of test case optimization. In PSABC, the sites are comparable to nodes in the Software under Test (SUT). Relating this to the ABC algorithm, the bees here are the test cases, which adapt over time. The main objective of a bee is to find the areas with the highest coverage and highest usage, so that failures can be identified at an earlier stage. The artificial bees adapt the test cases over time, and the bees' plan is to find the areas where the nodes have higher coverage. The ABC algorithm is used to generate the optimal number of test cases that are sufficient to cover the paths. These paths are generated using the Control Flow Graph (CFG), and the PSO generates the individual test cases first. The algorithm determines the node with the highest usage by a given test case. Based on the proposed hybrid approach, an optimal result for test case execution is obtained. The performance of the proposed method is evaluated and compared with other optimization techniques such as PSO and Ant Colony Optimization (ACO).
Keywords: Search Domain based PSABC algorithm, Test cases, statement coverage and fault coverage, Optimization
1. Introduction
Software testing is one of the primary techniques used to achieve high quality in software. Software testing is a time-consuming and costly task; it uses approximately 50\% of the software system development resources [1]. Software testing can also be defined as the process of verifying and evaluating software to make sure that it meets the technical and business requirements [2].
The maintenance phase of a Software Development Life Cycle involves extensive regression testing. It is necessary to retest the existing test suite whenever any alterations are made to the software. Regression testing is the process of re-running the test cases from the test suite to ensure that the modified software is error-free. It guarantees that modifications in the software have not influenced its functional characteristics [3].
Verification is done to ensure that the software meets specification and is close to structural testing whereas validation is close to the functional testing and is done by executing the Software under Test (SUT). Broadly, testing techniques include functional (black box) and structural (white box) testing. Functional testing is based on functional requirements whereas structural testing is done on code itself. Gray box testing is a hybrid of white box testing and black box testing.
The main purpose of testing can be quality assurance, reliability estimation, validation or verification. The other objectives of software testing include [4]:
\begin{itemize}
\item The better the software works, the more efficiently it can be tested.
\item The better the software can be controlled, the more the testing can be automated and optimized.
\item The fewer the changes, the fewer the disruptions to testing.
\item A successful test is one that uncovers an undiscovered error.
\end{itemize}
* Corresponding author. E-mail: [email protected]
Optimization techniques have been used effectively in test case generation and prioritization in recent years. Although a number of optimization techniques have been proposed and good results have been obtained, problems such as complexity with dynamic data sets and high time consumption for convergence exist in the traditional optimization techniques. Thus, there is scope for improving the optimization results. This research work focuses on using appropriate optimization techniques for test case prioritization that provide optimal results.
This approach uses swarm-intelligence-based techniques for test case optimization through test case prioritization. A number of swarm intelligence approaches have been observed to produce significant results in terms of accuracy, convergence behaviour and time taken. This research uses two recent swarm intelligence approaches, Particle Swarm Optimization (PSO) and the Artificial Bee Colony (ABC) algorithm, for test case optimization [5].
1.1. Particle Swarm Optimization
The PSO algorithm is motivated by the social behaviour of a flock of migrating birds trying to reach an unknown destination. In PSO, each solution is a ‘bird’ in the flock and is known as a ‘particle’. A particle is equivalent to a chromosome (population member) in [6]. Unlike Genetic Algorithms (GAs), the evolutionary process in PSO does not generate new birds from parent ones [7]. Instead, the birds in the population merely develop their social behaviour and, as a result, their movement towards a destination. The process is initiated with a collection of N random particles (solutions). The ith particle is denoted by its position, a point in S-dimensional space, where S denotes the number of variables.
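As a minimal sketch (standard textbook PSO, not tied to the test-case encoding used later), the velocity and position update of a single particle in S-dimensional space can be written as:

```python
import random

def pso_step(position, velocity, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle.

    w is the inertia weight; c1 and c2 weight the pull towards the particle's
    personal best and the swarm's global best, respectively.
    """
    new_position, new_velocity = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_velocity.append(v_new)
        new_position.append(x + v_new)
    return new_position, new_velocity
```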
1.2. Artificial Bee Colony
In the ABC algorithm, a solution of the optimization problem is denoted by the position of a food source and the quality of the solution is represented by the nectar amount of the source. In the initial step of ABC, the locations of the food sources are generated randomly. In other words, for SN (the number of employed or onlooker bees) solutions, a randomly distributed initial population is produced. In the solution space, each solution $X_i = (x_{i1}, x_{i2}, \ldots, x_{iS})$ is a vector over its optimization parameters [8].
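As an illustration only (not the authors' implementation), the random initialization of SN food sources over S parameters can be sketched in Python as follows; the bound vectors `lower` and `upper` are assumptions.

```python
import numpy as np

def init_food_sources(sn, s, lower, upper):
    """Generate SN randomly distributed food sources, each a vector of S parameters."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + np.random.rand(sn, s) * (upper - lower)

# Example: 10 food sources over 5 parameters bounded in [0, 1]
sources = init_food_sources(10, 5, [0.0] * 5, [1.0] * 5)
```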
2. Related Work
Ant Colony Optimization (ACO) is a technique based on the real-life behaviour of ants. The authors of [3] presented the implementation of a previously introduced Ant Colony Optimization algorithm for test case selection and prioritization. This approach clearly explains the nature of ACO in identifying the possible paths and choosing the optimal solution from among those paths. Results show that ACO leads to solutions that are in close proximity to optimal solutions.
The ACO approach performs better than the Genetic Algorithm since convergence is guaranteed, although the time to convergence is uncertain. Moreover, in Non-Deterministic Polynomial-time hard (NP-hard) problems, high-quality solutions are required at a faster rate, but ACO focuses only on the quality of solutions. Test case prioritization has also been done through PSO. Basic PSO is more appropriate for static, simple optimization problems [9]. Moreover, PSO is hard to adapt to non-metric problem domains.
Bee Colony Optimization (BCO) is an emerging field for researchers. It has been applied to solve “Travelling Salesman Problem” which is a NP-Hard combinatorial problem where an optimal path is to be searched from source to destination.
Arvinder Kaur et al. [10] presented the Bee Colony Optimization (BCO) algorithm for the fault coverage of a regression test suite. In the bee colony, scout bees and forager bees are responsible for the progress and maintenance of the colony. The BCO algorithm developed for fault coverage in a regression test suite makes use of the behaviour of these two bees. The BCO algorithm is designed to attain maximum fault coverage in the minimum execution time of each test case.
The ABC algorithm is an optimization algorithm used to find an optimal solution to the problem in [3]. The algorithm works based on the honey bee foraging behaviour.
The main drawbacks of ABC are
• Slow convergence rate.
• As the random number generation in basic ABC is stochastic, certain good solutions are likely to be skipped.
Due to the drawbacks of the aforementioned optimization algorithms, an optimization algorithm that provides a better convergence rate, lower complexity, and higher accuracy is required to solve the test case prioritization problem.
3. Methodology
In this approach, a combined form of the PSO and ABC algorithms is used for optimizing the test cases.
In the proposed methodology, each test case symbolizes a food source of the bees, and the objective of this method is to find the best food source, which corresponds to the test cases with maximum coverage.
The food source position of the bees corresponds to a potential solution of the optimization problem, and the nectar amount corresponds to the fitness of the associated solution.
3.1. ABC-PSO Hybrid Algorithm (PSABC)
ABC runs until its stopping condition, the maximum number of iterations, is reached. The values at the final iteration are taken as the optimal values of the individuals. These optimal individuals produced by the ABC algorithm are given as input to the PSO algorithm, which uses them to initialize its particle positions. PSO normally generates its initial individuals at random; in this hybridization, however, the starting point of the PSO is the set of final individual values produced by the ABC. The advantages of the proposed algorithm are:
- Easy to implement.
- Broad applicability, even in complex functions, or with continuous, discrete or mixed variables.
- High flexibility, which allows adjustments and the introduction of specific knowledge of the problem by observing nature.
- Robust against initialization, regardless of feasibility and distribution of the initial solutions population.
3.2. Proposed Methodology
This research work proposes that the optimized test suite produced by the algorithm will cover all possible statements and faults in the program. ABC is applied to produce an optimal test suite by generating optimal test data with higher statement and path coverage. The test data are the inputs given to the SUT for traversing a path. At first, the program is given to the test case optimization tool, which transforms the program into an equivalent Control Flow Graph (CFG). The independent paths from the start node to the end node are produced from the CFG. Each independent path consists of a number of normal nodes and predicate nodes. Every independent path denotes a test case. The ABC algorithm is used to produce an optimal test suite by generating optimal test data that traverse the independent paths and are then mapped to test cases. The search bee is a search agent that looks for the execution state of the SUT and initializes the test cases with initial test data through equivalence partitioning and boundary value analysis. Then the search agent computes the fitness value of each test node by evaluating the coverage of that node. This is repeated until an executable state of the SUT is determined.
Then the search bee gives the fitness values of the traversed nodes and neighbouring nodes to the chosen agent [11]. The chosen bee evaluates the fitness values of the traversed nodes and the neighbouring nodes. If the fitness value of a node is greater than the neighbouring node's fitness value, the node's information is stored in the optimal test case repository; the node with the lower fitness value is discarded.
The algorithm for test case optimization using the PSABC approach is given below. Before programmatic development into an application, the algorithm is first expressed as pseudo code.
The process is initiated with a collection of random particles (solutions), N. The ith particle is denoted by its position as a point in S-dimensional space, where S denotes the number of variables.
The following is the detailed algorithm
1. Initialize the test cases, which is performed by the search bee in the algorithm, and then evaluate the test cases.
2. Initialize the current traversal path and set cycle=1.
3. Repeat the cycle:
4. Generate the initial population and select half of the bees as employed bees with PSO.
5. Evaluate the fitness value of the initialized population.
6. For each employed bee, calculate new test cases and find the fitness value of the new solution by applying a greedy selection process.
7. Calculate the probability value for the new solution.
8. Repeat the above two steps for the onlooker bees; an abandoned solution is replaced with a randomly produced new solution, which is stored.
9. Add the test case to the optimal repository.
10. In the next iteration, the scouts generate new test data.
ABC iterates until its stopping criterion is met, which corresponds to the maximum number of paths and faults covered. The optimal values of the individuals obtained from the ABC are then given to the PSO, and the PSO is initiated. During the process, each particle tracks three values, namely its current position \( x_i \), the best position it reached in previous cycles \( P_i \), and its flying velocity \( V_i \). These three values are denoted as follows:
\[
\text{Current position } \quad x_i = (x_{i1}, x_{i2}, \ldots, x_{iS}) \\
\text{Best previous position } \quad P_i = (P_{i1}, P_{i2}, \ldots, P_{iS}) \\
\text{Flying velocity } \quad V_i = (V_{i1}, V_{i2}, \ldots, V_{iS})
\]
(1)
In each time interval (cycle), the position \( P_g \) of the best particle \( g \) is determined as the position with the best fitness among all particles. Thus, each particle updates its velocity \( V_i \) to get closer to the best particle \( g \), as follows:
\[
\text{New } \quad V_i = \omega \times \text{current } V_i + c_1 \times \text{rand}() \times (P_i - x_i) + c_2 \times \text{Rand}() \times (P_g - x_i)
\]
(2)
As such, using the new velocity \( V_i \), the particle’s updated position becomes:
\[
\text{New position } \quad x_i = \text{current position } x_i + \text{New } V_i \text{ with } -V_{\text{max}} \leq V_i \leq V_{\text{max}}
\]
(3)
where \( c_1 \) and \( c_2 \) represent two positive constants named learning factors (usually \( c_1 = c_2 = 2 \)); \( \text{rand}() \) and \( \text{Rand}() \) denote two random functions in the range \([0,1]\); \( V_{\text{max}} \) is an upper limit on the maximum change of particle velocity [12]; and \( \omega \) denotes an inertia weight, employed as an enhancement proposed by Shi and Eberhart [13] to manage the influence of the previous history of velocities on the current velocity. The weight \( \omega \) balances the global search and the local search; it is set to decrease linearly with time from a value of 1.4 to 0.5 [12]. Thus, the global search starts with a large weight, which then decreases with time to favour local search over global search [14].
It is observed that the second term in equation (2) indicates cognition or the private judgment of the particle
when comparing its current position to its own best position. The third term in equation (2) denotes the social collaboration between the particles and compares a particle's current position to that of the best particle [15]. Furthermore, in order to control the change of particle velocities, the upper and lower bounds for the velocity change are limited to a user-specified value \( V_{\text{max}} \). Once the new position of a particle is computed using equation (3), the particle then flies towards it [13]. Therefore, the main parameters used in the PSO are the population size (number of birds), the number of generation cycles, the maximum change of particle velocity \( V_{\text{max}} \), and \( \omega \). In this hybridization, the starting point of the PSO is the set of final individual values produced by the ABC.
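To make equations (2) and (3) concrete, the following is a minimal Python sketch of one velocity and position update for a whole swarm. The array shapes and the helper name `pso_update` are assumptions, while the parameter values follow the text (\( c_1 = c_2 = 2 \), \( \omega \) decreasing from 1.4 to 0.5, velocity clipped to \( \pm V_{\text{max}} \)).

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w, c1=2.0, c2=2.0, v_max=1.0):
    """One PSO step over the whole swarm, following equations (2) and (3).

    x, v    : (N, S) arrays of current positions and velocities
    p_best  : (N, S) best position each particle has reached so far
    g_best  : (S,)   best position found by any particle
    w       : inertia weight (decreased linearly from 1.4 to 0.5 over the run)
    """
    r1 = np.random.rand(*x.shape)                # rand() in equation (2)
    r2 = np.random.rand(*x.shape)                # Rand() in equation (2)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_new = np.clip(v_new, -v_max, v_max)        # -V_max <= V_i <= V_max
    x_new = x + v_new                            # equation (3)
    return x_new, v_new
```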
The detailed pseudo code of the Test Case Optimization algorithm using PSABC approach is presented in the following section.
**Detailed pseudo-code of PSABC algorithm:**
1. **Initialize the test cases** which is performed by the search bee
- Search for an executable state and evaluate the test nodes
- Initialize the current traversal path as cycle=1
2. **Repeat**
- Generate the initial population $X_i,\ i = 1, 2, \ldots, N$
- Select half part of bees as employed bee with PSO
- Evaluate the fitness ($f_i, p_i$) of the population
- Set cycle to 1
3. **Repeat**
- For each employed bee Do
- Produce new solution $V_i$
- Calculate the value $f'_i$
- Apply greedy selection process
- Calculate the probability values $p_i$ for the solutions $X_i$
- For each onlooker bee
- Select a solution $X_i$ depending on $p_i$
- Produce new solution $V_i$
- Calculate the values $f'_i$ and apply the greedy selection process
- If there is an abandoned solution for the scout
- Replace it with a new solution which will be randomly produced
- Memorize the best solution so far
- Add the test case to the optimal repository
- cycle = cycle + 1
- until cycle=MCN
To implement the above algorithm, the proposed approach uses the Test Suite Optimization tool to optimize the Test Cases by employing the PSABC algorithm. The tool considers a program as an input to generate independent paths. Using the generated independent paths Test Cases are traversed along the paths with the help of PSABC algorithm. By doing so, the test cases with maximum coverage (High fitness Value) are recognized. Finally the optimal Test Suite is generated as an output.
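The following is a minimal Python sketch of the overall PSABC flow described above, in which ABC refines a random initial population and its final individuals seed the PSO. The callables `fitness`, `abc_step`, and `pso_update` are placeholders standing in for the coverage-based fitness evaluation and the ABC/PSO steps in the pseudo code (the `pso_update` from the earlier sketch can be used).

```python
import numpy as np

def psabc(fitness, abc_step, pso_update, lower, upper,
          sn=20, s=10, abc_cycles=50, pso_cycles=50):
    """Hybrid PSABC: ABC refines a random population, then PSO starts from it."""
    # Phase 1: ABC on a randomly generated population of food sources.
    pop = lower + np.random.rand(sn, s) * (upper - lower)
    for _ in range(abc_cycles):
        pop = abc_step(pop, fitness)   # employed, onlooker and scout phases

    # Phase 2: PSO initialized with the final ABC individuals.
    x, v = pop.copy(), np.zeros((sn, s))
    p_best = x.copy()
    g_best = x[np.argmax([fitness(xi) for xi in x])]
    for t in range(pso_cycles):
        w = 1.4 - (1.4 - 0.5) * t / max(pso_cycles - 1, 1)   # inertia weight
        x, v = pso_update(x, v, p_best, g_best, w)
        for i in range(sn):
            if fitness(x[i]) > fitness(p_best[i]):
                p_best[i] = x[i]
        g_best = p_best[np.argmax([fitness(pi) for pi in p_best])]
    return g_best
```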
4. Simulation Results
The experiment is implemented in MATLAB. The basic evaluation of the test case prioritization technique is to achieve the maximum number of faults and statements covered with the minimum number of test cases. In this approach, the execution time of every test case is also analyzed. The fault measurement technique used is fault-coverage-based testing. In this example, the test cases form the Test Suite ($T_S = \{T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9, T_{10}\}$) and the faults covered by those test cases are represented as Faults Covered ($FC = \{F_1, F_2, F_3, F_4, F_5\}$). Similarly, the statements covered by the test cases are denoted as Statements Covered ($SC = \{S_1, S_2, S_3, S_4, S_5, S_6, S_7\}$). The Control Flow Graph (CFG) is shown in figure 1.

This section compares the performance of the proposed PSABC approach with the other optimization approaches such as ACO, PSO and ABC, in terms of percentage of statement coverage and fault coverage.

Tables 1 and 2 show the test cases with the faults and statements covered and the corresponding execution times.
<table>
<thead>
<tr>
<th>Test Case/Faults</th>
<th>F1</th>
<th>F2</th>
<th>F3</th>
<th>F4</th>
<th>F5</th>
<th>No. of Faults Covered</th>
<th>Execution Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td>3</td>
<td>10</td>
</tr>
<tr>
<td>T2</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td>4</td>
<td>8</td>
</tr>
<tr>
<td>T3</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>2</td>
<td>11</td>
</tr>
<tr>
<td>T4</td>
<td>x</td>
<td></td>
<td>x</td>
<td>x</td>
<td></td>
<td>3</td>
<td>15</td>
</tr>
<tr>
<td>T5</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td>4</td>
<td>11</td>
</tr>
<tr>
<td>T6</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>2</td>
<td>9</td>
</tr>
<tr>
<td>T7</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td>4</td>
<td>8</td>
</tr>
<tr>
<td>T8</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>2</td>
<td>6</td>
</tr>
<tr>
<td>T9</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>2</td>
<td>5</td>
</tr>
<tr>
<td>T10</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3</td>
<td>5</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Test Case/Statements</th>
<th>S1</th>
<th>S2</th>
<th>S3</th>
<th>S4</th>
<th>S5</th>
<th>S6</th>
<th>S7</th>
<th>No. of Statements Covered</th>
<th>Execution Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3</td>
<td>9</td>
</tr>
<tr>
<td>T2</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td>x</td>
<td></td>
<td></td>
<td>4</td>
<td>6</td>
</tr>
<tr>
<td>T3</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3</td>
<td>10</td>
</tr>
<tr>
<td>T4</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>4</td>
<td>7</td>
</tr>
<tr>
<td>T5</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>5</td>
<td>11</td>
</tr>
<tr>
<td>T6</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td></td>
<td></td>
<td>2</td>
<td>6</td>
</tr>
<tr>
<td>T7</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3</td>
<td>8</td>
</tr>
<tr>
<td>T8</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>T9</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>4</td>
<td>3</td>
</tr>
<tr>
<td>T10</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td>4</td>
<td>3</td>
</tr>
</tbody>
</table>
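As an illustration only (the actual prioritization is performed by the PSABC tool), test cases such as those in Table 1 could be ranked by the number of faults covered per unit of execution time; the helper below and the dictionary layout are assumptions.

```python
def prioritize(test_cases):
    """Rank test cases by faults covered per unit of execution time, descending."""
    return sorted(test_cases, key=lambda t: t["faults"] / t["time"], reverse=True)

# A few rows taken from Table 1 (number of faults covered, execution time):
suite = [
    {"id": "T1", "faults": 3, "time": 10},
    {"id": "T2", "faults": 4, "time": 8},
    {"id": "T9", "faults": 2, "time": 5},
]
print([t["id"] for t in prioritize(suite)])   # -> ['T2', 'T9', 'T1']
```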
Figure 2 shows the comparison of the number of cycles and the statement coverage in percentage. From figure 2, it can be seen that the proposed test case prioritization approach using PSABC provides better statement coverage when compared with the ACO, ABC and PSO optimization approaches.
When the number of cycles increases, the Statement coverage also increases linearly. For instance, when the number of cycles is 14, the statement coverage of the ACO, PSO and ABC approach is 31%, 40% and 47% respectively. But, when the proposed PSABC approach is considered, the statement coverage attained is 51%. Similarly, for the other cycles, the proposed PSABC based Test case prioritization approach provides better results when compared with the other existing approaches.
Figure 3 shows the fault coverage comparison in percentage for the ACO, PSO, ABC and PSABC approaches. It can be observed from the graph that there is a significant increase in fault coverage as the number of cycles increases. The proposed approach outperforms the other approaches in terms of fault coverage. For example, when the number of cycles is 14, the fault coverage obtained by ACO, PSO and ABC is 42%, 54% and 59% respectively. When the proposed PSABC approach is considered, the fault coverage obtained is 65%, which is higher than that of the approaches taken into consideration.
Thus, it can be observed from the simulation results that the test cases are prioritized based on higher statement coverage and fault coverage using the PSABC approach.

5. Conclusion
Testing ensures that the software meets the user conditions and requirements. Effective generation and prioritization of test cases have to be addressed in the field of software testing. Factors like the effort, time and cost of testing influence these as well. A number of research works have been proposed in the literature for test case prioritization. The main aim of test case prioritization is to minimize the cost and time of regression testing. The objectives considered in this research work are statement coverage and fault coverage within a minimum execution time. This research work aims at attaining test case prioritization results using PSABC. It is observed from the experimental results that the proposed PSABC-based test case prioritization approach provides better results when compared with ACO, PSO and ABC.
References
Adoption of Blockchain to build a Cryptocurrency for Ledger Systems
Salil Abrol, Ajay Dureja, Aman Dureja
Abstract: The idea of the cryptocurrency was to decentralize the currency system by establishing transactions over a distributed peer to peer network [1]. The technology of Blockchain was adopted to achieve this motive. The term blockchain comes from the idea of a list of blocks, growing continuously over time, wherein every block carries the data relating to the transactions and data regarding the cryptographic linkage using secure hash algorithms and protocols [2]. Through this paper, we show the implementation of the blockchain technology so as to build a cryptocurrency. While building up the cryptocurrency, called ‘SantCoin’, we show how this technology can upgrade the traditional existing ledger systems so as to implement secure means of transactions over a distributed network. This implementation work suggests the use of the technology in almost every governing body so that they can secure themselves and limit the dependency on human resources to do their central authoritative work [3].
Keywords: Block, Blockchain, Cryptocurrency, Cryptocurrency linkage, Currency, Distributed network, Ledger, Peer to peer networks, Transactions, SantCoin, Server.
I. INTRODUCTION
Through the course of this paper, we will see how a cryptocurrency can be built based on the technology of Blockchain [4]. By building up the cryptocurrency through this technology, it is shown that, in a similar way, Blockchain can be employed to revolutionize tasks in almost every aspect of every sector [5], whether it be the banking sector [6], the accounting sector, the property sector, or any other business-oriented task. Through the technology used in this implementation, it is shown that there is no central authority; everything is carried out over a distributed peer to peer network and transactions are stored in a continuously growing list of records called a blockchain, which is secured by the SHA-256 secure hash algorithm, and various rules are applied so as to prevent the blockchain from being hacked. No peer acts as a central server; all have equal access to the chain, which is maintained in every peer's local memory. Go through the implementation work, and you will certainly find it helpful for using this technology in any of your domains.
The paper is organized as follows. Chapter 2 discusses the IDEs and tools used and the approach followed to build the cryptocurrency; the code to implement the cryptocurrency is also given in this chapter. Chapter 3 provides the simulation results of the Python code with screenshots of every domain of the code, and finally Chapter 4 concludes the research paper by discussing the future work that can be done based on the knowledge provided by this paper.
II. METHODOLOGY, IDES, TOOLS AND IMPLEMENTATION
A. SPYDER IDE
Spyder is a great Integrated Development Environment which is itself developed in Python for developing applications and constructs in Python. It provides an interactive environment and is a great tool for development and data exploration. It was initially released in October 2009; however, a stable release appeared only in 2019. The repository is available on GitHub at github.com/spyder
B. Use of Spyder IDE And The Supporting Tools
The Spyder IDE will be used to implement our blockchain in Python. Along with Spyder, we need to install two more tools: Flask, a framework that provides tools and libraries such as Werkzeug and Jinja used to build a web application, and Postman, which has a user-friendly UI used to deal with HTTP GET and POST requests. Flask can be installed by simply entering the following command at the command prompt:
pip install Flask==0.12.2
while the Postman is downloaded from the official website of Postman i.e. www.getpostman.com.
Fig. 1. Glimpse of Spyder IDE
On the left side is the working environment to write the Python code, and on the bottom right is the Python console which shows the result of the simulation.
We proceed in three parts: Part 1 shows the building up of the blockchain, Part 2 shows the mining of the blockchain, and Part 3 implements the most important feature of the blockchain, that is, decentralization.
C. Building up the Blockchain
Before proceeding with the blockchain code implementation, we create two json files, transaction.json and nodes.json.
Contents of transaction.json are:
```
{
"sender": "",
"receiver": "",
"amount":
}
```
Contents of nodes.json are:
```
{
"nodes": ["http://ip:port-number1",
"http://ip:port-number2",
"http://ip:port-number3"]
}
```
The nodes.json file shows that there are 3 nodes in the distributed p2p network, with port numbers port-number1, port-number2 and port-number3.
So, we start the implementation by building up the blockchain. We build the blockchain in two parts: in the first part we just build the architecture of the blockchain, and in the second part we write two functions, one to get the state of the blockchain in order to display it in Postman and another to mine the blocks.
First, we will import some needed libraries like the datetime library because each block will have a time-stamp of when it was mined.
Second, we will use the hashlib library that will be used to hash the blocks.
Third, we will import the json library, from which we will use the functions to encode the block.
Fourth, we will use the flask library's Flask class, which provides the pre-made web application itself, and the jsonify function to post messages to interact with our blockchain.
After importing the required libraries, we build the basic architecture of the blockchain. We define a class that includes all the methods and properties related to the blockchain. In this class, we include the genesis block, then we initialize a chain, which is done in the ‘init’ function, then we make a create block function to add a new block, which will also be used to mine a block later on, and then we add some tools to make sure that the blockchain is solid, i.e. a chain which cannot be hacked or broken.

The initialization is done by the ‘init’ function that we first create in the blockchain class; that function takes ‘self’ as its parameter. ‘self’ is the object of the class that is created when we build the whole class. That means ‘self’ is the instance of the class, and it can be regarded as the blockchain itself, because our class defines the architecture of the blockchain and thus its instance is a fully functional blockchain. Inside the ‘init’ function, we declare a list; this list is the chain containing the blocks, which is initially empty when no block has been mined. Within this ‘init’ method, we also make the genesis block by calling the function that creates a block, and we call this function using the object ‘self’. This function is defined in the class and takes two arguments: the first is the proof of the block and the second is the previous hash, i.e. the hash of the previous block. As we are making the genesis block, we keep the value of the second argument as 0, because there is no previous hash for the genesis block. In the ‘init’ function, we make another list to store the transactions that will take place.
The class carries the following functions (a minimal sketch of such a class is given after the list):
- A function to initialize the properties and create a genesis block.
- A function to create a block that will be used whenever we want to mine a block; this function creates a block having properties such as index, timestamp, proof, previous hash and the transactions. Finally, this function appends the created block to the blockchain.
- A function to return the index of the previous block.
- A function to define the proof of work, and to implement the SHA256 algorithm. We can make any formula to encode the hashes of the blocks and convert them to string, and then encode them to hexadecimal code and store it in a separate variable.
- A function to encode the block, which is done using the ‘dumps’ function from the json module.
- A function to check if the chain is valid or not. The checking will be done on the basis of comparison between the value of the previous hash variable and the hash of the previous block which is computed. If they match, then chain is valid, else not.
- A function to add the transactions to the block. This adds the data to the list of transactions we created in the ‘init’ function; a transaction carries the name of the sender, the name of the receiver, and the amount the sender sent to the receiver.
- A function to add the nodes to the distributed peer to peer network on which the blockchain is working.
- A function to replace the chain, if there is a situation of competing chains on the network. The chain will be preferred which will have the longest length and obviously on the basis of the decision made by the consensus protocol.
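A minimal Python sketch of such a Blockchain class is shown below. It follows the structure described above (genesis block in `__init__`, block creation, SHA-256 hashing, proof of work, chain validation, and transaction recording), but it is not the authors' exact code; the difficulty target and the proof formula are assumptions.

```python
import datetime
import hashlib
import json

class Blockchain:
    def __init__(self):
        self.chain = []
        self.transactions = []
        self.create_block(proof=1, previous_hash='0')   # genesis block

    def create_block(self, proof, previous_hash):
        block = {'index': len(self.chain) + 1,
                 'timestamp': str(datetime.datetime.now()),
                 'proof': proof,
                 'previous_hash': previous_hash,
                 'transactions': self.transactions}
        self.transactions = []              # transactions now live in the block
        self.chain.append(block)
        return block

    def get_previous_block(self):
        return self.chain[-1]

    def proof_of_work(self, previous_proof):
        new_proof = 1
        while True:
            # Any asymmetric formula can be used; here the hexadecimal digest
            # is checked against an assumed difficulty target of four zeros.
            digest = hashlib.sha256(
                str(new_proof**2 - previous_proof**2).encode()).hexdigest()
            if digest[:4] == '0000':
                return new_proof
            new_proof += 1

    def hash(self, block):
        encoded_block = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(encoded_block).hexdigest()

    def is_chain_valid(self, chain):
        for i in range(1, len(chain)):
            block, previous_block = chain[i], chain[i - 1]
            if block['previous_hash'] != self.hash(previous_block):
                return False
        return True

    def add_transaction(self, sender, receiver, amount):
        self.transactions.append({'sender': sender,
                                  'receiver': receiver,
                                  'amount': amount})
        return self.get_previous_block()['index'] + 1
```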
D. Mining up the Blockchain
To mine the blockchain, we take the following steps (a minimal endpoint sketch follows the list):
- Create a web app using the flask.
- Create the address for the node on the port.
- Create blockchain class’ instance.
- Mine a new block using a mine block function that mines the block and adds the transactions to the new block. The new block is, of course, created by calling the function defined inside the class to create a new block.
- At last we check the validity of the blockchain.
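A minimal Flask sketch of the mining and chain-display endpoints is shown below, assuming the Blockchain class sketched earlier; the route names mirror the ‘mine_block’ and ‘get_chain’ methods used later in the simulation, while the reward receiver name is a placeholder.

```python
from uuid import uuid4
from flask import Flask, jsonify

app = Flask(__name__)
node_address = str(uuid4()).replace('-', '')   # address of this node
blockchain = Blockchain()

@app.route('/mine_block', methods=['GET'])
def mine_block():
    previous_block = blockchain.get_previous_block()
    proof = blockchain.proof_of_work(previous_block['proof'])
    previous_hash = blockchain.hash(previous_block)
    # Reward transaction from the blockchain system to the mining node;
    # 'Miner' is a placeholder for the node owner's name.
    blockchain.add_transaction(sender=node_address, receiver='Miner', amount=1)
    block = blockchain.create_block(proof, previous_hash)
    return jsonify({'message': 'Congratulations, you just mined a block!',
                    **block}), 200

@app.route('/get_chain', methods=['GET'])
def get_chain():
    return jsonify({'chain': blockchain.chain,
                    'length': len(blockchain.chain)}), 200
```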
E. Decentralizing the Blockchain
Decentralizing the blockchain simply connects all the nodes in a distributed peer to peer network, deals with collisions and chain replacements if needed, and finally runs the app. Now, the only thing to do is to replicate the above Python file as many times as there are nodes in the network. If there are 3 nodes in our example, then we build 3 copies of the Python file, which are differentiated by the port number on which they run. To do this, we just need to add one small line at the end of each file:
```
# For Running the app
app.run(host = "host-id", port = "port-number1")
```
for every node, the port number will be different as:
```
app.run(host = "host-id", port = "port-number2")
app.run(host = "host-id", port = "port-number3")
```
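Connecting the peers can be sketched as follows, assuming a `nodes` set added to the Blockchain class and a '/connect_node' route that reads the body of nodes.json; the exact signatures are assumptions, and `app` and `blockchain` come from the earlier sketch.

```python
from urllib.parse import urlparse
from flask import request

# Assumed extension of the Blockchain class sketched earlier.
def add_node(self, address):
    """Register another peer, e.g. 'http://ip:port-number2'."""
    if not hasattr(self, 'nodes'):
        self.nodes = set()
    self.nodes.add(urlparse(address).netloc)

Blockchain.add_node = add_node   # attach the method to the class

@app.route('/connect_node', methods=['POST'])
def connect_node():
    json_body = request.get_json()                  # body pasted from nodes.json
    nodes = json_body.get('nodes') if json_body else None
    if nodes is None:
        return 'No nodes supplied', 400
    for node in nodes:
        blockchain.add_node(node)
    return jsonify({'message': 'All the nodes are now connected.',
                    'total_nodes': list(blockchain.nodes)}), 201
```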
### III. SIMULATION RESULTS AND DISCUSSIONS
A. SIMULATION IN POSTMAN
We use Postman to simulate the blockchain on a distributed peer to peer network. The distributed peer to peer network will generally be established on two or more different nodes or computers, but to simulate them on the same machine, we have the POSTMAN simulator, where we can transfer GET and POST requests within the nodes and exchange the transactions [8].
To start with the simulation, first we run our python files on the console. In our example, there are 3 python files, namely santcoin_node_5001.py, santcoin_node_5002.py and santcoin_node_5003.py. The following figure shows the execution of santcoin_node_5001.py on Console1/A of SPYDER IDE.

Similarly, we can add the number of consoles depending on the number of nodes in our network and run the respective python files on the respective consoles. So, we run the other two python files also.


Now, we open up the POSTMAN. The following figure shows the glimpse of POSTMAN, how it looks like.

Now, by default there is always a Genesis Block present in the blockchain, which forms the first block of the blockchain. So, in POSTMAN, there is a menu to select the type of request we want to send. In this menu we select the GET or POST request.
Also, there is an address bar, where we put the URL of the node to which we want to send the request. So, let us start by sending a GET request to node 5001, that is, our first node. We send the request by calling the method ‘get_chain’ we built in our python file. This method displays the current blockchain existing at that node.

Here, it shows that there is one block in the blockchain, which has a previous hash of 0; index is the block number, proof shows the proof of work, which is 1 for the genesis block, and the transactions tab shows the transactions stored in the particular block. As the genesis block has no transactions yet, the list is empty. Timestamp is another parameter stored in the block; it shows the time at which the block was mined.
Similarly, when we call the ‘get_chain’ method on the other nodes, that is, nodes 5002 and 5003, we get the genesis block in the blockchain.
B. CONNECTING THE NODES TO FORM DISTRIBUTED P2P NETWORK
To connect the nodes, select the POST request from the menu of POSTMAN and call the method ‘connect_node’ that we built in the python file on each of the nodes, that is, here we call it on nodes 5001, 5002 and 5003. So we connect node 5001 to 5002 and 5003, then we connect node 5002 to 5001 and 5003, and at last we connect node 5003 to 5001 and 5002. So now, we have a fully connected network, just like a fully connected graph.
We paste the contents of nodes.json file in the body section of the POSTMAN and run the requests to connect the nodes.
By this time, our distributed peer to peer network is fully ready and now it is the time to start mining blocks, play transactions and load them onto the blocks and finally concatenating the mined blocks on the blockchain maintained in the network [9].
C. MINING A BLOCK
To mine a block, we need to switch to the GET request in the POSTMAN, and call the method ‘mine_block’. The node on which the ‘mine_block’ will be called, will mine a block and will get a reward by the blockchain system to successfully mine a new block. So, to mine a block on the node 5001, switch to GET request and call the method ‘mine_block’ on this node.
The above figure shows that a block is mined successfully, and this block will have index 2, which means it will be the second block in the blockchain, as the first one was the genesis block. It then displays the message “congratulations, you just mined a block”, and shows the hash of the previous block, the proof of work, the timestamp, and one transaction: the transaction which carries the reward for the node which mined the block. Here the reward is for node 5001, which is the receiver of the reward; the sender of the reward is the blockchain system, and the reward is 1 santcoin.
Now, we see the chain status by calling the method ‘get_chain’ on the node 5001.
Fig. 16. Blockchain at Node 5001 has 2 blocks.
But if we see the chain status on the other two nodes, it shows only one block, because we have not applied the consensus yet.
Fig. 17. Blockchain at node 5002 has 1 block.
To apply the consensus, we need to call ‘replace_chain’ method on the nodes 5002 and 5003 so that they also get the updated blockchain.
Fig. 18. Blockchain at node 5003 has 1 block.
After applying the ‘replace_chain’ method, the system shows the message “The nodes had different chains so it was replaced by largest chain”, which is the fundamental operation of the consensus protocol. Now, if we call the ‘get_chain’ method on node 5002, it shows the blockchain as it is on node 5001, which means that consensus is now achieved between nodes 5001 and 5002.
Fig. 19. Applying consensus at node 5002.
Fig. 20. Consensus achieved between node 5001 and 5002.
Similarly, we bring the consensus at node 5003, by calling ‘replace_chain’ and then we will see the blockchain with two blocks at node 5003 also.
Now, all the nodes in the distributed p2p network carry the same blockchain. So, now we can start transactions between the nodes [10]. These transactions happen and get recorded on the peers themselves, and whenever a new block is mined by any of the peers, these transactions, together with a new reward transaction for the peer which mined the block, are added to that new block; the consensus protocol then helps to achieve consensus between all the nodes in the network, ensuring that all of them have the same copy of the blockchain.
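A minimal sketch of the longest-chain consensus described here is shown below; the method name follows the text's ‘replace_chain’, and the shape of each node's ‘/get_chain’ response is assumed to match the listings shown earlier.

```python
import requests

def replace_chain(self):
    """Adopt the longest valid chain found among the connected peers."""
    longest_chain, max_length = None, len(self.chain)
    for node in getattr(self, 'nodes', set()):
        response = requests.get(f'http://{node}/get_chain')
        if response.status_code == 200:
            length, chain = response.json()['length'], response.json()['chain']
            if length > max_length and self.is_chain_valid(chain):
                max_length, longest_chain = length, chain
    if longest_chain is not None:
        self.chain = longest_chain
        return True
    return False

Blockchain.replace_chain = replace_chain

@app.route('/replace_chain', methods=['GET'])
def replace_chain_route():
    replaced = blockchain.replace_chain()
    message = ('The nodes had different chains so it was replaced by the largest chain.'
               if replaced else 'All good. The chain is already the largest one.')
    return jsonify({'message': message, 'chain': blockchain.chain}), 200
```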
D. CARRYING OUT TRANSACTIONS BETWEEN THE NODES
To carry out the transactions between the nodes, we will use the transaction.json file, in which we built up the template for the transaction. It has a field ‘sender’, which will carry the public key of the sender peer, i.e. the peer which will send the santcoins. It has a field ‘receiver’, which will carry the public key of the peer which is going to receive the santcoins and also it has the field ‘amount’ which will carry the number of santcoins that the sender will send to the receiver.
So, we copy and paste the transaction.json file’s contents in the body of the postman work area and simultaneously fill the fields sender, receiver and amount, and then send the POST request to the method ‘add_transaction’.
On calling the ‘add_transaction’ method, the transaction is recorded at the peer 5001, and whenever a new block will be mined, the transaction will be added to the ‘transactions’ part of that block. Let us check it by mining a block by calling the ‘mine_block’ method at node 5001.
As we can see, two transactions are added to block 3: the first is the transaction of 5000 santcoins that we sent from node 5001 to 5002, and the second is the reward transaction given by the blockchain system to node 5001 for successfully mining the block. As there are now 3 blocks at node 5001, the consensus protocol takes care of achieving consensus by calling ‘replace_chain’ on the other nodes of the network so that they also carry the same blockchain as node 5001.
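For completeness, a minimal ‘/add_transaction’ route matching the transaction.json template might look as follows; this is an assumed sketch consistent with the description above, not the authors' exact code, and it relies on `app`, `request` and `blockchain` from the earlier sketches.

```python
@app.route('/add_transaction', methods=['POST'])
def add_transaction():
    json_body = request.get_json()        # body pasted from transaction.json
    required = ['sender', 'receiver', 'amount']
    if not json_body or not all(key in json_body for key in required):
        return 'Some elements of the transaction are missing', 400
    index = blockchain.add_transaction(json_body['sender'],
                                       json_body['receiver'],
                                       json_body['amount'])
    return jsonify({'message': f'This transaction will be added to Block {index}'}), 201
```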
Fig. 26. Achieving consensus at node 5002.
Similarly, ‘replace_chain’ will help to achieve consensus at node 5003, and the blockchain at all the 3 nodes in the network will be as follows:
```json
{
"chain": [
{
"index": 1,
"previous_hash": "0",
"proof": 1,
"transactions": []
},
{
"index": 2,
"previous_hash": "7460ac46ea33decf42a68bbc3e74ddd12e51a07da9bfe0cc37edcf9e5d94c4",
"proof": 533,
"transactions": [
{
"amount": 1,
"receiver": "Hadelin",
"sender": "Hadlein"
}
]
},
{
"index": 3,
"previous_hash": "af32442afeb7a15f1bb67ba6c3259b4cef13b3c0af92354509956a3ce25bc4a",
"proof": 45293,
"timestamp": "2019-05-24 00:14:39.616839",
"transactions": [
{
"amount": 5000,
"receiver": "Kirill",
"sender": "Hadlein"
},
{
"amount": 1,
"receiver": "Hadelin",
"sender": "d9bacc8d9a149ac9696ef56051a1797"
}
]
}
],
"length": 3
}
```
Now, let us add some more transactions; first we send 10000 santcoins from node 5003 to 5002.
Fig. 27. Transaction will be added to block 4.
Now, we send 33 santcoins from node 5002 to 5003. As no block has been mined after the previous transaction, there are now two transactions waiting at the respective peers to be added to the block whenever a new block is mined.
Fig. 28. Transaction will be added to block 4.
Now, let us mine a block by calling ‘mine_block’ on node 5003, so the two waiting transactions will be added to block 4 and one more reward transaction from the blockchain system to node 5003 will also be added. After this, node 5003 will have a blockchain with 4 blocks and the other two nodes will have blockchains with 3 blocks, so the consensus protocol will come into play and call ‘replace_chain’ to update the largest chain on all the nodes of the network.
So, the chain will be:
```json
{
"chain": [
{
"index": 1,
"previous_hash": "0",
"proof": 1,
"transactions": []
},
{
"index": 2,
"previous_hash": "7460ac46ea33decf42a68bbc3e74ddd12e51a07da9bfe0cc37edcf9e5d94c4",
"proof": 533,
"transactions": [
{
"amount": 1,
"receiver": "Hadelin",
"sender": "d9bacc8d9a149ac9696ef56051a1797"
}
]
},
{
"index": 3,
"previous_hash": "af32442afeb7a15f1bb67ba6c3259b4cef13b3c0af92354509956a3ce25bc4a",
"proof": 45293,
"timestamp": "2019-05-24 00:14:39.616839",
"transactions": [
{
"amount": 5000,
"receiver": "Kirill",
"sender": "Hadlein"
},
{
"amount": 1,
"receiver": "Hadelin",
"sender": "d9bacc8d9a149ac9696ef56051a1797"
}
]
},
{
"index": 4,
"previous_hash": "c1698495e943f91a0c9437a3ea880d04aa0a373d",
"proof": 100473,
"timestamp": "2019-05-24 00:29:48.171658",
"transactions": [
{
"amount": 33,
"receiver": "Hadelin",
"sender": "Hadlein"
}
]
}
],
"length": 4
}
```
IV. CONCLUSION AND FUTURE WORK
Blockchain technology, as we saw, is a very intelligent method to address one of the world's largest problems, namely data security. The term blockchain was originally limited to cryptocurrencies, but it is now a universal method that can be applied to various projects and sectors to implement security within the whole sector. Being an emerging technology, it will occupy our surroundings in every way, thereby providing us a secure digital world. This paper presented an understanding of blockchain technology and concluded by showing a practical implementation of this technology to build a new cryptocurrency based on Blockchain, which can also be adopted as a very efficient method to implement banking systems, ledger systems, accounting systems, payment wallets, and credit and debit management in a totally decentralized manner without any central governing body.
REFERENCES
AUTHORS PROFILE
Salil Abrol pursued Bachelor of technology from PDM College of Engineering, MDU University in year 2017. He pursued his Master of Technology from PDM University in year 2019. He is currently working as Assistant Professor in PDM University.
Ajay Dureja pursuing Ph.D. from DCRUST, Murthal, Sonepat. He pursued Master of Technology from PDM College of Engineering, MDU University in year 2010. He pursued Bachelor of Technology from Bhuvan Institute of Technology & Sciences, MDU in year 2007. He is currently working as Assistant Professor in Department of Computer Science & Engineering, PDM University since 2010. He has published more than 20 research papers in reputed international journals including Scopus Indexed and conferences including IEEE and it’s also available online. His main research work focused on Internet of Vehicles, MANET and Image Processing. He has 9 years of teaching experience and 4 years of Research Experience.
Aman Dureja pursuing Ph.D. from GGSIPU, Dwarka, New Delhi. He pursued Master of Technology from PDM College of Engineering, MDU University in year 2010. He pursued Bachelor of Technology from Bhuvan Institute of Technology & Sciences, MDU in year 2007. He is currently working as Assistant Professor in Department of Computer Science & Engineering, PDM University since 2010. He has published more than 20 research papers in reputed international journals including Scopus Indexed and conferences including IEEE and it’s also available online. His main research work focused on Machine Learning & Deep Learning. He has 9 years of teaching experience and 5 years of Research Experience.
Properties of Associations between Objects in Object-Oriented Software Package Style
A. Ramalakshmi
Ph. D. Scholar
Department of Computer Science
Manonmanium Sundaranar University
Tirunelveli
ABSTRACT
One of the modern paradigms for developing a system is object-oriented analysis and design. In this paradigm, there are many objects and every object plays specific roles. After identifying the objects, the various relationships among them must be identified. This paper presents a literature review of the relationships among objects. Mainly, the relationships are of three basic kinds: generalization/specialization, aggregation, and association. This paper presents five taxonomies for the properties of these relationships. The first taxonomy is based on a temporal view. The second is based on structure and the third on behaviour. The fourth taxonomy is specified from a mathematical view and the fifth relates to the interface. Additionally, the properties of the relationships are evaluated in a case study and several recommendations are proposed.
Keywords: Taxonomy, Class, Object, Relationship, Object-Oriented, Software Engineering.
I. INTRODUCTION
The modern paradigm for developing software is Object-Oriented (OO). In this paradigm, we describe our world using object classes (classes) or object types (pure abstract classes or Java interfaces) (see [12], [13] and [26]). Every class/object plays a specific role in the software. These roles are programmed in Object-Oriented languages like C++ and Java. Several attributes (data variables) and services (operations/functions/methods) are assigned to these classes. We then model the behaviour of the world as a sequence of messages sent between the various objects. In OO models, a number of relationships (inheritance, association, and aggregation; see [22], [3], [20], [23] and [26]) are identified between the classes/objects. Moreover, there are several popular modelling processes and guidelines such as GRASP [28] and ICONIX [27] for assigning responsibility to classes and objects in object-oriented design. In recent years, few researchers have targeted object-oriented software engineering. Fakes et al. (2012) describe a method and a tool designed to perform exactly the extract class refactoring [11]. The method involves three steps: (a) recognition of extract class opportunities, (b) ranking of the opportunities in terms of improvement, to anticipate which of them should be applied to the system design, and (c) fully automated application of the refactoring chosen by the developer. Biota et al. (2014) propose an approach for automating the extract class refactoring [1]. This approach analyses structural and semantic relationships between the methods in each class to identify chains of strongly connected methods. The identified method chains are used to define new classes with higher cohesion than the original class, while preserving the coupling between the new classes and the classes interacting with the original class. The first step in building an OO model is to find the objects. In this step, we are not really finding objects; in fact, we are literally finding classes and types (analysis concepts) that may be implemented using classes and pure abstract classes. The result of problem analysis is a model that: (a) organizes the data into objects and classes, and gives the data a structure via relationships of inheritance, aggregation, and association; (b) specifies local functional behaviours and defines their external interfaces; (c) captures control or global behaviour; and (d) captures constraints (limits and rules). In the real world, no object can be independent of all other objects, like an island. Objects typically depend on other objects for services and possibly for error handling, constant data, and exception handling. Relationships capture the interdependencies between objects and provide the means by which objects know about one another. In object orientation, each service request (function call) must be sent to a specific object, whereas in procedural languages a function may be called
directly. For example, in order for object A to send a message to object B, object A requires a handle to object B (in C++, a reference or pointer). Accessing another object's services may be performed in the following ways (see [7], [9], [10], [11], [14] and [29]): the calling object, which has a handle, passes the handle of the other object as one of the arguments of the function (message) signature; the called object has a relationship (aggregation or link) to the other object; the required service belongs to an 'ancestor' class (ancestor means a super-class); or a static class function is accessed, which can be thought of as a managed global function. The main motivation of this paper is to survey the relationships among objects and build five taxonomies for their properties. The structure of the remaining sections is as follows. In Section 2, the literature review and the main relationships among objects are described. In Section 3, the taxonomies are specified. In Section 4, practical experience and guidelines are presented. Finally, Section 5 is devoted to conclusions and future work.
II. LITERATURE REVIEW
In the literature ([2], [3], [4], [5], [6], [8], [16], [26], [23] and [20]), we found three basic relationships among classes/objects: generalization/specialization (inheritance), aggregation and association. These are by no means new ideas, and most professionals work with them every day in modelling. GENERALIZATION/SPECIALIZATION:
We all learned generalization/specialization when learning taxonomies in biology class. This is a relationship between classes rather than objects. Generalization/specialization is an 'is a type of' relationship between classes. As an example, consider two classes: ‘Person’ and ‘Student’. A Student 'is-a' Person. Thus, the attributes of a Person are also attributes of a Student. In this relationship, attributes, relationships, services, and methods are inherited from the generalization (super-class) by the specialization (subclass).
AGGREGATION:
This is a relationship in which one object is made from other objects, e.g. a car and its engine. Aggregation captures the whole-parts relationship between objects. In contrast to generalization/specialization, there is no inheritance between objects taking part in an aggregation. The main benefit of aggregations is that they reduce complexity by allowing software engineers to treat several objects as one object. Association: this is a relationship by which an object knows about another one. A good example of an association (link) is marriage. Moreover, links in the form of associations have been widely used for years in the data modelling community. These relationships and their identification are described in the following subsections.
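As an illustration (not taken from the paper), the three relationships can be sketched in Python roughly as follows, using the Person/Student, car/engine, and marriage examples from the text; the class and attribute names are assumptions.

```python
class Person:                      # generalization (super-class)
    def __init__(self, name):
        self.name = name

class Student(Person):             # specialization: a Student "is-a" Person
    def __init__(self, name, student_id):
        super().__init__(name)
        self.student_id = student_id   # attributes of Person are inherited

class Engine:
    pass

class Car:                         # aggregation: a Car is made of an Engine
    def __init__(self, engine: Engine):
        self.engine = engine       # whole-part relationship, no inheritance

class Marriage:                    # association: each spouse knows the other
    def __init__(self, spouse_a: Person, spouse_b: Person):
        self.spouses = (spouse_a, spouse_b)
```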
2. 1 GENERALIZATION/SPECIALIZATION
To identify a generalization/specialization relationship, software engineers should perform the 'IS_A' test between pairs of objects after identifying the objects. In fact, software engineers ask the questions: (a) Is object A an object B? (b) Is object B an object A? Note that we are really asking if an object of class A is an object of class B. The allowed answers to these questions are: (a) 'always'; (b) 'sometimes' and (c) 'never'. Based on the answers, software engineers make interpretations according to the data given in Table-1. More details on the matter, together with some examples, are given in [16].
<table>
<thead>
<tr>
<th>Is B an A?</th>
<th>Is A a B?</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Always</td>
<td>Always</td>
<td></td>
</tr>
<tr>
<td>Sometimes</td>
<td>Always</td>
<td></td>
</tr>
<tr>
<td>Always</td>
<td>Sometimes</td>
<td></td>
</tr>
</tbody>
</table>
Table-1: The interpretation of the results in IS_A Test
2.2 AGGREGATION
Unfortunately, most software engineers have difficulty applying this relationship properly in practice, because the object-oriented paradigm has not defined the aggregation mechanism very well. The latest literature on this subject argues that this is because aggregation, itself, is an 'ancestor' construct. It is our belief that software engineers should use the 'descendant' concepts (more specialized ones) to be able to use this mechanism effectively. These descendant concepts, or different kinds of aggregation, capture additional properties that help software engineers manage complexity effectively. From a theoretical perspective, linguists, logicians and psychologists have studied the nature of relationships. One relationship that has been studied reasonably well is the relationship between the parts of things and the wholes that they make up. In a joint paper, Morton Winston, Roger Chaffin and Douglas Herrmann discussed this whole-parts relationship [25] and described several kinds of aggregation. Their paper identified six kinds of aggregation; Lee and Tepfenhart (2005) added a seventh [16]; we add an eighth: (a) Assembly-Parts; (b) Component-Integral Composition; (c) Material-Object Composition; (d) Portion-Object Composition; (e) Place-Area Composition; (f) Collection-Members Composition; (g) Container-Content (Member-Bunch) Composition; (h) Member-Partnership Composition; and (i) Compound-Elements Composition. Assembly-Parts (Component-Integral) Composition: in this aggregation, the whole is comprised of parts that maintain their identity even when they are part of the whole. The parts have a specific functional or structural role with respect to one another. To identify this aggregation, software engineers should look for keywords such as 'is partly' and 'is made from'. For example, bread is made from flour, a table is made of wood, and a car is made of materials such as iron, plastic and glass.
Portion-Object Composition:
This aggregation defines a homogeneous configuration of parts within the whole. Usually, portions of the objects can be divided using standard measures such as inches, litres, hours and so on. The portion-object composition supports the arithmetic operations +, -, ×, /. To identify this type of relationship, software engineers should look for keywords such as 'portion of', 'slice of', 'helping of', 'segment of', 'lump of' and similar phrases. For example, a second is part of a day and a metre is part of a kilometre.
Place-Area Composition:
This aggregation defines a homogeneous and invariant configuration of parts within a whole. It is commonly used to establish links between places and particular locations within them. When looking for this aggregation, first look for a portion-object composition and then ask whether the relationship is invariant. For example, Colchester is part of the UK and a room is part of a hotel.
Collection-Members Composition:
This aggregation is a specialized version of the Place-Area Composition. In addition to being a homogeneous and invariant configuration of parts within a whole, there is an implied order to its members. When looking for this aggregation, look for a place-area aggregation and then check whether there is an implied order. Examples of Collection-Members Composition are an airline reservation with its various flight segments and a monthly timesheet with its daily timesheets.
Container-Content (Member-Bunch) Composition:
This aggregation defines a collection of parts as a whole. The only constraint is that there is a spatial, temporal or social connection for deciding when a member is part of the collection. This aggregation tends to be an enclosure (contents without classification) for aggregation-type relationships. For example, consider a box with the contents of the box and a bag with the contents of the bag.
Member-Partnership Composition:
In this aggregation, the parts bear neither a functional nor a structural relationship to each other or to the whole. The contents are neither homogeneous nor invariant. For example, we can consider a union and its members, or a company and its staff. This is an invariant form of the container-content aggregation: members in this relationship cannot be removed without destroying the aggregation.
Compound-Elements Composition:
In this aggregation, the parts bear neither a functional nor a structural relationship to each other or to the whole. The contents are homogeneous and variant. The parts are haphazardly (incidentally) arranged within the whole. For example, we can consider a party and the people in a society. Note that an object can be viewed as taking part in more than one aggregation. For example, we can consider bread as a combination of slices (Portion-Object) and bread as made of flour and egg (Material-Object).
2.3 ASSOCIATION
An association is a relationship that allows an object to know about another one. This relationship is considered to be bidirectional, as a link through which an object can traverse in either direction. An association can have attributes and services. The best source for initially identifying and specifying associations and aggregations is the requirements documents. Links, like services, are usually seen as verbs, for example 'which it gets from', 'keeps track of', 'changes with' and 'depends upon'. The sequence diagrams and behaviour specification documents also help to find the links. When software engineers are distinguishing between association and aggregation, several points must be considered: (a) an aggregate may not connect an object to itself (e.g. 'supervises' is between two instances); (b) multiple connections between objects are possible (e.g. a worker doing several tasks); (c) self-associations are possible and common (e.g. a 'relative' association on Student); and (d) a multiple association does not imply that the same two objects are connected twice.
III. TAXONOMIES
One of the main gaps and research needs is to have a description and taxonomy of the properties of relationships among classes/objects in object-oriented software development. According to Merriam-Webster [18], taxonomy is the study of the general principles of scientific classification, and especially the orderly classification of things according to their presumed natural relationships. The main differences between the properties of relationships among objects depend, in general, on the temporal, structural, behavioural and interface views, and in particular on the mathematical view. There are, therefore, five taxonomies to categorize the properties of the relationships among objects in object-oriented development. These taxonomies are described in the following subsections.
3.1 THE FIRST TAXONOMY: PROPERTIES ON TEMPORAL
The first taxonomy for properties of the relationships among objects is concerned with how aggregation dependencies vary over time. There are two properties of the relationship in this taxonomy. Static: in this property, the parts in a whole are fixed and cannot be changed over time. Among the aggregations described in Section 2.2, Assembly-Parts (Component-Integral) Composition, Material-Object Composition and Portion-Object Composition are in this category. For example, a phone is assembled from its parts and windows are parts of a house. Dynamic: in this property, the parts in a whole may vary over time. Among the aggregations identified in Section 2.2, Material-Object Composition, Place-Area Composition, Collection-Members Composition, Container-Content (Member-Bunch) Composition and Member-Partnership Composition are dynamic.
3.2 THE SECOND TAXONOMY: PROPERTIES ON STRUCTURE
The second taxonomy is based on the question of whether or not the relationships bear a specific functional or structural role among classes/objects. In the generalization/specialization relationship, this taxonomy is related to the following properties. Attributes: the descendant will have all of the attributes of the ascendant. For example, consider the Employee class that inherits from the Person class in a general payment system; the Employee has the age attribute because it is a descendant class of Person. Links: the descendant will have all of the non-generalization links of the ascendant. For example, if we add a marriage link between two persons, 'Student' will have a marriage link because it is a descendant of 'Person'. In the aggregation relationship, we can categorize the properties of the relationships according to the combination of the following facets. Configuration: in this facet, we must determine whether or not the parts bear a specific functional or structural relationship. Homogeneous: in this facet, we determine whether or not the parts are of the same kind of thing within the whole. Invariance: in this facet, the kind of relationship is determined by the basic property of whether or not the parts can be separated from the whole. Table-2 shows the kinds of aggregation identified in Section 2.2 according to the properties on the structure view.
Table-2: Different combinations of properties in the aggregation relationship.
<table>
<thead>
<tr>
<th>Type of Aggregation</th>
<th>Configuration</th>
<th>Homogenous</th>
<th>Invariance</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Assembly-Parts (Component-Integral) Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>Windows are parts of a house</td>
</tr>
<tr>
<td>Material-Object Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A car is made of materials such as iron, plastic and glass</td>
</tr>
<tr>
<td>Portion-Object Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A second is part of a day</td>
</tr>
<tr>
<td>Place-Area Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A room is part of a hotel</td>
</tr>
<tr>
<td>Collection-Members Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>Monthly timesheet and daily timesheets</td>
</tr>
<tr>
<td>Container-Content (Member-Bunch) Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A box and the contents of the box</td>
</tr>
<tr>
<td>Member-Partnership Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A union and its members</td>
</tr>
<tr>
<td>Compound-Elements Composition</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>A party and several people</td>
</tr>
</tbody>
</table>
3.3 THE THIRD TAXONOMY: PROPERTIES ON BEHAVIOR
The third taxonomy for properties of the relationships is based on how the behaviour of classes/objects depends on others. In the generalization/specialization relationship, we have two kinds of properties. Generalization without polymorphism ('good child'): all methods supplied by the ascendant for its services are also used by the descendant to provide the corresponding services. Generalization with polymorphism ('bad child'): some methods provided by the ascendant for its services are used by the descendant; however, the descendant may provide its own customized methods that replace the corresponding ones.
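The 'good child'/'bad child' distinction can be illustrated with a small Java sketch (the account classes are illustrative only): the good child reuses the ascendant's methods unchanged, whereas the bad child overrides one of them with its own customized method.

```java
class Account {
    protected double balance;

    // Service supplied by the ascendant.
    void deposit(double amount) {
        balance += amount;
    }

    double balance() { return balance; }
}

// Generalization without polymorphism ("good child"): every inherited
// method is used as-is to provide the corresponding service.
class SavingsAccount extends Account {
    // deposit() and balance() are inherited unchanged.
}

// Generalization with polymorphism ("bad child"): the descendant replaces
// an inherited method with its own customized version.
class FeeChargingAccount extends Account {
    @Override
    void deposit(double amount) {
        balance += amount - 1.0;   // overriding method applies a fee
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Account good = new SavingsAccount();
        Account bad = new FeeChargingAccount();
        good.deposit(100);
        bad.deposit(100);
        System.out.println(good.balance());  // 100.0 -- ascendant behaviour reused
        System.out.println(bad.balance());   // 99.0  -- ascendant behaviour replaced
    }
}
```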
3.4 THE FOURTH TAXONOMY: PROPERTIES ON MATHEMATICAL
The fourth taxonomy for properties of relationships is based on a mathematical view. In the generalization/specialization relationship, we have the following two properties between classes. Anti-symmetry: if class A is a descendant of class B, then class B cannot be a descendant of class A; e.g. an 'Employee' is a person, but not all persons are employees. Transitivity: if class C is a descendant of class B and class B is a descendant of class A, then class C is a descendant of class A; e.g. if we add the fact that a 'Salesperson' is an 'Employee', then 'Salesperson' is also a 'Person' and, moreover, it also has the age attribute. In the aggregation relationship, we have the following two properties of the relationship between objects. Anti-symmetry: if an object A is a part of an object B, then object B cannot be a part of object A. Transitivity: if an object C is a part of an object B and object B is a part of an object A, then C is a part of A. Note that transitivity holds only for aggregations of the same kind. For example, we can consider: (a) a microwave is a part of a room (Component-Integral) and (b) a room is a part of a house (Place-Area), but the microwave is not a part of the house.
3.5 THE FIFTH TAXONOMY: PROPERTIES ON INTERFACE
The fifth taxonomy for properties of the relationships is related to the services provided by an object for others. From this view, in the generalization/specialization relationship the descendant must also provide all services provided by the ascendant. For example, in a Personnel Management System, if the 'Person' object has a 'Get_Degree' service, then 'Student' will have a 'Get_Degree' service because 'Student' is a descendant of 'Person'. In the association relationship, a link may be binary (between two objects), ternary (among three objects), or of higher order. In practice, it is rare to find links with a semantic meaning that ties together objects of three different object types (classes) [16]. A good example of a binary association would be a link between 'Student' and 'Course'. By extending this relationship, we can have a ternary relationship among the 'Student', 'Software' and 'Course' objects, which captures the fact that students use various software tools for different courses.
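One common way to realize such a ternary association in code is to reify the link as a class of its own, which can then also carry link attributes. The sketch below is an illustrative Java rendering of the Student-Software-Course example, not an implementation from the paper:

```java
import java.util.ArrayList;
import java.util.List;

class Student  { final String name;  Student(String name)  { this.name = name; } }
class Course   { final String title; Course(String title)  { this.title = title; } }
class Software { final String tool;  Software(String tool) { this.tool = tool; } }

// The ternary link "student uses software for a course" is reified as a class,
// so it could also carry its own attributes (e.g. licence, term) if needed.
class ToolUsage {
    final Student student;
    final Software software;
    final Course course;

    ToolUsage(Student student, Software software, Course course) {
        this.student = student;
        this.software = software;
        this.course = course;
    }

    @Override
    public String toString() {
        return student.name + " uses " + software.tool + " for " + course.title;
    }
}

public class TernaryAssociationDemo {
    public static void main(String[] args) {
        List<ToolUsage> usages = new ArrayList<>();
        usages.add(new ToolUsage(new Student("Ada"), new Software("Eclipse"), new Course("OO Design")));
        usages.forEach(System.out::println);
    }
}
```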
IV. PRACTICAL EXPERIENCE AND GUIDELINES
In order to evaluate the relationships and their properties in practice, we used a Control Command Police System (CCPS) whose mini-requirements are briefly described in [23]. We expanded this system and used it in our study because of its potential for reuse in both application and system software. This police service system must respond as quickly as possible to reported incidents, and its objectives are to ensure that incidents are logged and routed to the most appropriate police vehicle. The most important factors that must be considered when choosing a vehicle for an incident include: Type of incident: some important and worsening events need an immediate response; it is recommended that specified categories of response actions are assigned to a particular type of incident. Location of available vehicles: generally, the best strategy is to send the nearest vehicle to handle the incident; keep in mind that it is infeasible to know the exact position of the vehicles, and the system may have to send a message to the car to determine its current location. Type of available vehicles: some incidents need particular vehicles, and some special incidents, such as traffic accidents, may need cars and vehicles with specific equipment. Location of incident: in some areas, sending only one vehicle in response is enough; in other areas, more than one vehicle may be needed to respond to the same type of accident. Other emergency services, such as the fire brigade and ambulance: the system must automatically alert these services when they are needed. Reporting details: the system should record the details of every incident and make them available for any further information required. The Use Case Diagram and Activity Diagram of this system are depicted in Fig. 1 and Fig. 2, respectively. We implemented this system with the Microsoft Foundation Classes (MFC) as the application framework for MS Windows (see [17], [19], [21], [27] and [28] for implementation guidelines). The Class Diagram of this system is depicted in Fig. 3. In this class diagram there are many classes; the main classes are 'Incident', 'Police Staff', 'Police Vehicle', 'Police Officer', 'Director', 'Route Manager', 'Incident Waiting List', 'Response' and 'GPS Receiver'.
Fig. 1: The Use Case Diagram of the Control Command Police System
Fig. 2: The Activity Diagram of the Control Command Police System
According to our experience, one of the most difficult tasks in building an object-oriented model is to determine whether a potential relationship is better captured as an argument in the signature of a service (function), or as a link, aggregation or generalization/specialization. The following guidelines were obtained from our experience:
Guideline-1: A relationship must capture some concept that applies to the problem domain or some sub-domain that is required for implementation. In other words, there must be a semantic meaning to the relationship. A service (see the interface view in Section 3.5) should only traverse the relationship when its usage is consistent with that semantic meaning. For example, consider the link between 'Specialized Vehicle' and 'Police Vehicle' (see Fig. 3). Today, with some MI, it is possible for a 'Specialized Vehicle' to work for a 'Police Vehicle'. It would be improper and poor modelling to use that link relationship to reach the work-domain services of the other vehicle; a second link (Security Service) must be established to capture this different semantic relationship.
Guideline-2: When the relationship is 'permanent' (the static property in the first taxonomy in Section 3.1), software engineers must take care with this term. If software engineers consider a scenario as the unit of time (e.g. across an incident in our experience), then permanent means that the relationship must be identifiable across scenarios. Basically, if it has to be kept in memory for use by another independent process, such as the control process between 'Dispatcher' and 'Police Officer', then it is permanent.
Guideline-3: In every aggregation, software engineers must check that all of the parts are in the same domain and provide the same functional or structural configuration to the whole. Apply the transitivity and anti-symmetry property tests (see the properties in Section 3.4) to check for consistency. Note that transitivity is possible only with aggregations of the same kind. It is very common for novices to mix parts of different kinds of aggregation in one aggregation; this causes the transitivity test to fail. When this happens, software engineers probably need to look at the parts to check whether there are different kinds of aggregates. For example, consider a room that has the following parts: computers, monitors, printers, chairs, windows, floors, ceilings and walls. If we place all of these parts into one aggregation, we have mixed parts from two different semantic aggregations. The computers, monitors and printers define a functional configuration of the building, whereas the windows, floors (meaning the physical floor), ceilings and walls define a structural configuration of the building. These parts must be captured in two different aggregations, as they have different semantics.
Guideline-4: No aggregation connects two objects of the same kind to each other; this would violate the anti-symmetry property of aggregation. For example, in our experience, a 'Dispatcher' may not be an aggregate of 'Police Officer'.
Guideline-5: An association may connect two objects of the same kind. For example, the relation between the 'Reporter' and 'Reporter UI' in the Control Command Police System is valid (see Fig. 3).
Guideline-6: Aggregation is often confused with topological inclusion. Topological inclusion is a relationship between a container, area or temporal duration and that which is contained in it.
Suppose, in the Control Command Police System:
(1) the 'Dispatcher' is in the room,
(2) the 'Incident' is in the evening, and
(3) the 'Incident' is in Colchester and the county.
In each case, the container surrounds the subject, but the subject is not part of the container in any meaningful semantic domain. For example, the 'Dispatcher' is not part of the room, nor is the 'Incident' part of the evening. Moreover, the 'Incident' is not part of Colchester or the county.
Guideline-7: The attributes of an object may sometimes be confused with aggregation. Attributes describe the object as a whole, like a black-box approach, whereas aggregation describes the parts that make up the whole, like a white-box approach. In our experience with the Control Command Police System (see Fig. 3), the 'Route Planner' has attributes such as 'Incident Node' and 'Vehicle Node'.
Guideline-8: Attachment of one object to another object does not guarantee aggregation. Certainly the 'GPS Receiver' is attached to the 'Police Vehicle' and they are both part of the system; however, a 'Vehicle Radio' or 'Vehicle Stereo' is attached to the vehicle but is not part of the vehicle. Note that the 'GPS Receiver' provides functional support to the 'Police Vehicle', whereas the 'Vehicle Radio' or 'Vehicle Stereo' does not provide any functional or structural support in our case study.
Guideline-9: Ownership may sometimes be confused with aggregation. Certainly a 'Police Vehicle' has a number, and the 'GPS Receiver' is part of the 'Police Vehicle'. However, the fact that a 'Dispatcher' has a vehicle does not imply that the 'Police Vehicle' is part of the 'Dispatcher'. Thus, ownership should be captured by a link.
Guideline-10: Multiple associations among objects are possible, where each association should be used to capture a different semantic meaning. For example, the 'Alarm' and 'Call Taker' have multiple links in our experience (see Fig. 3).
V. SUMMARY AND CONCLUSION
This paper reviewed the relationships among objects in object-oriented software development and created five taxonomies for their properties. The relationships are mainly of three basic kinds, and this paper presents five taxonomies for the properties of the generalization/specialization, association and aggregation relationships. The first taxonomy is based on the temporal view and the second on the structural view. The third taxonomy depends on the behavioural view and the fourth is specified on the mathematical view. Finally, the fifth taxonomy is related to the interfaces between objects. Moreover, in this paper the relationships were evaluated in a case study and several recommendations were proposed. The main conclusion is that relationships must capture concepts that apply to the problem domain or some sub-domain; they are important for software engineers during implementation.
REFERENCES
A visual analytics architecture for the analysis and understanding of software systems
(Una arquitectura de analítica visual para el análisis y la comprensión de los sistemas de software)
Antonio González-Torres,1,2 José Navas-Sú,1 Marco Hernández-Vásquez,1 Franklin Hernández-Castro1 y Jennier Solano-Cordero1
Abstract
Visual analytics facilitates the creation of knowledge to interpret trends and relationships for better decision making. However, it has not been widely used for the understanding of software systems and the changing process that takes place during their development and maintenance. This occurs despite the need of project managers and developers to analyze their systems to calculate complexity, cohesion, and direct, indirect and logical coupling, to detect clones, defects and bad odors, and to compare individual revisions. This research considers the design of an extensible and scalable architecture to incorporate new and existing methods to retrieve source code from different versioning systems, to carry out the analysis of programs in different languages, to perform the calculation of software metrics and to present the results using visual representations, incorporated as Eclipse and Visual Studio extensions. Consequently, the aim of this work is to design a visual analytics architecture for the analysis and understanding of systems in different languages, and its main contributions are the specification of the design and requirements of such an architecture, taking as a base the lessons learned in Maleku (A. González-Torres et al., 2016).
Keywords
Code analysis, repository mining, software visualization, metrics.
1. Introduction
Visual analytics is the science of analytical reasoning, which uses advanced data analysis, interactive visualizations, human-computer interaction and the visual and cognitive abilities of human beings to make sense of abstract information. This process allows the creation of useful
knowledge to interpret trends and relationships, which are difficult to appreciate at a glance, aiming to improve the process of decision making (Thomas & Cook, 2006). In line with this, visual analytics systems allow better decision making, which is why organizations are motivated to use it. However, visual analytics has been little used to facilitate the understanding of the dynamics of software systems and the change process that takes place during their development and maintenance.
The study of software systems looks to improve software development and maintenance through the analysis of continuous change, complexity, growth and quality control (Lehman, Ramil, Wernick, Perry, & Turski, 1997). Therefore, programming teams require tools to recover and analyze software projects to discover patterns and relationships, calculate software quality metrics (e.g., complexity, cohesion and direct, indirect and logical coupling), detect clones, defects and bad odors, and extract facts from the comparison of individual revisions (D’Ambros, Gall, Lanza, & Pinzger, 2008).
Software evolution is a result of the change record of a software system. It is a cyclic process that is based on the understanding of the current state of systems and the accumulation of previous changes (Mens & Demeyer, 2008). A change usually involves a group of source code items that are frequently together and are associated by some kind of relationship. The changes performed on any of these items can be carried out by one or several programmers, simultaneously, and are propagated automatically to other elements, coupled directly or indirectly. The comprehension of changes implies not only to appreciate the modifications to software elements, but its effects on the system structure and the relationship between the elements that compose it.
Furthermore, the evolution of systems usually expands through several years and generates thousands and even millions of lines of code (Kagdi, Collard, & Maletic, 2007), hundreds of software components and thousands of revisions (D’Ambros et al., 2008). In addition, source code is composed of variables, constants, programming structures, methods and relationships among those elements (Cárdenas & Aponte, 2017). Besides logs, the analysis of software systems also requires the retrieval of data from communication systems and the metadata records from bug tracking and SCM tools which keep records with dates, comments, changes made to the systems and details of the associated programmers (Hassan, 2005). Hence, the tools aimed to analyze software systems and their evolution should facilitate the understanding of system changes and depend upon techniques and methods, such as advanced source code analysis, software visualization and interaction techniques (Antonio González-Torres et al., 2013).
Consequently, this research is aimed to contribute with the design of a visual analytics architecture for the analysis and understanding of software systems written in different languages, using as basis the theory and the requirements of the industry and software practitioners. Therefore, the main contributions of this research are the specification of the design and requirements of such architecture, taking as base the lessons learned in Maleku (A. González-Torres et al., 2016).
This paper is an extension of the paper accepted at INCISCOS 2018 (A Gonzalez-Torres et al., 2018). It discusses the previous work carried out on the proposal and implementation of similar architectures and frameworks (see section 2), gives a general overview of the proposed architecture (see section 3) and its main requirements (see section 4), and discusses the main conclusions (see section 5).
2. Previous work
SonarQube, TRICORDER and Understand are three popular systems which offer support for the analysis of source code in multiple languages, permitting the extraction of a predefined set of metrics and the possibility to define custom metrics. SonarQube and TRICORDER use a plugin model, so programmers can contribute with new or improved functionality.
SonarQube is a platform that works with SonarLint (SonarSource, 2018) and is available in commercial and community editions. Its architecture allows programmers to contribute new plugins to the platform and run local real-time analysis as they write source code in the supported IDEs (i.e., Eclipse, IntelliJ and Visual Studio). This system supports more than 7 versioning systems, 20 languages and the extraction of several metric types. It has a predefined set of rules that can be used to define custom metrics to detect bugs, security vulnerabilities and code smells, and to measure the reliability, maintainability and security of systems.
The plugins of SonarQube can connect to SCM repositories and perform analysis tasks, whereas the server allows configuring the parameters of the analysis, processing the reports and storing the results in a database. The execution of these tools is triggered by a continuous integration server, which calls a source code analysis scanner associated with a chosen language. The results are processed, stored and sent to managers to be reviewed, and displayed to programmers.
Similarly, TRICORDER is a robust and scalable platform that is based on microservices to accomplish the static analysis of programs at Google. This tool reads a program snapshot when it becomes available, calls a language specific driver to calculate the dependencies, builds the files included in the change list to obtain the inputs required by a compiler, generates an Abstract Syntax Tree (AST) and use it to perform the analysis, which later is displayed to users (Sadowski, van Gogh, Jaspan, Söderberg, & Winter, 2015).
The main difference between Understand and the tools mentioned above is that it has a focus on providing details concerned with the architecture and structure of the systems. Thus, it carries out the analysis of dependencies for an architecture or part of it, examines the control flow of algorithms, studies the hierarchy, and checks the compliance with coding standards. Furthermore, it uses a set of visualizations for presenting the results obtained and the calculated metric values (SciTools, 2018).
Although these systems have many useful features, they lack solid mechanisms for integrating the analysis results with effective methods to facilitate decision making during coding and management tasks. Therefore, González-Torres described the process of applying visual analytics to software evolution to enhance the understanding of changes with the active participation of users by means of human-computer interaction and implemented Maleku, a proof of concept architecture (A. Gonzalez-Torres, 2015; A. González-Torres et al., 2016). Visual analytics is the combination of interactive visualizations with analysis techniques to facilitate the decision-making processes (Keim, Kohlhammer, Ellis, & Mansmann, 2010).
Maleku was designed to support both programmers and managers when correlating metrics, project structure, inheritance, interface implementation and socio-technical relationships (A. González-Torres et al., 2013; A. González-Torres et al., 2016). Such a framework performs ETL operations, the automated analysis of software projects and the visual representation of the analysis results. The ETL component connects to software repositories and retrieves the source code, project structure, source code revisions, programmer activities and logs, and then it cleans and merges the data and loads it into a data warehouse. Thereafter, the automated analysis process performs metric calculations (e.g., LOC, NOM and Cyclomatic Complexity), the detection of inheritance (parent-child and child-parent) and interface implementation relationships (implementing and implemented by), the identification of socio-technical relationships and of the contributions made by individual programmers, and carries out an examination of the architecture and structure of the project for each revision under study, using details from the metadata and the parsed source code.
The use of visual analytics to aid the analysis of source code is relatively new, although there exists substantial research on the use of software visualization. However, tasks such as debugging, the navigation of dependencies, the detection of indirect coupling and source code clones, code refactoring, the tracking of changes and contributions and software quality metrics monitoring are carried out in the industry without the support of visualization tools.
The outcomes of the research carried out by González-Torres (Antonio Gonzalez-Torres, 2015) indicate, based on an interview carried out during a usability study and on the results of a survey, that some reasons that may have an adverse effect on the adoption of visualization tools are visual stress, inadequate design, the complexity of the visualizations, the time needed to learn how to use them, the requirement of prior knowledge and experience with visual tools, as well as aspects related to the lack of clarity and ambiguity of the designs.
Therefore, the same research points out that a possible reason for this situation is that most programmers are not aware of the availability of visualization tools and the options that these systems have. Furthermore, there is no substantial evidence about the diffusion and transference of the results obtained by the investigations to industry, concerning the application of information visualization to software systems and their evolution. Hence, the use of visual tools for assisting programming and management tasks needs to be sponsored by key players in the software industry (e.g., Microsoft, IBM and Borland), incorporating complete toolsets into their IDEs, SCM and bug tracking tools, and by creating training courses and technical documentation that takes them into account as central elements.
3. Architecture perspective
The design of the proposed architecture considers that several source code versioning tools, multiple languages and various programming environments are used in practice. Therefore, the design of the architecture has considered the implementation of services to connect to and retrieve source code from Git, Subversion and TFVC repositories, and for the analysis of source code written in RPG, Java, C# and Visual Basic, using Eclipse and Visual Studio. The specification of the architecture is based on several microservices (see Figure 1) and consists of three frameworks named CodeRetriever, FactsAnalyzer and VisFramework, whose functions are the following:
- CodeRetriever: This component connects to software repositories, retrieves the source code, written in any supported language, and converts it into a standardized metalanguage. It is common practice for the development of systems to be carried out using different languages, according to their specific needs and the features offered by each language. This component requires two sets of connection parameters (URL, type of server, credentials). The first set permits the connection to software repositories, whereas the second one opens a database connection to which the output of CodeRetriever is sent (a minimal sketch of these parameter sets is given after Figure 1).
• FactsAnalyzer: It carries out the static analysis of the metalanguage, calculates metrics and permits the specification of new metrics using a simple scripting language. This component requires a set of connection parameters to send the results to a database server, in a similar manner to CodeRetriever.
• VisFramework: This module uses multiple linked views and human-computer interaction techniques to visually represent data in an accessible way, so that it can be understood by humans in a short period of time. This component makes it possible to decode data and transform it into knowledge.
Figure 1. Architecture to support the evolutionary visual software analytics process
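A minimal sketch of what CodeRetriever's two parameter sets and entry point might look like is given below; the type and method names are assumptions made for illustration, since the paper specifies the architecture rather than an API:

```java
// Hypothetical parameter objects for the two connections CodeRetriever needs:
// one to the software repository, one to the database that receives its output.
final class RepositoryConnection {
    final String url;          // URL of the repository
    final String serverType;   // e.g. "git", "svn", "tfvc"
    final String credentials;  // token or user:password

    RepositoryConnection(String url, String serverType, String credentials) {
        this.url = url;
        this.serverType = serverType;
        this.credentials = credentials;
    }
}

final class DatabaseConnection {
    final String jdbcUrl;
    final String user;
    final String password;

    DatabaseConnection(String jdbcUrl, String user, String password) {
        this.jdbcUrl = jdbcUrl;
        this.user = user;
        this.password = password;
    }
}

// The retriever itself: connects to the repository, converts each revision's
// sources into the metalanguage and stores the result through the database connection.
interface CodeRetriever {
    void retrieve(RepositoryConnection repo, DatabaseConnection output);
}

public class ConnectionDemo {
    public static void main(String[] args) {
        RepositoryConnection repo =
                new RepositoryConnection("https://example.org/project.git", "git", "token-123");
        DatabaseConnection out =
                new DatabaseConnection("jdbc:postgresql://localhost/meta", "user", "secret");
        System.out.println("repository: " + repo.url + " (" + repo.serverType + ")");
        System.out.println("output db : " + out.jdbcUrl);
    }
}
```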
The results of CodeRetriever could serve as the input of FactsAnalyzer or be used independently. Therefore, academics and practitioners could enter their connection parameters into a web form to retrieve source code from online software repositories and store the metadata and analysis results produced by either CodeRetriever or FactsAnalyzer on their own database servers. In addition, Scripter (an interactive console) would enable users to write custom metrics based on the default metrics included in FactsAnalyzer. The process that will be followed by CodeRetriever and FactsAnalyzer is described below and sketched in code after the list:
• The programmer specifies the connection parameters for a software repository and a database server.
• CodeRetriever executes the SCM component to connect to repositories and retrieve source code on a per-revision basis.
• CodeRetriever calls ASTParser with the source code retrieved by SCM.
• ASTParser performs the parsing of source code and creates the AST for the corresponding language.
• MetaEngine processes the ASTParser output, generates the metalanguage, stores it in the Metalanguage database and sends it to FactsAnalyzer.
• FactsAnalyzer performs the calculation of the basic metrics, carries out the analysis of the source code and stores the results in the AnalysisFacts database.
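The per-revision flow listed above could be chained as in the following sketch; every interface and method name here is an assumption introduced for illustration, since the paper describes the data flow but not code-level contracts:

```java
import java.util.List;

// Hypothetical contracts for the components named in the steps above.
interface Scm           { List<String> checkoutRevisionSources(String revisionId); }
interface AstParser     { Object parse(String sourceFile); }      // language-specific AST
interface MetaEngine    { String toMetalanguage(Object ast); }    // standardized metalanguage
interface FactsAnalyzer { void analyze(String metalanguage); }    // metrics and analysis facts

// Orchestrates the flow: SCM -> ASTParser -> MetaEngine -> FactsAnalyzer.
final class RetrievalPipeline {
    private final Scm scm;
    private final AstParser parser;
    private final MetaEngine metaEngine;
    private final FactsAnalyzer analyzer;

    RetrievalPipeline(Scm scm, AstParser parser, MetaEngine metaEngine, FactsAnalyzer analyzer) {
        this.scm = scm;
        this.parser = parser;
        this.metaEngine = metaEngine;
        this.analyzer = analyzer;
    }

    void processRevision(String revisionId) {
        for (String file : scm.checkoutRevisionSources(revisionId)) {
            Object ast = parser.parse(file);               // language-specific AST
            String meta = metaEngine.toMetalanguage(ast);  // metalanguage (also stored)
            analyzer.analyze(meta);                        // metrics and analysis facts
        }
    }
}
```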
The third major component is VisFramework, which is responsible for loading the analysis facts from the AnalysisFacts database, creating the visual representations and displaying the results using multiple linked views. This element will be integrated into Eclipse and Visual Studio as
an extension, so when a visual element is selected from any visualization the corresponding source code will be displayed, and when the code is modified the views will be updated automatically.
Maleku was implemented in Java and a portion of its source code will be reused in this project, although it will also make use of C# and Python to comply with the requirements of the partnering companies. The analysis and visualization methods of the research will be validated by programmers and project managers with the support of their companies, which will promote the dissemination of methods and tools, both internally and externally.
4. Requirements and design specifications
This section provides details of the major functional requirements of the proposed architecture (see Figure 1). The first component of the architecture is CodeRetriever, which is made up of the SCM, ASTParser and MetaEngine elements. This component is responsible for retrieving source code in different languages and from distinct repositories, carrying out the transformation of the code into a metalanguage and its subsequent analysis to provide insight into the system under analysis. Accordingly, **RQ1** is the main requirement that needs to be satisfied by the SCM component.
**RQ1:** The system shall access multiple software repositories, either local or remote, managed by different kinds of version control systems, to extract source code.
The strategy followed for the definition of SCM, as well as the one used for the specification of other components, consists of using a combination of the Factory Method and Singleton patterns. This makes the architecture scalable through the addition of more elements to provide access to different types of software repositories. This component is critical for retrieving source code and metadata details, such as revision numbers, the date and time of commits, the list of files modified, the names of programmers and the paths changed.
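A sketch of how the Factory Method and Singleton combination could look for the SCM component is shown below; the connector classes and the supported repository identifiers are assumptions for illustration only:

```java
// Common contract for all repository connectors.
interface ScmConnector {
    String fetchRevision(String revisionId);
}

// Concrete products; real implementations would wrap Git, Subversion or TFVC clients.
class GitConnector implements ScmConnector {
    public String fetchRevision(String revisionId) { return "git sources for " + revisionId; }
}
class SvnConnector implements ScmConnector {
    public String fetchRevision(String revisionId) { return "svn sources for " + revisionId; }
}

// Singleton factory: a single access point that hides which concrete connector is built,
// so new repository types can be added without touching client code.
final class ScmConnectorFactory {
    private static final ScmConnectorFactory INSTANCE = new ScmConnectorFactory();
    private ScmConnectorFactory() {}

    static ScmConnectorFactory getInstance() { return INSTANCE; }

    ScmConnector create(String serverType) {
        switch (serverType.toLowerCase()) {
            case "git": return new GitConnector();
            case "svn": return new SvnConnector();
            default: throw new IllegalArgumentException("Unsupported repository type: " + serverType);
        }
    }
}

public class ScmFactoryDemo {
    public static void main(String[] args) {
        ScmConnector scm = ScmConnectorFactory.getInstance().create("git");
        System.out.println(scm.fetchRevision("r42"));
    }
}
```

Under this arrangement, supporting a new repository type amounts to adding one connector class and one branch in the factory, which is the scalability property referred to above.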
Requirement **RQ2** requires calling ASTParser with the source code retrieved from the SCM component and then passing the results to the MetaEngine element. Hence, the extraction of metrics and the detection of clones, dependencies and coupling, for example, require implementing a specific analysis engine for each language.
**RQ2:** The system shall provide a metalanguage equivalent to the abstraction of the syntactic elements from distinct programming languages to reduce the complexity of source code analysis.
The implementation of an analysis engine involves dissecting source code using hand-coded or generated parsers, or creating an AST to parse the code. Consequently, the design of a metalanguage was considered as an alternative to transform source code and perform the analysis using only one engine. The syntax of the metalanguage would be defined taking into account the Knowledge Discovery Metamodel (KDM) (OMG, 2016; Pérez-Castillo, de Guzmán, & Piattini, 2011) and Abstract Syntax Tree Metamodel (ASTM) (OMG, 2011) standards from the Object Management Group (OMG).
ASTParser is also based on a combination of the Factory Method and Singleton patterns and aims to support the analysis of Java, C#, Visual Basic and RPG to comply with the requirements specified by the partnering companies. This component will be implemented in Java and C#, whereas Eclipse JDT and Roslyn will be used to generate the corresponding ASTs. The creation of the AST for RPG will be carried out in Java, considering that Eclipse is the IDE employed to program in this language. The integration of the elements written in C# and Java will be performed using an adapter. The functions of MetaAnalyzer are shown in Figure 2 and follow the sequence below:
1. Read the source code.
2. Perform the syntactic analysis of the code.
3. Generate the Abstract Syntax Trees.
4. Carry out the semantic equivalence of attributes.
5. Create the corresponding output to store it into a database.
6. Feed the output of the MetaAnalyzer into FactsAnalyzer for its analysis.
The FactsAnalyzer component is associated with requirement RQ3 and is made up of several analysis techniques for the calculation of basic metrics and the detection of code clones, direct and indirect coupling, and code item dependencies.
**RQ3:** The architecture shall provide an analysis framework capable of performing the characterization of the architecture, structure, changes and dependencies, the calculation of metrics and the detection of clones and coupling with the aim of simplifying their comprehension and identification of design flaws.
**Figure 2.** Source code transformation from different languages into a metalanguage
An integral element of FactsAnalyzer is Scripter, which will allow defining custom metrics based on basic measurements. The basic metrics included in FactsAnalyzer are Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), Response for a Class (RFC), Lack of Cohesion in Methods (LCOM) (Chidamber & Kemerer, 1994), Cyclomatic Complexity Number (McCabe, 1976), Number of Methods, Access to Foreign Data, Number of Classes (Lanza & Marinescu, 2006), cohesion and polymorphism (Tahir & MacDonell, 2012). These metrics help in understanding the complexity and quality of systems and are included in most source code analyzer tools. However, most of these tools do not include the detection of code clones, Direct and Indirect Coupling between Object Classes (CBO) (Yang, 2010) or the network of dependencies of the system.
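As an illustration of how such basic metrics could be computed once the metalanguage has been parsed, the sketch below calculates the Number of Methods and a simplified WMC from a hypothetical class model; the model types are ours and do not belong to the proposed architecture:

```java
import java.util.List;

// Hypothetical, already-parsed model elements produced from the metalanguage.
record ParsedMethod(String name, int cyclomaticComplexity) {}
record ParsedClass(String name, List<ParsedMethod> methods) {}

final class BasicMetrics {
    // NOM: number of methods declared by the class.
    static int numberOfMethods(ParsedClass c) {
        return c.methods().size();
    }

    // Simplified WMC: sum of the complexities of the class's methods
    // (with all weights equal to 1 this reduces to NOM).
    static int weightedMethodsPerClass(ParsedClass c) {
        return c.methods().stream().mapToInt(ParsedMethod::cyclomaticComplexity).sum();
    }

    public static void main(String[] args) {
        ParsedClass cls = new ParsedClass("RoutePlanner",
                List.of(new ParsedMethod("plan", 4), new ParsedMethod("reset", 1)));
        System.out.println("NOM = " + numberOfMethods(cls));           // 2
        System.out.println("WMC = " + weightedMethodsPerClass(cls));   // 5
    }
}
```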
Code clones are source code fragments with some degree of similarity to other fragments. These can be produced by copy and paste actions, limitations of programming languages, deliberate code duplication, automatic code generation, portability compliance, or accidental coding (Murakami, 2013; Murakami, Hotta, Higo, Igaki, & Kusumoto, 2012). Clones can make software development, maintenance and refactoring tasks difficult and expensive, and can be classified into four types (Solanki & Kumari, 2016):
- **Type-1**: Identical or almost identical copies.
- **Type-2**: Syntactically parameterized copies.
- **Type-3**: Near-miss copies, i.e., syntactically rearranged copies.
- **Type-4**: Semantic copies.
The automatic detection of clones includes text, token, AST, Program Dependency Graphs (PDG), metrics, index and cluster based techniques, as well as hybrids and non-categorized methods (Schwarz, 2014). However, the use of any approach in large software systems is a resource-consuming task that requires an efficient and scalable method. Therefore, it is required to define novel methods to detect code clones, which could be based on Hadoop and the use of the MapReduce pipeline (Vogt, Nierstraszt, & Schwarz, 2014) to execute algorithms using parallel computation.
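As a minimal illustration of the simplest case, the sketch below detects Type-1 clones by normalizing whitespace and grouping identical fragments; it is a deliberate simplification of the techniques cited above and is not the method proposed by the architecture:

```java
import java.util.*;

// Minimal Type-1 clone detector: fragments whose whitespace-normalized text
// is identical are reported as clones of each other.
final class Type1CloneDetector {
    static Map<String, List<String>> findClones(Map<String, String> fragmentsById) {
        Map<String, List<String>> byFingerprint = new HashMap<>();
        for (Map.Entry<String, String> e : fragmentsById.entrySet()) {
            String fingerprint = e.getValue().replaceAll("\\s+", " ").trim();
            byFingerprint.computeIfAbsent(fingerprint, k -> new ArrayList<>()).add(e.getKey());
        }
        // Keep only groups with more than one fragment (actual clones).
        byFingerprint.values().removeIf(group -> group.size() < 2);
        return byFingerprint;
    }

    public static void main(String[] args) {
        Map<String, String> fragments = Map.of(
                "A.java:10", "int sum = a + b;\nreturn sum;",
                "B.java:42", "int sum = a  +  b; return sum;",
                "C.java:7",  "return a - b;");
        findClones(fragments).forEach((text, locations) ->
                System.out.println("Clone group " + locations + " -> \"" + text + "\""));
    }
}
```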
Coupling captures quality attributes such as complexity, maintainability and understandability, and a low level of coupling is more desirable than a high one. Direct coupling between entities means the existence of a direct dependency relationship between them. Coupling relationships that are not direct are tagged as indirect coupling, and correspond either to the transitive closure of direct coupling chains or to use-def chains (Yang, 2010).
Use-def chains are sequences of reaching definitions related to local variable definitions, return values, field references or parameter passing. Indirect coupling detection could rely on data-flow analysis, program slicing, ASTs, and PDGs (Yang, Tempero, & Berrigan, 2005). Several levels of granularity could be used to measure coupling, including package, class, and method levels (Almugrin, Albattah, & Melton, 2016; Almugrin & Melton, 2015).
Scripter is a component that will be used to define metrics taking the set of fundamental metrics as a base. This element will be responsible for providing a mechanism to create new analysis methods dynamically in FactsAnalyzer, without modifying its source code (see requirement RQ4). The elements that make up Scripter are:
1. A simple scripting language.
2. A code generation routine that writes new code into FactsAnalyzer.
**RQ4:** The architecture shall provide a simple scripting language and a console for the creation of new metrics based on existing metrics.
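The kind of custom metric that Scripter is meant to enable could, under our assumptions, be expressed as a small function over already-computed basic metrics, for example:

```java
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Hypothetical illustration of a Scripter-style custom metric: a new measure
// is defined as a function of already-computed basic metrics, without touching
// the analyzer's source code.
final class CustomMetricDemo {
    // Base metrics for one class, as FactsAnalyzer might expose them.
    record ClassMetrics(Map<String, Double> values) {
        double get(String name) { return values.getOrDefault(name, 0.0); }
    }

    public static void main(String[] args) {
        // e.g. a simple "maintainability risk" score combining WMC, CBO and LCOM.
        ToDoubleFunction<ClassMetrics> risk =
                m -> 0.5 * m.get("WMC") + 0.3 * m.get("CBO") + 0.2 * m.get("LCOM");

        ClassMetrics routePlanner =
                new ClassMetrics(Map.of("WMC", 25.0, "CBO", 9.0, "LCOM", 4.0));
        System.out.printf("risk(RoutePlanner) = %.1f%n", risk.applyAsDouble(routePlanner));
    }
}
```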
The architecture is required to produce intermediate outputs, such as the metalanguage generated from source code and the results of the analysis; therefore, requirements RQ5 and RQ6 must be satisfied. Requirement RQ5 states that the metalanguage should be produced using the metadata and source code of a given software repository, whose connection string should be provided by the user in order to retrieve them, together with the connection parameters for the database in which the resulting metalanguage will be stored.
**RQ5:** The architecture shall transform source code into a metalanguage using as source a given software repository and store the result into the database specified by the user.
Requirement RQ6 refers to the analysis of the metalanguage to generate facts and store them in the corresponding database, based on the parameters provided by the user for such purpose.
**RQ6:** The architecture shall analyze a metalanguage and store the analysis facts in the database specified by the user.
The results of the analysis of software systems provide useful information, but they do not provide sufficient information to carry out the task of understanding changes in a satisfactory fashion. Therefore, visual analytics may offer solutions to the problem of supporting programmers and managers during software development and maintenance, because it is a process which offers a comprehensive approach for the visual representation of the analysis results.
Information visualization is frequently referred to as visual analytics; however, there exist several differences between the two. Visual analytics makes intensive use of data analysis and coordinated multiple views, and combines the advantages of machines with human strengths such as analysis, intuition, problem solving and visual perception (A. González-Torres et al., 2016). Therefore, it offers the potential to explore different levels of detail using multiple visual representations, coordinated together and supported by the use of interaction techniques (North & Shneiderman, 2000).
Visual analytics facilitates the discovery of relationships and knowledge by means of the analytic reasoning of the analyst. However, the application of visual analytics to the analysis of software systems and their evolution is new, and it has become a good option to support software development and maintenance because the tasks performed by programmers and project managers, and their information needs, are complex. Therefore, the needs of these individuals require the design and implementation of solutions with specific characteristics. In general, visual analytics tools must:
- Allow analysts to understand the massive and constant growing data collections.
- Support multiple levels of data and information abstraction.
- Allow the analysis of temporal data.
- Aid the understanding of unclear, confusing and incomplete information.
However, the documentation on the design of architectures for visual analytics applications is scarce, although there are several works that describe the use of design patterns in the implementation of visualization libraries. Overall, design patterns play an important role in software development, because they allow the use of known and effective solutions to certain problems. Hence, the proposal of a catalog of design patterns to implement visualizations (Chen, 2004; Heer & Agrawala, 2006) and the rapid development of prototypes (Giereth & Ertl, 2008) have represented an important effort.
The design of visual analytics applications requires the programming of two or more interactive and configurable visualizations, which can be supported by animations. The use of several views is intended to provide different information perspectives to assist in the discovery of relationships. Multi-view systems are usually based on a three-dimensional model, which considers the selection, presentation and interaction between the views (Wang Baldonado, Woodruff, & Kuchinsky, 2000). The detail of each of these dimensions is presented below:
- **Selection of views:** It is the first phase in the design process and involves the identification of a set of views to be used, in a coordinated way, to support a task.
- **Presentation of views:** Once the views have been selected, it must be decided how they will be presented, that is, sequentially (for example, the user can use a menu to switch between different views) or simultaneously.
- **Interaction between views:** Each view can be accessed independently, using a selection or navigation interaction. Often, these views are linked, so that the actions performed in one view have an effect in another view. A common interaction technique is the master-slave relationship, in which actions in one view produce effects in others. Another interaction technique is linking, by means of which the data of one view relates to that of another view. A specific linking type is brushing, in which the user highlights the elements in one view and the system highlights the corresponding elements in another view (a minimal code sketch of this kind of linking follows this list).
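A minimal sketch of the master-slave and brushing ideas, using a plain listener instead of any particular visualization toolkit, could look as follows (all class and view names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Master-slave linking: a selection made in one view (the master) is pushed
// to the registered slave views, which highlight the corresponding elements.
final class View {
    private final String name;
    private final List<Consumer<String>> selectionListeners = new ArrayList<>();

    View(String name) { this.name = name; }

    void linkTo(View slave) {
        selectionListeners.add(slave::highlight);   // brushing: forward selections
    }

    void select(String elementId) {
        System.out.println(name + ": user selected " + elementId);
        selectionListeners.forEach(l -> l.accept(elementId));
    }

    void highlight(String elementId) {
        System.out.println(name + ": highlighting " + elementId);
    }
}

public class LinkedViewsDemo {
    public static void main(String[] args) {
        View treemap = new View("Metrics treemap");
        View classList = new View("Class list");
        treemap.linkTo(classList);              // master -> slave relationship
        treemap.select("RoutePlanner");         // selection in one view updates the other
    }
}
```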
The development of this type of system is complex and constitutes a challenge that demands decisions about the design and implementation of sophisticated coordination mechanisms. Hence, there is a need for tools to support the design and implementation of scalable and flexible visual analytics applications, facilitating the following aspects:
- The design and implementation of visualizations.
- The linking of visualizations to data.
- The connection of the visualizations to each other.
- The development of the source code associated with the events that are fired when
an action is performed in one of the representations.
- The representation of data in the corresponding visualizations in response to an event.
Therefore, requirement RQ7 is based on the need to design and implement a scalable visual analytics architecture to support project managers and programmers during software maintenance and development.
RQ7: The architecture shall provide a scalable and extensible visual analytics architecture to
facilitate software development and maintenance upon the results of source code analysis
and the calculated metrics.
Consequently, this research work proposes a tool design to facilitate the creation of visual analytics applications, by means of an assistant that links the visualizations with the data in a dynamic way and links the visualizations to each other using the master-slave technique. Its definition will consider the actions listed below, which are shown in Figure 3 with some additional detail:
1. Enter the connection string of the database to be used.
2. Create minimal visualization structures that contain the definition of variables. The visualizations should be annotated with the “View” tag and the variables with the “Field” label (a minimal annotation sketch is shown after Figure 3).
3. Link the internal variables of each visualization to the appropriate database fields.
4. Develop the data structures that form the skeletons of the internal structures of each visual representation.
5. Map the elements of the data structures to the appropriate visual objects on each visualization.
6. Carry out the programming of the visualization layouts.
7. Apply the visualization layouts to arrange the visual objects in each visualization in a proper manner, according to the corresponding design.
8. Create the relationship between visualizations to update a view when an event is triggered in a linked view.
9. Define events for visual objects that may trigger actions local to the visualization or in other visualizations, according to the relationship between views.
10. Generate source code.
**Figure 3.** Steps to aid the linking of variables to database fields, the creation of relationships between views and the association of variables in different visualizations
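As referenced in step 2, the following is a minimal, hypothetical Java sketch of what an annotated visualization skeleton could look like. The `@View` and `@Field` annotation names and the example class are assumptions for illustration only, not the actual implementation proposed in this work:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical markers corresponding to the "View" and "Field" tags described above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface View { String name(); }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Field { String column() default ""; }

// Minimal visualization skeleton that the wizard would scan (step 2) and later
// bind to database columns (step 3) by filling in the column attributes.
@View(name = "CouplingTreemap")
class CouplingTreemap {
    @Field(column = "")          // left empty; filled in by the wizard during linking
    String className;

    @Field(column = "")
    double couplingMetric;
}
```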
The development of the abstract data structures, the mapping of these to visual objects and the application of the layout to visual objects are tasks that need to be performed on an individual basis for each view whenever a new visualization is designed. By contrast, the programming of the visualization layouts can be carried out only once, and the layouts can then be applied to many visualizations. Hence, the design of a scalable and extensible visual analytics architecture should support reusability and include at least two independent component libraries: one for the visualizations and another for the visualization layouts. The architecture can thus offer the possibility to add new visualizations and layouts to the libraries, as well as the functionality to create a visual analytics application by choosing a combination of existing visualizations and applying the most appropriate layout to each visual representation.
Some tasks, such as the linking of database fields, the creation of the relationships between visualizations and the definition of events, are repetitive work that can be carried out with the aid of an assistant tool, such as a wizard, available in the sidebar of the IDE as a plugin or extension. It is important to highlight that this proposal is aimed at aiding programmers in creating visual analytics applications; hence, the input of the tool shall be the source code under development, and its output shall be a modification of such code, with the creation of new methods and statements to carry out the desired functionality.
Figure 4 shows the interface design with the sequence of steps to link the views to the data, link the views with each other and generate the corresponding source code. The proposal is based on the need to implement visualizations that are independent of the data and to create dependencies between visualizations dynamically, showing the information using a context + detail design (Shneiderman, 1996). The steps illustrated by Figure 4 are the following:
- **Steps 1 and 2:** These steps capture the initial settings, which consist of the connection string of the database to which the visualizations will be associated.
- **Step 3:** The path of the visual analytics project is entered, and its source code files, annotated as a “View”, are processed. Then, the variables tagged with the “Field” label are linked to the database fields.
- **Step 4:** The visualizations with the “View” tag are linked according to the analysis flow that will be followed by the users of the visual analytics application. The flow can be unidirectional or bidirectional. Furthermore, in this step the variables that will link the visualizations are matched, so that when the user clicks on a visual object associated with a variable in one visualization, another view is affected and an action is performed, such as highlighting an element or carrying out a query.
The final step involves the definition of the events that trigger actions based on mouse or keyboard behavior. Then, code is generated for linking variables to database fields, linking visualizations to each other, associating variables in different visualizations, and creating mouse and keyboard events that respond to user actions.
**Figure 4.** Interface design for the steps to link the views to data and the views with each other
Figure 5 shows the component diagram of the tool, including the module **Wizard**, which is based on two components that act as assistants: the **DataLinker** and the **VisualizationLinker**. These components are described below:
- **DataLinker**: Performs the database connection, reads the visualizations, and provides the interface for the programmer to make the association between the fields of the database and the variables of the visualizations. The component creates the class `clsVisData` and generates the code to perform the binding.
- **VisualizationLinker**: This module facilitates the linking between the visualizations through a mapping between them and their visual elements. It reads the classes, displays a list with the variables required to link the visualizations, creates the class `clsVisLinker` and generates the necessary code.
**Figure 5.** Component diagram of the visual analytics tool
The visualizations themselves are independent of the data and are not linked to each other, but they can be dynamically modified by **DataLinker** and **VisualizationLinker** to connect the data and link the views. The tool uses an architecture based on Model-View-Controller (MVC).
**5. Conclusions**
Developers and project managers need to understand the software they are developing and maintaining, even when they have no prior knowledge or documentation of those systems. This situation acquires greater importance given that software evolution is a process which usually lasts several years and produces data that shares many of the typical characteristics of Big Data. Thus, the capacities of programmers and project managers are particularly limited when they need to analyze large projects and are not able to extract useful information.
Therefore, the use of visual analytics is a practical alternative due to its advantages in transforming large volumes of data into knowledge using a funnel approach powered by advanced and automatic analysis, visual representations and the abilities of humans to detect patterns and make decisions. However, there is no evidence of the use of visual analytics in industry, and the use of simple visual tools is limited. This research has considered this factor, as well as the fact that the analysis of source code is a non-trivial process that needs methods and techniques that have been proved and validated in other areas to support analytical reasoning and decision making.
The main contribution of this research is the design of an architecture to define frameworks based on the evolutionary visual software analytics process. The architecture was defined using as a basis the previous research carried out for the definition of Maleku and the requirements of programmers and project managers of the partnering companies. The implementation of the architecture will reuse a large portion of the source code implemented in the previous work, but it requires programming several new components to satisfy the requirements. Furthermore, this research will incorporate new methods for the analysis of indirect coupling, code clones and program dependencies, as well as novel visualizations to support analytical reasoning.
The role of the companies is key to validating the results, making knowledge transfer feasible and generating a greater impact on society. Consequently, the outcomes of this investigation will be validated by the partnering companies and by practitioners from other organizations.
References
DEVELOPMENT OF A COMPUTATIONAL TOOL FOR FORENSIC DNA ANALYSIS
By ABHISHEK GARG
A thesis submitted to the
Graduate School-Camden
Rutgers, The State University of New Jersey
In partial fulfillment of the requirements
For the degree of Master of Science
Graduate Program in Computer Science
Written under the direction of
Dr. Desmond S. Lun
And approved by
Dr. Desmond S. Lun
Dr. Suneeta Ramaswami
Dr. Dawei Hong
Camden, New Jersey
October 2015
THESIS ABSTRACT
Development of a computational tool for Forensic DNA Analysis
By ABHISHEK GARG
Thesis Director
Dr. Desmond S. Lun
Forensic DNA analysis uses repetitive sequences in the human genome called Short Tandem Repeats (STRs) for human identification. This study has contributed to the building, design, and development of a fast and easy-to-use software package to calculate the \textit{a posteriori} probability (APP) on the number of contributors in an STR profile and to calculate the likelihood ratio (LR), a statistic that conveys the strength of a match for a given suspect, as well as the distribution of the LR over random non-contributors. This research specifically deals with the (1) design and implementation of an algorithm for MatchIt (the component that calculates the LR and its distribution); (2) optimization of the code to reduce running time; and (3) development of a user-friendly interface.
ACKNOWLEDGEMENTS
I would never have been able to finish my dissertation without the guidance of my advisor, support of my committee members, help from friends and full support from my family.
I would like to express my deepest gratitude to my advisor, Dr. Desmond Lun, for his excellent guidance, expertise, patience, and providing me with an excellent atmosphere for doing research. I would like to thank Dr. Dawei Hong, Dr. Michael Palis, Dr. J C Birget and Dr. Sunil Shende who helped me to develop my background in optimization methods, parallel algorithms, numerical methods and big data algorithms.
I would like to thank Harish Swaminathan, who was always a good friend and was always willing to help and give his best suggestions. Many thanks to Mark Moore and Anurag Arnold for helping with the implementation of the user design. My research would not have been possible without their help.
Finally, I would like to say thanks to Dr. Suneeta Ramaswami for giving me an opportunity to be a part of Department of Computer Science at Rutgers Camden.
Table of Contents
- Thesis Abstract
- Acknowledgements
- Table of Contents
- List of Figures
- Introduction
- Purpose
- Design and Implementation of MatchIt Algorithm
  - 3.1 Introduction
  - 3.2 MatchIt Algorithm
    - 3.2.1 LR numerator calculation
    - 3.2.2 LR distribution and p-value calculation
    - 3.2.3 LR denominator calculation
  - 3.3 Testing and Results
- Code Optimization using various techniques
  - 4.1 Introduction
  - 4.2 Example of one code snippet
  - 4.3 Results
- User interface design and development
  - 5.1 Introduction
  - 5.2 Design and Implementation
  - 5.3 Results
- Appendix A
- REFERENCES
List of Figures
- Figure 1: Sample electropherogram with peak heights at each allele
- Figure 2: Calibration data of true peak mean at locus D7S820, distribution ax+b, R-square value .9929
- Figure 3: Calibration data of reverse stutter peak mean at locus D7S820, distribution a*e^(bx)+c, R-square value .9956
- Figure 4: Calibration data of dropout at locus D7S820, distribution a*e^(bx), R-square value .9946
- Figure 5: Graph showing results for MatchIt using 1p samples
- Figure 6: Graph showing results for MatchIt for 2p sample
- Figure 7: Graph showing results for MatchIt for 3p sample
- Figure 8: Reduction in running time of base code snippet with each optimization
- Figure 9: Software efficiency percent gain in different versions
- Figure 10: First tab of software - Calibration
- Figure 11: Second window of Calibration tab with the resulting data
- Figure 12: NOCit tab with the result
- Figure 13: MatchIt tab
Introduction
Starting in the mid-1990s, Short Tandem Repeats (STRs) have been used in the field of human identification for forensic purposes [1]. STRs are repetitive sequences that are 1–7 base pairs in length and scattered throughout the human genome. An STR DNA profile developed from a biological sample (like saliva, semen, blood, etc.) collected at a crime scene is either compared with that of a person of interest (POI) or run against a database to check for a match. The Scientific Working Group on DNA Analysis Methods (SWGDAM) recommends that forensic reports include a statement regarding the assumption made about the number, or the minimum number, of contributors to the sample being investigated [2].
The number of contributors to a crime scene sample is generally unknown and must be estimated by the analyst based on the electropherogram obtained.
An assumption about the number of contributors is needed when determining whether a known should be excluded as a contributor to an item of evidence. Changing the number of contributors could lead to different conclusions about whether to include or exclude an individual as a contributor to the sample. Further, an assumption about the number of contributors to a sample is needed to calculate a match statistic, called the Likelihood Ratio [3], a statistic that is commonly used internationally and is gaining acceptance in the United States.
The Likelihood Ratio (LR) is defined as:
\[
LR = \frac{Pr(E|H_p, n_p)}{Pr(E|H_d, n_d)},
\]
where $E$ is the evidence in the form of the electropherogram (epg); $H_p$ and $H_d$ are the hypotheses specified by the prosecution and the defense, respectively; and $n_p$ and $n_d$ are the number of contributors specified by the prosecution and the defense, respectively. The numerator is the probability of observing the evidence given the prosecution’s hypothesis and the denominator is the probability of observing the evidence given the defense’s hypothesis. The evidence shows support for the prosecution’s hypotheses if $LR > 1$; if $LR < 1$ the defense’s hypothesis is supported by the evidence. The calculation of a Likelihood Ratio depends upon an assumption about the number of contributors both in the numerator as well as the denominator, making it essential to have a good estimate about the number of contributors to accurately calculate a statistic that represents the information captured in the signal. Thus, utilizing a number of contributors that is not representative of the actual number that gave rise to a sample may affect the interpretation of the sample's profile. The p-value for the suspect is defined as the probability that a randomly picked person from the population would give rise to an LR at least as large as the one observed for the suspect.
$$p\text{-value}(s) = \Pr\big(LR(R) \geq LR(s)\big)$$
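For intuition, with purely illustrative probabilities (not taken from any case in this work):

$$LR = \frac{\Pr(E \mid H_p, n_p)}{\Pr(E \mid H_d, n_d)} = \frac{10^{-6}}{10^{-9}} = 10^{3},$$

meaning the evidence would be 1000 times more probable under the prosecution's hypothesis than under the defense's.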
Purpose
The purpose of the current study was to develop a software package that estimates the number of contributors to an evidence sample and subsequently calculates a LR for person of interest based on the given DNA sample and the number of contributors in an accurate, reliable and efficient manner. This work is specifically focused on the following three aspects of mixture interpretation:
1. Design and implementation of an algorithm for MatchIt
2. Code optimization to reduce running time
3. User interface design and development
Design and Implementation of MatchIt Algorithm
3.1 Introduction
The software is divided into three main parts: 1) Calibration, 2) NOCIt, and 3) MatchIt.
Characterization of the peak heights is done by using single-source calibration profiles with known genotypes obtained from samples amplified from a wide range of input DNA masses. Absolute DNA quantification (of DNA extracted from 28 individuals) was performed using real-time PCR and the Quantifiler® Duo™ Quantification kit according to the manufacturer’s recommended protocol and one external calibration curve [4, 5]. The extracted DNA was amplified using the manufacturer’s recommended protocol for the AmpFlSTR® Identifiler® Plus Amplification Kit (Life Technologies, Inc.) [6]. Fragment analysis was performed using GeneMapper IDX v1.1.1 (Life Technologies, Inc.) and an RFU threshold of one. A threshold of 1 RFU was used in order to capture all peak height information (i.e., the allelic peaks, baseline noise and stutter peaks) in the signal. Known artifacts such as pull-up, spikes, -A, and artifacts due to dye dissociation were manually removed while generating the sample data. Refer to Figure 1 for a sample electropherogram with peak heights at each allele. For a detailed description of how the calibration samples were created, refer to [7].
Figure 1 Sample electropherogram with peak heights at each allele
NOCIt is a computational tool that calculates the APP on the number of contributors to a DNA sample [7].
MatchIt uses single source samples with known genotypes and calculates a LR and a $p$-value for a specified POI on a question sample. It is a fully continuous method that works by modeling the peak heights observed in a calibration data set consisting of single source samples with known genotypes. It accounts for dropout and stutter (both reverse and forward), two common artifacts observed in low template samples [8]. Additionally, MatchIt also computes a $p$-value for the LR by sampling a large number of random genotypes from the population.
3.2 MatchIt Algorithm
The Likelihood Ratio (LR) is defined as:
$$LR = \frac{Pr(E|H_p, n_p)}{Pr(E|H_d, n_d)}.$$
In practice, \( n_p \) and \( n_d \) can be chosen by the prosecution and the defense to maximize their respective probabilities and there is no necessity for \( n_p \) to be equal to \( n_d \). However, we have developed MatchIt to use the same number of contributors in both the numerator and denominator to calculate the LR. It should be noted that the method could be extended to work on different assumptions on the number of contributors. Moving forward, we omit the notation \( n \) for the sake of brevity. We note that for purposes of this work, \( n_p = n_d \) in all cases presented herein, and we use the known, and thus the true \( n \) to test the capabilities of MatchIt.
For this study, we use the following hypotheses for \( H_p \) and \( H_d \):
\( H_p \): The evidence is a mixture of the genotype profile of a suspect (\( s \)) and the profiles of \( n - 1 \) other unknown, unrelated contributors, whom we term for the purpose of this paper as “the interference contributors”.
\( H_d \): The evidence is from \( n \) unknown individuals unrelated to the suspect.
In most cases, the value of the LR is very large (or very small) and it is easier to work with \( \log(\text{LR}) \). Hence we have:
\[
\log(\text{LR}(s)) = \log(Pr(E|R = s, U^{n-1})) - \log(Pr(E|U^n)),
\]
where \( U^i = \{U_1, ..., U_i\} \) are the random genotypes of \( i \) contributors and \( R \) is the random genotype of a single contributor, whether it be a true contributor, or non-contributor.
### 3.2.1 LR numerator calculation
Our algorithm assumes a constant mixture ratio at all the loci. The mixture ratio specifies the proportion of the total template mass contributed by each contributor to the sample. The
underlying mixture ratio of an evidence sample is unknown and needs to be described by a model in order to compute a continuous LR. A constant mixture ratio model assumes that the mixture ratio is the same at all the markers, whereas a variable mixture model accounts for the possibility of the mixture ratio being different at the various markers. Both models are reasonable and are used in existing continuous methods to compute the LR. Perlin et al [9] assign a uniform prior probability for the template mixture weight and construct its probability distribution by drawing individual locus weights using a multivariate normal distribution. Cowell et al [10] and Puch-Solis et al [11] use a constant mixture ratio model and implement a discrete approximation over the interval (0,1) by assigning a uniform prior. Taylor et al [12] use the variable model and assume the mixture weights to be independent across the loci. Since we adopt the constant mixture ratio approach, we integrate over all possible mixture ratios to calculate the probability of observing the evidence:
\[ \Pr(E|R = s, U^{n-1}) = \int_{\theta \in \Delta^{n-1}} \Pr(E|\Theta = \theta, R = s, U^{n-1}) \, f_\theta(\theta) \, d\theta, \]
where \( \theta \) is the vector with components \( \theta_i \), the mixture proportion of each contributor \( i \in \{1, \ldots, n_{\text{max}}\} \); \( \Delta^{n-1} = \{ (\theta_1, \ldots, \theta_n) \in \mathbb{R}^n | \sum_{i=1}^{n} \theta_i = 1, \theta_i > 0 \forall i \} \) is the unit \( n-1 \) simplex; and \( f_\theta \) is the probability density function of \( \theta \), which we assume to be uniform over \( \Delta^{n-1} \). For \( n = 1 \), \( \Delta^{n-1} \) consists of the single element \{1\}. For mixtures, we implement the integration over \( \Delta^{n-1} \) by dividing it into equal-sized subsets and representing each subset with its centroid, resulting in a discrete sum.
To do this, we performed k-means clustering in Python (Python Software Foundation, Beaverton, Oregon). k-means clustering is an algorithm used to partition observations into a set of clusters by repeated minimization of the distance from an observation to the centroid
of its cluster [13]. For \( n = 2 \), the space was divided into 9 equally sized clusters, while for \( n = 3 \), 12 clusters were used.
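A minimal Java sketch of the discretization idea for two contributors is shown below; it uses a simple uniform partition of the simplex instead of the k-means partitioning actually used in this work, and the toy likelihood function is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Discretizes the 1-simplex (two contributors) into equal-width cells and
// approximates the integral over mixture ratios by averaging over cell centroids.
public class MixtureRatioGrid {

    // Centroids of k equal cells of theta_1 in (0,1); theta_2 = 1 - theta_1.
    static List<double[]> centroids(int k) {
        List<double[]> points = new ArrayList<>();
        for (int i = 0; i < k; i++) {
            double t1 = (i + 0.5) / k;
            points.add(new double[] { t1, 1.0 - t1 });
        }
        return points;
    }

    // Approximates the integral of likelihood(theta) under a uniform prior on the simplex.
    static double integrate(Function<double[], Double> likelihood, int k) {
        double sum = 0.0;
        for (double[] theta : centroids(k)) {
            sum += likelihood.apply(theta);
        }
        return sum / k;   // uniform prior: each cell carries weight 1/k
    }

    public static void main(String[] args) {
        // Toy likelihood that peaks at a 70:30 mixture (illustration only).
        Function<double[], Double> toy = t -> Math.exp(-50 * Math.pow(t[0] - 0.7, 2));
        System.out.println(integrate(toy, 9));
    }
}
```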
Let \( L \) be the set of all loci in the evidence sample, \( E_l \) be the evidence at locus \( l \), \( U_l^{n-1} \) be the genotype of the interference contributors at locus \( l \) and \( s_l \) be the genotype of the suspect at locus \( l \). The STR loci used for forensic DNA analysis are assumed to be in linkage equilibrium and independent of each other [14]. Hence we obtain:
\[
\Pr(E|\Theta = \theta, R = s, U^{n-1}) = \prod_{l \in L} \Pr(E_l|\Theta = \theta, R_l = s_l, U_l^{n-1}).
\]
The prosecution’s hypothesis states that the profile is made of the suspect’s contribution plus the contribution from \( n - 1 \) other random, unrelated contributors. Since there are many possibilities for the genotype of these interference contributors at each locus and going over each case would take a large amount of time, we calculate
\[
\Pr(E_l|\Theta = \theta, R_l = s_l, U_l^{n-1})
\]
using importance sampling.
Importance sampling is a Monte Carlo sampling algorithm in which, instead of sampling directly from the target distribution, samples are generated from a different distribution that is easier to sample from [15]. To take into account the fact that the samples have come from the ‘wrong’ distribution, weights are introduced to adjust the ‘importance’ of each sample. For the problem at hand, instead of sampling using the allele frequency distribution, we generate samples of the interference genotypes using the peak height distribution observed at the locus. The reason for sampling from the peak height distribution is that this method is faster and requires fewer samples for convergence than the method that samples from the allele frequency distribution.
Let $J$ be the number of interference samples used. Now we obtain:
$$Pr(E_l | \Theta = \theta, R_l = s_l, U_l^{n-1}) = \frac{\sum_{i=1}^{J} Pr(E_l | U_l^{n-1} = u_{l,i}^{n-1}, \Theta = \theta, R_l = s_l) \, w_i}{\sum_{i=1}^{J} w_i},$$
where $w_i = P(u_{l,i}^{n-1}) / Q(u_{l,i}^{n-1})$ is the weight of sample $i$; $P(u_{l,i}^{n-1})$ is the probability of the interference genotypes under the allele frequency distribution; and $Q(u_{l,i}^{n-1})$ is the probability of the interference genotypes under the peak height distribution. Since $u_l^{n-1}$ and $s_l$ establish the true peaks in the signal (and by extension the stutter and noise peaks), $Pr(E_l \mid U_l^{n-1} = u_l^{n-1}, \Theta = \theta, R_l = s_l)$ is calculated using the parameters from the calibration data.
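The weighted-sum form above is the usual self-normalized importance-sampling estimator. The following Java sketch shows the general pattern on a toy one-dimensional problem; the densities and the target function are invented and unrelated to the genotype model:

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Self-normalized importance sampling: estimates an expectation under a target
// density p using samples drawn from a proposal density q, with weights w = p/q.
public class ImportanceSampling {

    static double estimate(DoubleUnaryOperator f,
                           DoubleUnaryOperator p,
                           DoubleUnaryOperator q,
                           double[] samplesFromQ) {
        double num = 0.0, den = 0.0;
        for (double x : samplesFromQ) {
            double w = p.applyAsDouble(x) / q.applyAsDouble(x);  // importance weight
            num += f.applyAsDouble(x) * w;
            den += w;
        }
        return num / den;   // mirrors the weighted-sum / sum-of-weights form above
    }

    public static void main(String[] args) {
        // Toy example: estimate E_p[x] with p = Uniform(0,1) while sampling from
        // the proposal q(x) = 2x on (0,1) (illustration only).
        Random rng = new Random(42);
        double[] xs = new double[100_000];
        for (int i = 0; i < xs.length; i++) {
            xs[i] = Math.sqrt(rng.nextDouble());  // inverse-CDF sampling from q(x) = 2x
        }
        double mean = estimate(x -> x, x -> 1.0, x -> 2.0 * x, xs);
        System.out.println(mean);   // should be close to 0.5
    }
}
```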
### 3.2.2 LR distribution and p-value calculation
Since the denominator of the LR is the same for all the random genotypes $R$, it is sufficient if we compare the numerator of the LR for $R$ and $s$.
$$p-value(s) = Pr(Pr(E|R, U^{n-1}) \geq Pr(E|R = s, U^{n-1})).$$
During testing of MatchIt, we observed that because of floating-point precision, $Pr(E \mid R, U^{n-1})$ evaluated to 0 for many of the random genotypes $R$ that fit the data poorly. As a result, we were able to eliminate those genotypes from the p-value calculation as a preliminary step. Formally, let $R$ be the set of all genotypes. We define $R_1 = \{ r \in R \mid Pr(E_l | R_l = r_l) \neq 0 \text{ for all loci } l \}$ and $R_2 = \{ r \in R \mid \exists \text{ locus } l \text{ s.t. } Pr(E_l | R_l = r_l) \approx 0 \}$, where $\approx 0$ means “evaluates to 0 using double-precision 64-bit floating-point arithmetic”.
Thus, we have $R = R_1 \cup R_2$ and $R_1 \cap R_2 = \emptyset$. We see that for all $r \in R_2$, $Pr(E \mid R = r) \approx 0$.
We omit the notation on $U^{n-1}$ for the sake of brevity. We have:
\[ p\text{-value}(s) = Pr(R \in R_1) \sum_{r \in R_1} 1\big(Pr(E|R = r) \geq Pr(E|R = s)\big) \, Pr(R = r \mid R \in R_1) \]
\[ + \; Pr(R \in R_2) \sum_{r \in R_2} 1\big(Pr(E|R = r) \geq Pr(E|R = s)\big) \, Pr(R = r \mid R \in R_2). \]
We see that the second term is 0, provided \( Pr(E \mid R = s) \) is greater than 0. Hence we get:
\[ p\text{-value}(s) = Pr(R \in R_1) \sum_{r \in R_1} 1(Pr(E|R = r) \geq Pr(E|R = s)) Pr(R = r| R \in R_1), \]
where
\[ 1(Pr(E|R = r) \geq Pr(E|R = s)) = \begin{cases} 1, & \text{if } Pr(E|R = r) \geq Pr(E|R = s), \\ 0, & \text{otherwise}. \end{cases} \]
We have:
\[ Pr(R \in R_1) = \prod_{l \in L} \sum_{r_l \in \{r_l | Pr(E_l|R_l = r_l) \neq 0\}} Pr(R_l = r_l). \]
We compute the \textit{p-value} using Monte Carlo simulation. We generate \( M \) random genotypes \( r^1, ..., r^M \) according to the distribution \( Pr(R \lor R \in R_1) \) and calculate the p-value as:
\[ p\text{-value}(s) = Pr(R \in R_1) \frac{\sum_{i=1}^{M} 1(Pr(E|R = r^i) \geq Pr(E|R = s))}{M} \]
Increasing the value of \( M \) increases the accuracy of the p-value computed, but this also increases the run time and hence a tradeoff has to be achieved between the two. In this study, we have used 1 billion or 10^9 random genotypes to compute the p-value.
In order to facilitate the computation of the p-value, as an initial step
\[ Pr(E_l|\theta = \theta, R_l = g_l, U_l^{n-1}) \] is computed for all possible genotypes \( g_l \) at all loci \( l \) for all
values of $\theta$. Once this is done, for the p-value computation, $10^9$ genotypes $r^i$ are generated based on the allele frequencies. Since we know $\Pr(E_l|R_l = r^i_l)$ for all loci $l$, we can compute $\Pr(E|R = r^i)$ as:
$$\Pr(E|R = r^i) = \int_{\theta \in \Delta^{n-1}} \prod_{l \in L} \Pr(E_l|\Theta = \theta, R_l = r^i_l, U_l^{n-1}) \, f_\theta(\theta) \, d\theta.$$
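A generic Java sketch of the Monte Carlo p-value estimate in the formula above is given below; the genotype representation and the likelihood values are placeholders, not the thesis implementation:

```java
import java.util.Random;

// Monte Carlo estimate of the p-value: the fraction of sampled genotypes whose
// likelihood Pr(E | R = r) is at least as large as the suspect's, scaled by Pr(R in R1).
public class PValueMonteCarlo {

    // randomLikelihoods[i] stands for Pr(E | R = r_i) of the i-th sampled genotype.
    static double pValue(double prR1, double suspectLikelihood, double[] randomLikelihoods) {
        long atLeastAsLarge = 0;
        for (double likelihood : randomLikelihoods) {
            if (likelihood >= suspectLikelihood) {
                atLeastAsLarge++;
            }
        }
        return prR1 * ((double) atLeastAsLarge / randomLikelihoods.length);
    }

    public static void main(String[] args) {
        // Toy stand-in: random likelihoods drawn uniformly, suspect near the top.
        Random rng = new Random(1);
        double[] likelihoods = new double[1_000_000];
        for (int i = 0; i < likelihoods.length; i++) {
            likelihoods[i] = rng.nextDouble();
        }
        System.out.println(pValue(1.0, 0.999, likelihoods));   // ~0.001 for this toy setup
    }
}
```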
### 3.2.3 LR denominator calculation
Let $\bar{R}$ be the genotype of an unknown contributor in the defense’s hypothesis. The denominator of the LR can be written as:
$$\Pr(E|U^n) = \sum_{\bar{r}} \Pr(E|\bar{R} = \bar{r}, U^{n-1}) \Pr(\bar{R} = \bar{r})$$
Since the number of possible values that $\bar{R}$ can take is large and summing over all of them is computationally intensive, we utilize the random genotypes $r^i$ that are sampled for the p-value computation to compute the denominator of the LR as follows:
$$\Pr(E|U^n) = \Pr(R \in R_1) \, \frac{\sum_{i=1}^{M} \Pr(E|R = r^i)}{M}.$$
Figure 2 Calibration data of true peak mean at locus D7S820, distribution ax+b, R square value .9929
Figure 3 Calibration data of reverse stutter peak mean at locus D7S820, distribution $a e^{bx} + c$, R square value .9956
We found that the amount of template DNA from the contributor had an impact on the LR – small LRs were associated with contributors having low template amounts and high levels of dropout and stutter. Since we used $10^9$ samples to calculate the $p$-value, the lowest possible $p$-value that can be achieved is $10^{-9}$, and this was obtained in all the cases where the LR was greater than $10^8$.
All the graphs shown below in Figures 5, 6 and 7 were generated for various sample files with varying DNA mass. Each sample was tested for the actual contributors and 3 other non-contributors. The results were more or less as expected: for the actual contributor the p-value is $-9$ (on a $\log_{10}$ scale, as we are considering one billion samples), while for the non-contributors it is close to zero.
Figure 5 Graph showing results for MatchIt using 1p samples
Figure 6 Graph showing results for MatchIt for 2p sample
As the number of contributors increases, the chance of deviation from the correct result also increases. This can also be seen in Figure 7, but the values are clear enough and do not compromise the integrity of the result.
Code Optimization using various techniques
4.1 Introduction
In general, software can be optimized along three different dimensions: 1) reduce the running time, 2) use less memory or fewer resources, and 3) draw less power [16]. In this work our main focus was to reduce the running time of the software, since the other two aspects are less relevant here: the memory, resource and power requirements of this software are well within what a personal computer is equipped with these days, for example a 2.3 GHz central processing unit and 1 GB of RAM. The software requirements are Windows 7 or later, Mac OS X or later, and Java version 8. There is a widely accepted “rule of thumb” [17] in speed optimization, known as the Pareto Principle [18], stating that almost 90% of the execution time is spent executing only 10% of the code.
An optimization technique known as profile-guided optimization [19] has been used to find the parts of the code that consume the most running time. This technique is based on profiling the code, where a dynamic program analysis is performed to measure the space (memory) and running time of a program, or the frequency and duration of function calls [20]. The software was run using a code profiler called JProfiler to analyze the running time (CPU cycles) of each class, routine, and line of code during the execution of the software. Then, various optimization techniques (listed in Appendix A) were applied to the code, prioritizing the functions that had longer running times. Changes were accepted if they resulted in an improvement in the runtime performance and discarded otherwise.
4.2 Example of one code snippet
The following function was called around 100,000 times in the software in one execution.
```java
// Computes the mean and standard deviation of the peak-height model at a locus
// for a given DNA mass, using linear (slope/intercept) calibration parameters.
public double[] calcSlopeValue(String locusName, double massValue,
        HashMap<String, double[]> meanSlope, HashMap<String, double[]> stdDevSlope) {
    try {
        // Calibration coefficients for the mean: mean = a_mean * mass + b_mean
        double a_mean = meanSlope.get(locusName)[0];
        double b_mean = meanSlope.get(locusName)[1];
        // Calibration coefficients for the standard deviation
        double a_stddev = stdDevSlope.get(locusName)[0];
        double b_stddev = stdDevSlope.get(locusName)[1];
        double mean = (a_mean * massValue) + b_mean;
        double stddev = (a_stddev * massValue) + b_stddev;
        return new double[] {mean, stddev};
    } catch (Exception e) {
        return null;
    }
}
```
We changed the input parameters from HashMaps to arrays, which saved approximately 2% of the total running time (CPU cycles). The code also became easier to understand and simpler to use.
The modified code snippet is as follows:
```java
// Array-based version: the caller passes the two coefficient pairs directly,
// avoiding HashMap lookups on every call.
public double[] calcSlopeValue(double[] aMean, double[] aStdDev, double massValue) {
    try {
        double mean = (aMean[0] * massValue) + aMean[1];
        double stddev = (aStdDev[0] * massValue) + aStdDev[1];
        return new double[] {mean, stddev};
    } catch (Exception e) {
        return null;
    }
}
```
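For completeness, a small self-contained demo of the array-based method is shown below; the calibration coefficients are invented for illustration:

```java
// Hypothetical, self-contained demo of the array-based method (coefficients made up).
public class CalcSlopeDemo {
    static double[] calcSlopeValue(double[] aMean, double[] aStdDev, double massValue) {
        double mean = (aMean[0] * massValue) + aMean[1];
        double stddev = (aStdDev[0] * massValue) + aStdDev[1];
        return new double[] {mean, stddev};
    }

    public static void main(String[] args) {
        double[] meanCoefficients = {1.8, 42.0};    // a_mean, b_mean (made up)
        double[] stdDevCoefficients = {0.4, 5.0};   // a_stddev, b_stddev (made up)
        double[] result = calcSlopeValue(meanCoefficients, stdDevCoefficients, 0.25);
        System.out.println("mean = " + result[0] + ", stddev = " + result[1]);
    }
}
```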
4.3 Results
- Created an Allele class, which eliminated 70% of the auto-boxing conversions from primitive data types to wrapper classes. After this implementation, 21.5% of the running time was saved.
- Converted a few HashMaps (get operations) to arrays, which reduced the overall running time by 14.2%.
- Converted a few HashMaps (put operations, indexing) to ArrayLists, which reduced the overall running time by 7%.
- Changed the implementation of the base methods that had HashMap parameters (calcTwoExpValue, calcSlopeValue and calcExpValue), which saved 10% of the overall running time.
The histogram in Figure 8 shows the reduction in time for the core calculation of the program after each optimization technique mentioned in Table 1. There is an approximately 40% improvement in the overall running time for the core calculation.
Figure 8 Reduction in running time of base code snippet with each optimization
The graph in Figure 9 shows the percent gain in the overall run time of the software.
Figure 9 Software efficiency percent gain in different versions
User interface design and development
5.1 Introduction
Having an interface that makes it easy for the user to handle the software is an important part of a software package. For example, when calculating curve-fitting data during the calibration process (see Section 3.2), it is important to provide a user interface that allows users to easily see the results of the calibration and to modify them interactively. One part of this work was to design such an easy-to-use interface for the software and to display the calculated calibration data in a proper manner.
5.2 Design and Implementation
Various versions of the software were developed, paying attention to several principles of user design [21]. The software interface is divided into three tabs: (1) Calibration, (2) NOCIt, and (3) MatchIt. The Calibration tab helps the user to input the data files and parameters. The next window of the Calibration tab can be used to input the parameters used for curve fitting. After the calculation, the user can easily navigate using a tree table on the left and can see the data in an easy-to-read graphical form on the right side of the window.
Similarly, for the NOCIt and MatchIt tabs, the user can input the parameters for the calculation and select the corresponding calibration data generated in the previous step. Based on the parameters, the result is displayed in an easy-to-understand graphical form in the lower part of the respective tab.
5.3 Results
Calibration is the tab with which our software starts, as it is usually the first step in the whole process. Figure 10 shows the start of the software, where the user can input the parameters and data files required to generate calibration data.

**Figure 10 First tab of software - Calibration**
After inputting the parameters on the first window of the Calibration tab, the user clicks the “Next” button, which takes the user to the next window of the Calibration tab, shown below in Figure 11.
The user can select parameters and values associated with each of the marked fields in the table and then click the “Calibrate” button to start the calculation. After a few seconds, the calculated calibration data is displayed in tabular form. The user can select any row of the displayed tree table, which will display the corresponding graph on the right.
Once the calibration data is generated, the user can either save it or go directly to either of the next two tabs, NOCIt and MatchIt. Figure 12 below shows the NOCIt tab of the software, where the user can select any number of rows, each with the appropriate parameters, to start the NOCIt calculation. The result is shown as a histogram at the bottom of the window.
The MatchIt tab is very similar to the NOCIt tab: the user can select various rows with appropriate parameters and will get the result in the form of a histogram at the bottom of the window.
Figure 13 shows the MatchIt tab without a result.

**Figure 13** MatchIt tab
### Appendix A
| Optimization Technique | Example | Reason and Improvement |
|---|---|---|
| Common sub-expression elimination | `a = b * c + g; d = b * c * e;` becomes `tmp = b * c; a = tmp + g; d = tmp * e;` | Saves several operations |
| Code motion | `for (int x : array) { value += func(3) * x; }` becomes `tmp = func(3); for (int x : array) { value += tmp * x; }` | Saves multiple function calls |
| Use arrays instead of HashMaps | `HashMap<Integer,Double> valueMap = new HashMap<>();` becomes `double[] valueMap = new double[size];` | Saves conversion from primitive data types to the corresponding objects; the overall calculation becomes much more efficient |
| Unrolling loops | `for (int j = 0; j < 2-l; j++) { s = "0" + s; }` becomes `for (int j = 0; j < 2-l; j += 2) {` | Saves multiple integer operations |
| Removal of dead or unreachable code | | Helps decrease memory utilization; saves unnecessary memory loads |
| Constant folding | `x = 2.0 * x * 4.0` becomes `x = 8 * x` | Saves a floating-point operation |
| Loop-invariant optimization | `while (x > 0) x = x - (y + z);` becomes `t = y + z; while (x > 0) x = x - t;` | Saves multiple integer addition operations |
| Dead-variable elimination | Checked, analyzed and removed unnecessary dead variables | Saves memory operations |
| Changed LinkedHashMap<Integer,Integer> to ArrayList<Integer> | `LinkedHashMap<Integer,Integer> value = new LinkedHashMap<>();` becomes `ArrayList<Integer> value = new ArrayList<>();` | Saves conversion from primitive data types to the corresponding objects; the overall calculation becomes much more efficient |
| Changed ArrayLists to Arrays | `ArrayList<Integer> value = new ArrayList<>();` becomes `int[] value = new int[size];` | Saves conversion from primitive data types to the corresponding objects; the overall calculation becomes much more efficient |
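To make two of the table's techniques concrete, the following self-contained Java example (with invented variable names and a made-up helper function) shows common sub-expression elimination and code motion applied to the same computation:

```java
// Illustrative before/after for two of the techniques in the table above.
public class OptimizationExamples {

    // Before: b * c is computed twice and func(3) is called on every iteration.
    static double before(double b, double c, double g, double e, int[] array) {
        double a = b * c + g;
        double d = b * c * e;
        double value = a + d;
        for (int x : array) {
            value += func(3) * x;
        }
        return value;
    }

    // After: common sub-expression elimination and code motion applied.
    static double after(double b, double c, double g, double e, int[] array) {
        double bc = b * c;          // common sub-expression computed once
        double a = bc + g;
        double d = bc * e;
        double value = a + d;
        double f3 = func(3);        // loop-invariant call hoisted out of the loop
        for (int x : array) {
            value += f3 * x;
        }
        return value;
    }

    static double func(int n) { return n * 1.5; }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(before(2, 3, 1, 4, data) + " == " + after(2, 3, 1, 4, data));
    }
}
```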
REFERENCES
2. SWGDAM Interpretation Guidelines for Autosomal STR Typing
Enabling Connectors in Hierarchical Component Models
Julien Bigot, Christian Pérez
To cite this version:
HAL Id: ensl-00456961
https://ens-lyon.hal.science/ensl-00456961
Preprint submitted on 16 Feb 2010
Enabling Connectors in Hierarchical Component Models
Julien Bigot, Christian Pérez
February 2010
Research Report N° RRLIP2010-9
Abstract
The continual growth of computing and storage capabilities enables numerical applications to integrate more and more phenomena in their computations, at the price of increased complexity. Hierarchical component models appear as an interesting approach to handle such complexity. However, defining and implementing efficient interactions between hierarchical components is a difficult task, especially in the case of parallel and distributed applications. Connectors from Architecture Description Languages (ADLs) are a promising solution to this problem. However, they have only been introduced in flat component models.
This paper describes HLCM, a model supporting both connectors and component hierarchy. This is achieved by describing the potential interactions of components using the new concept of open connections. Complex interactions such as data sharing and parallel interactions are successfully supported by HLCM.
An implementation, based on model transformation and on CCM, illustrates its feasibility and benefits.
Keywords: Software Components, Connectors, Hierarchy, Parallel/Distributed Computing, Model-Driven Engineering
1 Introduction
Component-based software engineering [14] is an interesting approach to simplify the development of complex applications such as scientific simulations as it improves code modularity and re-use as well as a better identification of code interactions and dependencies. In this paradigm, pieces of code are embedded into a component whose interactions with the environment are identified by a set of ports specifying both the services used and offered. Component-based applications are described by an assembly of component instances interconnected through their ports.
Scientific applications usually require a large amount of computing power and/or storage, typically delivered by complex hardware architectures such as parallel and distributed infrastructures. They therefore rely on algorithms expressed with parallel constructs. Component models that only provide standard use/provide interactions are not satisfactory, as they imply a strong binding between assemblies and hardware resources. An approach to solve this problem is to describe these interactions at a higher level of abstraction, such as parallel-to-parallel component interactions, and to let their implementation be chosen once the resources are known.
Supporting new kinds of interactions in component models that have not been designed for this purpose is a difficult task. An interesting concept enabling this support is brought by connectors from Architecture Description Languages (ADLs). They provide a generic mechanism to describe interactions between components. Though their introduction in software component models has been studied, it has been limited to flat component models.
This paper aims at studying the possibilities and the benefits of using connectors within hierarchical component models so as to efficiently support interactions between components. The difficult issue is to define a mechanism that lets connectors cross composite definitions. This paper proposes a generic hierarchical component model, named HLCM, which supports connectors. It relies on the concept of open connections to specify interactions amongst components. Moreover, HLCM provides bundle ports and connection transformers to enable complex interactions. An implementation, restricted to static applications, shows the feasibility of the model.
The remainder of this paper is organized as follows. Section 2 deals with the context while Section 3 presents HLCM. Examples using this model are described and discussed in Section 4. A proof-of-concept implementation is presented in Section 5. Section 6 draws some conclusions and presents some perspectives.
2 Context
This section presents an overview of related work on component models with support for hierarchy, HPC dedicated features and connectors. It discusses the advantages and limitations of each approach by focusing on two motivating examples: parallel code coupling through shared memory or (parallel) method calls. Finally, it studies the problems arising when combining hierarchy and connectors in a unique model.
**Hierarchy in component models.** Several component models support component hierarchy, such as Fractal [9] and SCA [12]. They support the concept of composite: a component whose implementation is an assembly of component instances interacting together. Ports exposed by composites are implemented by these internal instances. In SCA, this is achieved by means of promotion: ports of composites are defined as aliases of compatible ports of internal instances. In Fractal, the concept of component membrane, offering two views, is used for this purpose. The first view describes the set of ports exposed by the composite while the second one is connected to its internal instances to provide its implementation.
Hierarchy is required to use components at multiple levels of granularity. For example, it enables the description of a parallel component as an assembly of sequential components whose communications are handled by the model and whose placement can be handled by a dedicated mechanism. The efficient mapping of logical interactions onto physical ones is however highly dependent on
the placement of components on hardware resources. Therefore, such a mapping should not be embedded into an application assembly by application developer.
**Dedicated HPC interactions.** To deal with high performance computing, some interactions have thus been proposed as extensions to component models. $M \times N$ method calls from, to and between parallel (SPMD) components are for example available in CCA [2], GCM [5] (based on Fractal), and GRIDCCM [13] (an extension to CCM). Data sharing between components has also been proposed as an extension to CCA and CCM [3], and MPI-like collective communications as an extension to CCM [6]. Finally, some interactions are supported as part of more generic extensions such as the master/workers paradigm [8], and more generally (parallel) algorithmic skeletons [1].
The existence of these extensions demonstrates a clear need for HPC dedicated interactions, whose number is not known in advance. Moreover, their implementation is complex and typically results in incompatible component models. This is because component models were not designed to support additional types of interactions. Models based on Fractal, such as GCM, can partially support this by intercepting connections in the membrane to modify their behavior. This is however limited to the local mapping of new interactions onto existing ones, preventing optimized implementations that rely on a global knowledge of the participants in the connection.
**Component models with connectors.** The concept of connector originates from ADLs. Connectors are first class entities similarly to components used to describe their interactions [11]. They have already been introduced in component models, for example in the SOFA component model [4] or in [10]. In these models, connectors contain roles (or plugs) fulfilled by ports of component instances to form a connection. Unlike components, connectors are intrinsically generic and their implementation can vary in function of the quantity, type and locality of the ports taking part in the connection.
Connectors make it possible to efficiently support complex interactions such as $M \times N$ method calls as there is a global knowledge of the participants when generating their implementation. Connectors have however only been introduced in flat component models until now. As explained before, hierarchy is a strong requirement to support multiple levels of granularity. A model supporting both features would be valuable.
**Analysis.** In order to understand the implications of the interactions between hierarchy and connectors, let us study the implementation of two motivating examples in a hypothetical model with both features. As discussed earlier, the parallel components would be composites containing instances of sequential components. Interactions between those instances could easily be supported by connectors providing MPI-like interactions, for example. A first example is the connection of two parallel components with shared memory. It might require access to the shared memory by all the internal instances. The composite can either expose the memory sharing ports of its internal instances as a set of independent ports or group them in an internal connection. While the first solution fails to express the fact that the ports are part of a single interaction, the second one prevents interactions with instances outside the composite.
Similarly for the second example, in the case of a connection by method call, the composite can either expose the ports of its internal instances or group them so as to expose a single port making a sequential call. The first solution fails again to express that the ports are part of a single interaction and the second one implies a bottleneck in the case of parallel to parallel connection. Additionally, it should be possible for parallel components to be connected to sequential components. Relying on a distinct connector implementation for each case implies a quadratic number of implementations.
To summarize, a component model supporting both connectors and hierarchy should make it possible for connections to logically cross composite definitions. In addition, it should be possible to define a new type of ports without having to implement too many new connectors.
connector UP {
role user;
role provider;
}
Figure 1: Example of declaration of a connector UP that supports Use/Provide interactions.
component MyComponent exposes {
UP { user PT; } aC;
} ...
Figure 2: Example of a component exposing an open connection.
component MyHlcmPrimitive exposes {
UP { provider CcmFacet<A>; } a;
UP { user CcmReceptacle<B>; } b;
} ccm ("MyCcmComponent")
Figure 3: Example of a Ccm component described in CORBA IDL3 (left) and its corresponding Hlcm component (right). CcmFacet and CcmReceptacle are two generic port types natively supported in Hlcm/CCM.
3 HLCM: a High Level Component Model
This section introduces HLCM, a generic component model with support for hierarchy and connectors. HLCM relies on an underlying execution model for the definition of some of its concepts (i.e. primitive components and connectors). For example HLCM/CCM uses CCM as its underlying execution model; its implementation is described in Section 5. First, the structural elements of HLCM are described and then its behavior is illustrated with an algorithm mapping an HLCM application to a primitive one.
3.1 Structural Elements of HLCM
The basis of HLCM is a standard hierarchical component model. Components expose a set of named interaction points and have an implementation. This implementation can be either primitive or composite. Primitive implementations are provided by the underlying execution model. Composite implementations are provided by an assembly of component instances and connections. HLCM supports genericity [7]; examples make use of a notation similar to JAVA generics.
As in other component models supporting connectors, interactions between components are described by connections that are instances of connectors. Connectors are first-class entities that define a type of interaction. They contain a set of roles. Roles are named and have a multiplicity: either single (default) or unbounded. An example of a connector is shown in Figure 1. Roles in connections are filled by ports; roles of unbounded multiplicity can be filled by multiple ports.
A specificity of HLCM is that the interaction points of components are not ports but connections. These connections have some of their roles internally fulfilled by the implementation of the component but not necessarily all. Some roles will be fulfilled externally when connecting the component as will be explained hereafter. Connections allowing external role fulfillment are called open connections. An example of component exposing an open connection is shown in Figure 2. Ports can be either primitive ports or bundle ports. Bundle ports contain a set of named open connections.
**Primitive Components.** The definition of primitive component implementations depends on the targeted model. In HLCM/CCM, primitive components are implemented by CCM components. CCM components expose ports whereas their HLCM/CCM counterparts expose connections. Ports of CCM components are thus wrapped in HLCM connections of the same name as shown in Figure 3.
**Composite Components.** A composite component is described by an assembly of component instances and connections. A composite exposes a connection by making an alias to one of its
component MyComposite exposes {
UP { provider CcmFacet<A>; } a;
} composite {
// Exposition of c1.a as a
A: this.a = c1.a;
// Two internal component instances
MyHlcmPrimitive c1;
MyHlcmPrimitive c2;
// Interaction between c1.b and c2.a
X: UP cnab;
Y: cnab |= c1.b; cnab |= c2.a;
...
}
Figure 4: Example of a composite implementation containing two internal component instances c1 and c2. It exposes the connection c1.a as a using the alias operator =. It lets c1 and c2 interact through a connection cnab using the merge operator |=.
generator UPLog<interface UI, interface PI> with {
UI super PI; // constraints
} implements UP {
provider CcmFacet<PI>;
user CcmReceptacle<UI>;
} {
LoggerProxy<UI> proxy;
UP up1; up1.user += this.user; up1 |= proxy.clientSide;
UP up2; up2.provider += this.provider; up2 |= proxy.serverSide;
}
Figure 5: Example of a composite generator inserting a proxy component, which constrains the user interface type to be a parent of the provider interface type. The += operator fulfills a role with a port. For example, the role user of the connection up1 is fulfilled with the port user of the considered connection.
**Generators.** Generators are implementations of connectors. A connector can be implemented by several generators. A generator implements a specialization of a connector, that is to say a connector with constraints on the number, the type and the locality of the ports fulfilling its roles.
There are two kinds of generators: primitive and composite. Primitive generators specify the interactions directly supported by the underlying execution model. Composite generators generate an assembly in which the ports fulfilling the roles of the connector are made part of the connections as illustrated in Figure 5. This assembly can be parametrized by the number, type and locality of the ports fulfilling the roles of the connector.
**Connection Transformers.** Connection transformers provide a functionality similar to inheritance in object-oriented models. They make it possible to use a connection of a given type where another type was expected. The definition of a connection transformer is an assembly that uses the available connection and exposes a connection of the expected type instead, as illustrated in Figure 6.
3.2 Behavior of HLCM Elements
The behavior of an HLCM application is defined through an equivalence with a primitive application, i.e. an application described in the underlying execution model. This means that it is fully defined by the combination of the definition of the behavior of applications in the underlying execution model and a mapping algorithm. Let us now further discuss this mapping algorithm.
An HLCM application is defined by the set of HLCM elements it contains (components, connectors, generators, port types and connection transformers) and by the component used as the root of the application. To map it into a primitive application, it should be transformed into an assembly which only contains primitive components, primitive ports, and primitive connections.
Algorithm 1 Transforming an abstract HLCM application into a concrete one.
Input:
• An HLCM application
Output:
• A primitive application or an error
while composite component instances or unimplemented connections remain do
Replace the composite component instances by the content of their assembly and merge their exposed connections with those they are bound to;
Choose a set of connection transformers and generators whose constraints can be fulfilled to implement the connections, or roll back, or return an error;
Replace the composite connections by the content of their assembly;
end while
Such a transformation can be achieved by applying Algorithm 1 that replaces composite instances by the content of their assembly and chooses the generators to use for the implementation of connections. It is non-deterministic as it does not specify how the choice of connection implementations is made. If no valid choice can be made at a given point, either a rollback is done or an error is returned. Any assembly obtained by applying this algorithm is defined as providing a valid behavior of the application.
The difficult part when implementing this algorithm lies in the choice of connection implementations. The identification of the valid combinations of connection transformers and generators that might be used to implement a given connection is a complex problem. As the number of generators and connection transformers applicable to a given connector is expected to remain rather small, a naive implementation trying all combinations nevertheless seems acceptable.
It must be noted, however, that the choice of an implementation is not a self-contained problem. Locality constraints introduce dependencies between these choices. For example, in a situation where two component instances are connected by two distinct connections, the locality constraints of both connections must be compatible. In the general case, this is expected to be NP-hard, as are most planning problems.
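To fix ideas, here is a minimal Java sketch of the loop of Algorithm 1. All types (HlcmAssembly, Connection, Implementation) are hypothetical placeholders and do not correspond to the classes of the actual implementation described in Section 5; the rollback branch is omitted and an error is raised instead.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical placeholder types; they do not correspond to the real HLCM/CCM classes.
interface Implementation {
    boolean constraintsHold(Connection cnx);          // arity, port types, locality
}

interface Connection {
    List<Implementation> candidateImplementations();  // generators and transformer chains
}

interface HlcmAssembly {
    boolean hasCompositeInstances();
    boolean hasUnimplementedConnections();
    void inlineCompositeInstances();                   // replace composites by the content of their assembly
    List<Connection> unimplementedConnections();       // snapshot of the connections still to implement
    void implement(Connection cnx, Implementation impl);
    void expandCompositeConnections();                 // replace composite connections by their assembly
}

final class Algorithm1 {
    /** Flattens composites and implements connections until only primitive elements remain. */
    static HlcmAssembly toPrimitive(HlcmAssembly app) {
        while (app.hasCompositeInstances() || app.hasUnimplementedConnections()) {
            app.inlineCompositeInstances();
            for (Connection cnx : app.unimplementedConnections()) {
                Implementation impl = choose(cnx).orElseThrow(
                        () -> new IllegalStateException("no valid implementation for " + cnx));
                app.implement(cnx, impl);
            }
            app.expandCompositeConnections();
        }
        return app;
    }

    /** Naive choice: try every candidate and keep the first whose constraints hold. */
    private static Optional<Implementation> choose(Connection cnx) {
        return cnx.candidateImplementations().stream()
                .filter(c -> c.constraintsHold(cnx))
                .findFirst();
    }
}
```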
4 Evaluation of HLCM to Support HPC Interactions
This section evaluates the use of HLCM/CCM to implement the two motivating examples introduced in Section 2: interactions through shared memory and (parallel) method calls between parallel components.
```
transformer PushPull
supports UP { user CcmReceptacle<DataPush>; } input
as UP { provider CcmFacet<DataPull>; } output
{
CacheComponent c;
UP cnx;
cnx |= input;
cnx |= c.pushSide;
output = c.pullSide;
}
```
Figure 6: Example of connection transformer describing how a UP connection (input) whose user role is filled with a CcmReceptacle<DataPush> port can be seen as a UP connection (output) whose provider role is filled with a CcmFacet<DataPull> port. It does so by inserting a component instance acting as a cache.
connector SharedMem {
role access[];
}
Figure 7: The SharedMem connector declaration with an unbounded role access.
interface CDataAccess {
CPointer get_data();
long get_size();
...
}
Figure 8: IDL declaration of the CDataAccess interface.
generator LocalSharedMem<Integer N> implements SharedMem {
for (Integer i in [1..N]) { access[i] CcmFacet<CDataAccess>; }
} with {
// locality constraints
for (Integer i in [1..N-1]) { access[i].process == access[i+1].process; }
} composite {
LocalMemoryStore store;
for (Integer i in [1..N]) {
UP cnx[i]; cnx[i].user += access[i]; cnx[i] |= store.access;
}
}
Figure 9: Definition of the LocalSharedMem generator supporting local SharedMem connections. Its implementation relies on an instance of a LocalMemoryStore component that embeds the data accessed by all components.
Shared memory interaction. In order to support memory sharing inspired by [3], let us define a SharedMem connector whose declaration is given in Figure 7. It contains a single role access of unbounded multiplicity.
From the point of view of primitive components, access to a SharedMem connection is done through a CDataAccess interface whose IDL description is given in Figure 8. CPointer is a valuetype holding a native reference to the actual data. It can therefore only be used between instances located in the same process.
Figure 9 presents a generator for SharedMem connections based on a local centralized implementation. The N ports fulfilling its access role are of type CcmFacet<CDataAccess>.
Figure 10 describes another generator for SharedMem connections based on a distributed implementation. For each accessor, it instantiates a local DsmNodeComponent component which is interconnected with all other (distributed) DsmNodeComponent instances. Each DsmNodeComponent is constrained to be colocated in the same process as its associated accessor because of the use of the LocalUP connector.
generator DistributedSharedMem<Integer N> implements SharedMem {
for (Integer i in [1..N]) { access[i] CcmFacet<CDataAccess>; }
} composite {
for (Integer i in [1..N]) {
DsmNodeComponent node[i];
LocalUP cnx[i]; cnx[i].user += access[i]; cnx[i] |= node[i].access;
}
for (Integer i in [1..N]) { for (Integer j in [1..N]) {
UP in[i,j]; in[i,j] |= node[i].from; in[i,j] |= node[j].to;
} }
}
Figure 10: Definition of the DistributedSharedMem generator supporting SharedMem amongst distributed component instances. It is made of a set of DsmNodeComponent instances, one for each accessor. Each instances is connected to all of them through two dedicated UP connections, one in each direction.
Figure 11: A SharedMem connection with four accessors (A1 to A4) implemented by the DistributedSharedMem generator.
Figure 12: A parallel UP connection implemented by the MxN generator. A proxy instance is inserted for each participant. Each proxy instance is connected to all those of the opposite side.
bundle ParallelCcmFacet<Integer N, interface I> {
for (Integer i in [1..N]) { UP { provider CcmFacet<I>; } part[i]; }
}
Figure 13: Definition of the ParallelCcmFacet bundle port type. It contains N UP connections called part whose provider role is fulfilled by a CcmFacet<I> port.
An example of a connection implemented by this generator is presented in Figure 11.
Parallel method call interaction. Unlike shared memory, the support for parallel method calls [13] does not require the introduction of a new connector: the UP connector already supports method calls. It only requires the support of new types of ports fulfilling its roles: the ParallelCcmFacet, whose definition is presented in Figure 13, and the symmetrical ParallelCcmReceptacle.
A MxN generator needs to implement UP connections whose roles are fulfilled by these two ports. It is quite similar to the DistributedSharedMem generator. An example of connection implemented by the MxN generator is presented in Figure 12. This enables an efficient support of M x N connections with data redistribution on the user side, the provider side or even both.
The support for UP connections with only one of the roles filled by a parallel port is implemented thanks to two transformers. The Scatter transformer, whose definition is presented in Figure 14, supports a connection whose user role is filled by a ParallelCcmReceptacle as if it were filled by a sequential CcmReceptacle. It contains a component in charge of distributing the data; this component is connected to all the part sub-connections of the bundle and exposes an open connection with a sequential CcmReceptacle that is used as the result of the transformer. A Gather transformer supports the symmetrical case.
Discussion. As can be seen with these two examples, HLCM easily and efficiently supports the implementation of both shared memory and parallel method calls as connectors without having to modify either the model or its implementation. Efficiency is obtained because the generated concrete applications are the same as those obtained with the dedicated shared-data and parallel extensions.
transformer Scatter<Integer N>
supports UP { user ParallelCcmReceptacle<N, MatrixPart>; } input
as UP { user CcmReceptacle<Matrix>; } output
{
Distributor<N> dist;
for ( Integer i in [ 1 .. N ] )
{ UP cnx[i]; cnx[i] |= input.user.part[i]; cnx[i] |= dist.in[i]; }
output = dist.out;
}
Figure 14: Definition of the Scatter transformer.
In addition, the concept of open connections makes it possible for connections to logically cross the definition of composites, thus enabling support for hierarchy.
Another interesting point is the connection transformers, which make it possible for parallel ports to be used as their sequential counterparts. This enables the support of future implementations such as a load-balanced facet designed to support the master/worker paradigm, for example.
A current limitation of HLCM/CCM is the lack of genericity in CCM itself. As a result, primitive components and generators using them are limited to specific data types. Using a generic model as backend would solve this.
It would also be interesting for components to be allowed to have multiple implementations, similarly to generators for connectors. This would however make the transformation algorithm more complex, as the choices of implementations for components and connectors would be completely dependent on each other.
5 A Proof-of-Concept Implementation of HLCM
We have developed a proof-of-concept implementation of HLCM/CCM based on a Model Driven Engineering (MDE) approach. It transforms an HLCM/CCM application into a plain CCM assembly in three steps. First the HLCM/CCM files are parsed to create a model instance, then this model instance is transformed according to Algorithm 1, and finally the result of this transformation is dumped into a CCM CAD file. This implementation relies on the tools provided as part of the Eclipse Modeling Framework (EMF).
A meta-model of HLCM/CCM has been written in the Ecore language. It contains about 100 meta-classes amongst which about 10 are specific to CCM. A parser creating instances of this model from HLCM/CCM files has been implemented based on the Xtext framework.
The implementation of Algorithm 1 required around 1200 lines of JAVA. It works on instances of a second model describing instantiated HLCM/CCM assemblies, which adds 15 Ecore classes to the first model. After the transformation, the assembly contains only primitive component instances and connections and can be dumped to its CCM CAD counterpart in 100 lines of JAVA.
This proof of concept implementation has been successfully used to transform the two examples described in Section 4. A typical transformation takes less than five seconds on a standard laptop amongst which more than three seconds are spent in initialization and parsing.
The choice of connection implementations is still a random choice amongst the set of generators requiring the minimal number of connection transformers. Smarter choices would require performance information on components and connectors as well as heuristics to take them into account.
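As an illustration of this selection policy only, the following Java sketch draws uniformly at random among the candidate implementations that require the fewest connection transformers. The Candidate interface and the class name are hypothetical placeholders, not the types used in the prototype.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Hypothetical placeholder for one candidate (a generator plus a chain of connection transformers).
interface Candidate {
    int transformerCount();   // number of connection transformers this candidate requires
}

final class GeneratorChoice {
    private static final Random RNG = new Random();

    /** Uniform random pick among the candidates needing the fewest connection transformers. */
    static Candidate pick(List<Candidate> candidates) {
        int min = candidates.stream()
                .mapToInt(Candidate::transformerCount)
                .min()
                .orElseThrow(() -> new IllegalArgumentException("no candidate available"));
        List<Candidate> best = candidates.stream()
                .filter(c -> c.transformerCount() == min)
                .collect(Collectors.toList());
        return best.get(RNG.nextInt(best.size()));
    }
}
```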
Locality constraints expressible in generators are currently limited to process collocation. This simplifies the placement problem since such constraints cannot lead to any contradictions. Moreover, they can be expressed in the output CCM CAD file. Fully supporting locality constraints would require a resource model as well as the use of a constraint solver in the transformation algorithm.
Another limitation of this implementation is that it is restricted to static applications. The choice of a compilation-based approach prevents the support of dynamic modifications of the assembly. Such support would require a deeper integration between the transformation algorithm and the target component model.
6 Conclusion
Component models appear very interesting for complex numerical scientific applications targeted to be run on complex parallel and distributed infrastructures. While advanced component models are proposed to ease the description of applications, the implementation of such models as well as the possibility to optimize an application to a particular infrastructure are still difficult tasks.
This paper has studied the feasibility and the benefit of using connectors in hierarchical component models. It first shows that it is feasible based on the definition of HLCM as well as a proof-of-concept implementation based on model transformation. Moreover, it shows that simple and efficient implementations of parallel interactions (shared data and parallel method calls) can be defined.
There are two main perspectives. First, though HLCM supports dynamicity, an efficient implementation supporting it remains to be done. Second, the optimization of an HLCM application with respect to available resources can be improved, with the support of both multiple component implementations and performance information on primitive components and connections.
References
STRUCTURAL COMPLEXITY ATTRIBUTE CLASSIFICATION FRAMEWORK (SCACF) FOR SASSY CASCADING STYLE SHEETS
John Gichuki Ndía¹, Geoffrey Muchiri Muketha¹ and Kelvin Kabeti Omieno²
¹School of Computing and Information Technology, Murang’a University of Technology, Kenya
²School of Computing and Information Technology, Kaimosi Friends University College, Kenya
ABSTRACT
Several researchers have proposed various classes of software attributes to guide the derivation of metrics for software products. These existing classifications have targeted traditional software paradigms such as procedural and object-oriented software. Sassy Cascading Style Sheets (SCSS) has unique features since it combines Cascading Style Sheets (CSS) features with traditional software features such as variables, functions and control flows. Due to this uniqueness, there arises a need to develop a new classification scheme that can be effectively used to classify all the possible structural attributes of SCSS. The aim of this paper, therefore, is to develop and validate a comprehensive software complexity attribute classification framework for SCSS. The new framework was validated through an online expert opinion survey in which thirteen SCSS experts were involved. Results show that the proposed framework is complete and effective in guiding metrics researchers in defining new metrics for SCSS.
KEYWORDS
Cascading Style Sheets, SCSS Complexity classification framework, Software attributes, Structural complexity
1. INTRODUCTION
The practice of defining software metrics has continued over the years for different kinds of software domains such as procedural, object-oriented, and web-based domains, among others. These metrics are based on software attributes; for example, the popular McCabe Cyclomatic Complexity metric is based on the control flow attribute of software [1], while some of the Chidamber and Kemerer metrics such as the Depth of Inheritance Tree (DIT) and Number of Children (NOC) are based on the inheritance attribute [2]. Therefore, it is prudent to first identify the attributes to be measured for a given software product before attempting to derive new metrics. A software attribute is defined as a feature or property of a product [3].
Fenton and Bieman [5], in an effort to create an industry standard for the process of defining metrics, identified three major stages: identification of the entity to measure (e.g. project, product or process), identification of the entity's attributes that need to be measured, and then derivation of metrics for each of the attributes. Several researchers have proposed classification schemes for software attributes to aid metrics definition [4]-[11].
Some of the existing software attribute classification schemes provide a general treatment of complexity [4]-[6] while others focus more on structural complexity [7]-[8]. While Daudi and Kadir [7] classified complexity attributes for service-oriented architecture (SOA) and Muketha [8] classified complexity attributes for business process models, there has been little effort to classify structural complexity attributes for the stylesheets' domain.
Sassy Cascading Style Sheets (SCSS) is an extension of Cascading Style Sheets (CSS) and it combines CSS features and traditional software features such as the use of variables, mixins, functions, and control flows [12]. This uniqueness of SCSS software means that the existing classification schemes cannot be used to sufficiently identify the structural attributes for SCSS.
The methodology employed in this study was to first identify existing classification schemes and their limitations, and then extend one of them to come up with a classification scheme for SCSS. Muketha's classification [8] was adopted for the extension as it is the most closely related to this study. An online expert opinion survey was conducted to collect data, and the data were analyzed using descriptive statistics to validate the proposed framework.
The rest of this paper is structured as follows. Section two presents the existing classification schemes, section three presents structural complexity, section four presents the new classification framework for SCSS structural complexity, section five presents validation results, and section six presents the conclusions and future work.
2. Existing Classification Schemes
Several studies have attempted to classify software complexity attributes and are therefore closely related to the work presented in this paper. Fenton and Pfleeger [5] and Fenton and Bieman [4] proposed three categories for deriving the attributes to measure, namely process, product, and resources. The product category, which is the focus of this study, further classifies attributes as internal or external. Internal attributes are those that can be measured directly, such as the size of code, while external attributes are measured indirectly, such as reliability and maintainability. The limitation of this classification is that the modularity of attributes such as control flow, data flow, cohesion, and coupling is not known.
In another study [5], four ways of categorizing software attributes were identified: product, process, people, and value to the customer. In this classification scheme, structural complexity falls under the product category. Structural complexity is further divided into control flow complexity, data complexity, and size attributes. This classification scheme shares the limitation of the Fenton and Bieman classification, in that the level of modularity of the attributes is not provided, meaning we cannot tell whether all the possible attributes of software are captured.
Daud and Kadir [7] classified software structural attributes into static and dynamic attributes. These authors identified three structural attributes, namely coupling, cohesion and complexity, which fall under both the static and dynamic categories. These attributes are the most popular in measuring service-oriented architecture (SOA). The limitation of this classification is that it identified the attributes from the literature and not from the structural properties of SOA, meaning that the attributes identified may not fully represent SOA structural complexity.
Mens [10] identified four major dimensions of software complexity: theoretical complexity, complexity of use, organizational complexity and structural complexity. Theoretical complexity was further divided into computational and algorithmic complexity, complexity of use was divided into functional and usability complexity, while structural complexity was divided into module level and system level. This classification scheme does not show what attributes can be derived from the module level and the system level, hence it is not comprehensive.
Henderson-Sellers [11] categorized software complexity into computational complexity, psychological complexity, and representational complexity. The author further divided psychological complexity into structural complexity, programmer characteristics and problem complexity. Structural complexity was further divided into intra and inter-module categories. The intra-module category is further divided into size, control flow, and cohesion attributes while the inter-module category is specialized into the coupling attribute. This classification scheme is limited in that it overlooks some new dimensions of structural complexity evident in SCSS software.
The structural complexity part of the Henderson-Sellers classification scheme has been extended by introducing a hybrid category to the existing inter- and intra-module categories [8]. The hybrid attribute category combines features from intra-module and inter-module attributes. Muketha's work is limited in that it overlooks some new dimensions of structural complexity introduced in SCSS software. However, this study extended Muketha's framework because it is the most recent and comprehensive in the context of structural complexity. Figure 1 illustrates the classification framework. The intra-module attributes focus on an individual process, which is equivalent to a module; inter-module attributes focus on the interaction of two process modules; and hybrid attributes combine the features of both intra-module and inter-module attributes.
Figure 1. Structural complexity attributes classification [8]
3. STRUCTURAL COMPLEXITY
Structural complexity is defined as how the program elements are organized and interact within the software system [12], [13]. It is concerned with the measurement of internal attributes and is assessed by the difficulty of tasks such as writing, modifying and testing code [10], [14]. The identification of the right attributes for a given software product can help in the evaluation and improvement of the product [15].
3.1. Structural Complexity Properties for Software
Many authors consider size, length, coupling, and cohesion as part of structural complexity [8], [11], [16]. For instance, the lines of code (LOC) metric, also called the physical lines of code, has been used as a size measure, and to some extent, as a complexity measure. The related logical lines of code (LLOC) metric, has been found to have higher accuracy when compared to LOC because it eliminates comment lines, auto-generated code lines, header files, ineffective code lines, compiler directives, labels, and empty case statements [16]. For example, Adewumi et al.
[17] proposed size in terms of lines of rules for cascading style sheets while Misra and Cafer [18] considered size in terms of lines of JavaScript code on condition that the only lines to be factored were those that consisted of variable(s) or operators.
The concept of inheritance has been recognized as one of the most important features of software reuse. In object-oriented languages, inheritance supports class hierarchy design and captures the is-a relationship between a class and sub-class [19]. Inheritance has been studied in object-oriented languages extensively [19]-[22]. Though inheritance supports reuse, it can increase complexity if not used in the proper range [21]. Style sheets provide a unique way of supporting inheritance because there are no classes and sub-classes as provided for in the object-oriented domain.
Nesting complexity has also been studied as an important property. Nesting reflects the level of nesting within constructs or control structures [23]. Constructs such as if, case, for, while, and do-until can be nested. A statement at the innermost level is harder to understand, meaning that it contributes more to complexity than other statements [24]. In SCSS, nesting occurs with selectors; the more deeply the selectors are nested, the more complex the SCSS code becomes [25].
Coupling has been defined as the measure of the strength of association established by a connection from one module to another [26]. It has been argued that the stronger the coupling between modules, the more difficult these modules are to understand, change and correct, resulting in more complex software. Coupling has been studied in the domain of procedural programming [26] and object-oriented programming [2], [27], [28]. While coupling as a complexity measure has been studied in procedural and object-oriented languages it has not been addressed in the stylesheets’ domain.
The aspect of cohesion is discussed extensively in the procedural and object-oriented domain. Cohesion is defined as the ‘single-mindedness’ or ‘relatedness’ of a module component [29]. When a module is highly cohesive, it means, all the defined elements in a module perform a single task. Therefore, it’s the goal of software designers to make a program as cohesive as possible.
The Complexity of code can be expressed through control structures, and therefore, a program which implements control structures is regarded as more complex in comparison to the program without control structures [24]. The complexity of a program is directly proportional to the cognitive weights of Basic Control Structures [18]. For example, iterative control structures like for loop, while, and do…while are more complex than decision making control structures such as if…then…else.
3.2. Structural Properties for SCSS
SCSS is a web-based language that is implemented in the Syntactically Awesome Style Sheets (SASS) pre-processor. Its purpose is to style web documents written in Hypertext Markup Language (HTML) and Extensible Markup Language (XML) [30]. SCSS combines the characteristics of CSS, such as the use of selectors, rule blocks, and declarations, with those of traditional software such as inheritance, nesting, and coupling [30]. The combination of these features enables front-end web developers to create more efficient and maintainable code.
SCSS provides a unique way of supporting inheritance through selector inheritance. Selectors are extended in an SCSS rule block by use of the @extend directive. This means that all the attributes of the inherited selector are implemented in the rule block where the selector has been extended.
Figure 2 contains code that illustrates the use of selector inheritance. The code has two rule blocks: the first has a selector named .alarm, which is inherited by the .alarm-positive selector. This means that the .alarm-positive selector will have five attributes or declarations, i.e. padding, font size, text align, color and background.
```
.alarm{
padding: 15px;
font-size: 1.2em;
text-align: center;
color: $color-accent;
}
.alarm-positive {
@extend .alarm;
background: #9c3;
}
```
Figure 2. Selector inheritance
SCSS allows nesting of rules inside each other instead of repeating selectors in separate declarations [31]. Figure 3 illustrates nesting by placing the .message rule block inside the .infobox rule block.
```
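// Without nesting: .infobox and .message written as two separate rule blocks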
.infobox {
width: 200px;
}
.message {
border: 1px solid red;
}
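// The same rules with SCSS nesting: .message is nested inside .infobox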
.infobox {
width: 200px;
.message {
border: 1px solid red;
}
}
```
Figure 3. Nesting of rules
SCSS consists of rule blocks; a rule block consists of properties and values which together form a declaration or an attribute. The more components are defined in a CSS rule block, the more complex it is [17]. SCSS has several components that contribute to rule block complexity, for example attributes or declarations, operators, variables, function calls, control directives, the include directive and the extend directive.
In SCSS, coupling is manifested when declared properties such as mixins and variables are used in several places in the code; a change to such a property then affects multiple rule blocks at once, possibly without the developer noticing which elements are affected by the change.
In stylesheets, cohesion is viewed as rule blocks having a single attribute [17]. The more rule blocks with a single attribute an SCSS file has, the lower its complexity, thus increasing the maintainability of the code.
4. A New Classification Framework for SCSS Structural Complexity
The proposed classification framework extends the work of Muketha [8] with the incorporation of new attributes found in SCSS.
4.1. Architecture of the Proposed Framework
In the proposed framework, intra and inter-module, as well as hybrid attributes, have been redefined and re-interpreted, and a new category called extra-module attribute added.
In the context of SCSS, the intra-module focuses on attributes that can be derived from a single rule-block which is equivalent to a module. Two main categories were identified, size and control flow complexity. In the size category, the features that can be used to determine SCSS code size are identified namely the number of declarations or attributes, number of operators and number of rule blocks. In order to determine the control-flow complexity of SCSS, control directives i.e. @for, @if, @each, etc. must be identified in the code.
Inter-module in SCSS focuses on the interaction of the various rule-blocks. In the proposed framework, the inter-module has been divided into inheritance complexity and nesting complexity categories. Inheritance complexity in SCSS happens when the styles or values are shared by using extend directive, this is known as selector inheritance. SCSS nesting complexity occurs when the rules are put inside each other.
The hybrid attribute combines features of at least two categories of structural complexity, for example intra-module and inter-module [8]. In SCSS the hybrid attribute has one category, named association complexity. This kind of complexity arises when features from different categories of SCSS structural complexity are implemented in a single rule block. For example, the sharing of variables and mixins by rule blocks leads to information flow complexity, while the use of the extend directive in a rule block leads to inheritance complexity. Information flow complexity falls in the extra-module attribute category while inheritance complexity falls under the inter-module category. The combination of these two categories leads to a hybrid attribute. In the framework, the example given under association complexity has @extend (derived from the inter-module category) and @include (derived from the extra-module category).
Extra-module attribute focuses on the interaction of modules via an external module. In SCSS the Extra-module attribute focuses on rule-blocks interacting with mixins and/or global variables. These mixins and global variables are defined outside of SCSS rule blocks. When several rule blocks are sharing the same mixin and global variable, then the rule blocks are deemed to be coupled with each other.
This implies that a change in the values of a mixin and a variable will affect all the rule blocks that are sharing the mixin and global variable. Figure 4 below illustrates the proposed structural complexity attribute classification framework for SCSS.
4.2. Application of the Framework
This section aims at providing an interpretation of the proposed framework through a real-life scenario (Appendix).
The intra-module attribute is the first category of the SCSS structural complexity, and it considers complexity in terms of size and control flow complexity. The size of the SCSS file can be determined based on the number of attributes, number of operators or number of rule blocks. For example, to determine the size of the file provided in the Appendix based on the number of rule blocks, count all the rule blocks, where each rule block is recognized by an opening brace ( { ) and a closing brace ( } ). The control flow complexity of SCSS code is determined by the control directives implemented in the code. In the SCSS code provided, the @for directive has been implemented, meaning that the measurement for the control flow complexity can be determined.
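To make the counting procedure above concrete, here is a minimal, assumption-laden Java sketch that approximates two intra-module attributes on raw SCSS text: the number of rule blocks (counted via opening braces, as suggested above) and the number of control directives. The class and method names are illustrative only; a real metric tool would use a proper SCSS parser rather than string matching.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative names only; a real metric tool would rely on an actual SCSS parser.
final class IntraModuleCounts {
    private static final Pattern CONTROL_DIRECTIVE = Pattern.compile("@(if|for|each|while)\\b");

    /** Size attribute: number of rule blocks, approximated by counting opening braces. */
    static long ruleBlocks(String scss) {
        return scss.chars().filter(c -> c == '{').count();
    }

    /** Control-flow attribute: number of control directives (@if, @for, @each, @while). */
    static long controlDirectives(String scss) {
        Matcher m = CONTROL_DIRECTIVE.matcher(scss);
        long count = 0;
        while (m.find()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String scss = "@for $i from 1 through 3 { .col { width: 10px * $i; } }";
        System.out.println(ruleBlocks(scss));        // 2 (the @for block and the .col block)
        System.out.println(controlDirectives(scss)); // 1 (the @for directive)
    }
}
```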
The inter-module attribute category covers inheritance and nesting complexity. Inheritance complexity in SCSS is introduced by the use of the @extend directive. In the file provided, the extend directive has been used in the h2 element selector to inherit the p element selector. Nesting complexity in SCSS considers the nesting of rules. In the code provided, the @media directive has the modal dialog class selector nested inside it.
In the hybrid attribute category, a form of complexity known as association complexity is identified. In the SCSS code provided, this kind of complexity is demonstrated in the p element rule block. To determine the complexity of the p rule block, the number of attributes that fall under the intra-module category is identified. In the same rule block, there is the use of a mixin (PlayfairDisplay-Regular) and a variable (color1), which leads to coupling, meaning that the extra-module category has been used. Lastly, the extend directive has been used in the p rule block, which introduces inheritance complexity under the inter-module category.
The final category, the extra-module category, is illustrated as follows. The information flow complexity that results from coupling through the use of mixins and global variables is demonstrated in the SCSS code provided. The span, h3, and h4 element selectors make use of a mixin named Raleway-Medium, while the h1 and h2 element selectors make use of the color2 variable. This means that if you change the values of the Raleway-Medium mixin you affect the span, h3, and h4 element selectors. Furthermore, if you change the value of the color2 variable you affect the h1 and h2 element selectors.
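As a rough illustration of how this extra-module coupling could be quantified, the following Java sketch counts how many times each mixin is included in an SCSS source using a simple regular expression. The MixinCoupling class and its regex are hypothetical simplifications for illustration, not part of the framework itself.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative names only; robust coupling measurement would need full SCSS parsing.
final class MixinCoupling {
    private static final Pattern INCLUDE = Pattern.compile("@include\\s+([\\w-]+)");

    /** Returns mixin name -> number of @include occurrences in the given SCSS source. */
    static Map<String, Integer> usageCount(String scss) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        Matcher m = INCLUDE.matcher(scss);
        while (m.find()) {
            counts.merge(m.group(1), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String scss = "span { @include Raleway-Medium; }\n"
                    + "h3 { @include Raleway-Medium; }\n"
                    + "h1 { color: $color2; }";
        System.out.println(usageCount(scss)); // {Raleway-Medium=2}
    }
}
```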

5. VALIDATION RESULTS
This section presents the evaluation results obtained from an expert opinion survey. An expert opinion survey technique is used to identify problems, give clarity to issues under study and evaluate products [32].
5.1. Goal of the Study
The goal of the study was to evaluate the relevance and comprehensiveness of the framework from the point of view of SCSS experts.
5.2. Context Definition
SCSS experts were invited from all over the world to participate in the online survey. The Survey Monkey platform was used to host the study questionnaires. A total of 13 experts participated in the survey and were identified through snowball sampling technique.
5.3. Survey Operation
The respondents were provided with the SCSS attributes classification framework, a write-up explaining how to interpret the framework and a questionnaire.
5.4. Reliability of the Research Instrument
To ensure the reliability of the instrument, pretesting was carried out and Cronbach's alpha was used as the measure of reliability. As a rule of thumb, alpha values closer to 1 are considered more internally reliable [33]. As shown in Table 1, relevance achieved a Cronbach's alpha of 0.894 while comprehensiveness achieved a Cronbach's alpha of 0.854. Therefore, the instrument can be considered reliable since its reliability values exceeded the prescribed threshold of 0.7 [34].
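For reference, Cronbach's alpha for a scale of $k$ items is commonly computed as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),$$

where $\sigma_i^2$ is the variance of the scores on item $i$ and $\sigma_X^2$ is the variance of the total scores; values closer to 1 indicate stronger internal consistency.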
<table>
<thead>
<tr>
<th>Scale</th>
<th>Cronbach’s Alpha</th>
</tr>
</thead>
<tbody>
<tr>
<td>Relevance of the Framework</td>
<td>0.894</td>
</tr>
<tr>
<td>Comprehensiveness of the Framework</td>
<td>0.854</td>
</tr>
</tbody>
</table>
5.5. Analysis and Interpretation
Feedback from the respondents was received and thereafter checked for completeness. All questionnaires were found to be completed satisfactorily, and therefore were accepted for data analysis.
5.5.1. Respondents Demographics
The researchers first sought to establish the characteristics of the respondents, and so characteristics such as the level of education, years of industrial experience, level of knowledge for software engineering processes and level of knowledge of SCSS was considered from all respondents.
• **Level of education for respondents**
Findings indicate that 11 (84.6%) of the respondents are bachelor's degree holders while the remaining 2 (15.4%) respondents have master's degree qualifications. These results indicate that all the SCSS experts involved in this study have attained at least a bachelor's degree, implying that they can study the framework and respond accordingly. These findings are shown in Table 2.

<table>
<thead>
<tr>
<th>Level of Education</th>
<th>Frequency</th>
<th>Percent (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bachelors</td>
<td>11</td>
<td>84.6</td>
</tr>
<tr>
<td>Masters</td>
<td>2</td>
<td>15.4</td>
</tr>
</tbody>
</table>
• **Years of industrial experience**
This research sought to find the number of years the respondents have worked in the industry. It was observed that 2 of the respondents, amounting to 15.4%, had between 2 and 3 years of experience, while the rest of the respondents had 4 years of experience or more. This implies that the respondents in this study are highly experienced in the software engineering field and can be considered experts.

<table>
<thead>
<tr>
<th>Years of Industrial Experience</th>
<th>Frequency</th>
<th>Percent (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>2-3 Years</td>
<td>2</td>
<td>15.4</td>
</tr>
<tr>
<td>4-5 Years</td>
<td>6</td>
<td>46.2</td>
</tr>
<tr>
<td>6-7 Years</td>
<td>2</td>
<td>15.4</td>
</tr>
<tr>
<td>Above 7 Years</td>
<td>3</td>
<td>23.1</td>
</tr>
</tbody>
</table>
• **Level of knowledge in software engineering process**
An analysis of respondent’s level of knowledge was also conducted as indicated in Table 4. Findings indicate that 12 respondents representing 92.3% had high level of knowledge while 1 respondent representing 7.7% had a very high knowledge of software engineering processes. These findings imply that all participants can be trusted for analysis and opinions on the state of artefacts that are intended for use in the software engineering process.

<table>
<thead>
<tr>
<th>Level of Knowledge for Software Engineering Processes</th>
<th>Frequency</th>
<th>Percent (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>High</td>
<td>12</td>
<td>92.3</td>
</tr>
<tr>
<td>Very High</td>
<td>1</td>
<td>7.7</td>
</tr>
</tbody>
</table>
• **Level of knowledge for SCSS**
Since the proposed framework focuses only on the structural complexity of code developed using the SCSS language, all respondents are expected to be knowledgeable SCSS programmers. Findings indicate that 8 respondents (61.5%) had a high level of knowledge, 3 respondents (23.1%) had a moderate level of knowledge, and 2 respondents (15.4%) had a very high level of knowledge. This implies that the data collected from all the respondents can be deemed valid. The responses from respondents with a moderate level of knowledge are also acceptable because they can be regarded as having a considerable level of SCSS knowledge in addition to their software engineering knowledge, which is acceptable for this study. These findings are shown in Table 5.
<table>
<thead>
<tr>
<th>Level of knowledge for SCSS</th>
<th>Frequency</th>
<th>Percent (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Moderate</td>
<td>3</td>
<td>23.1</td>
</tr>
<tr>
<td>High</td>
<td>8</td>
<td>61.5</td>
</tr>
<tr>
<td>Very High</td>
<td>2</td>
<td>15.4</td>
</tr>
</tbody>
</table>
5.5.2 Relevance of the framework
The researchers sought to know if the developed framework is relevant for industry experts to identify the attributes that lead to SCSS complexity. Table 6 shows computed means from a Likert scale of 1 to 5 – Don't Agree, Slightly Agree, Agree, Strongly Agree and Very Strongly Agree. Findings show that the respondents agree that there is a great need for a classification framework, with a mean of 3.46, which falls between agree and strongly agree (i.e. between 3 and 4 on the Likert scale).
The respondents also agreed that the framework is useful for the process of identifying SCSS attributes, as indicated by the mean of 3.62. These findings are shown in Table 6. Standard deviation was interpreted as low if the value is less than or equal to 1, while values greater than 1 are high. A low value implies that the respondents did not differ much in their opinion, and high values indicate that respondents differed considerably in their opinion. The standard deviation values shown in Table 6 indicate that the respondents did not vary considerably.
<table>
<thead>
<tr>
<th colspan="3">Table 6. Relevance of the framework</th>
</tr>
<tr>
<th></th>
<th>Need for the Framework</th>
<th>Usefulness of the Framework</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mean</td>
<td>3.46</td>
<td>3.62</td>
</tr>
<tr>
<td>Standard Deviation</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
5.5.3 Comprehensiveness of the framework
On a Likert scale of 1 to 5 – Don't Agree, Slightly Agree, Agree, Strongly Agree and Very Strongly Agree – respondents were asked for their opinions on whether the proposed framework is comprehensive or not. Findings show that global variables and declarations contribute least to SCSS complexity, with means of 2.54 and 2.85 respectively. These values fall within the range of slightly agree and agree (i.e. between 2 and 3 on the Likert scale).
This implies that SCSS programmers agree to some extent that the two features cause complexity in SCSS and should not be overlooked. Findings also show that all other remaining features fall in the range of agree and strongly agree (i.e. between 3 and 4 on the Likert scale). These mean values imply that the respondents agree that the concerned features contribute to SCSS complexity. The standard deviation values are high, but this is a result of the small sample size. Sullivan [35] argued that the standard deviation of the means decreases as the sample size increases. Therefore, the high standard deviation can be explained and does not make the results unreliable. These results are shown in Table 7.
<table>
<thead>
<tr>
<th>SCSS features</th>
<th>Mean</th>
<th>Standard Deviation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Global Variables</td>
<td>2.54</td>
<td>1.127</td>
</tr>
<tr>
<td>Declaration</td>
<td>2.85</td>
<td>1.214</td>
</tr>
<tr>
<td>Operator</td>
<td>3.00</td>
<td>1.000</td>
</tr>
<tr>
<td>Control Directives</td>
<td>3.31</td>
<td>1.032</td>
</tr>
<tr>
<td>Function</td>
<td>3.54</td>
<td>1.050</td>
</tr>
<tr>
<td>Mixins</td>
<td>3.38</td>
<td>1.193</td>
</tr>
<tr>
<td>Extends</td>
<td>3.15</td>
<td>1.519</td>
</tr>
<tr>
<td>Nesting</td>
<td>3.46</td>
<td>1.561</td>
</tr>
</tbody>
</table>
Finally, respondents were asked whether they agree that the SCSS features identified in Table 7 wholly represent all the possible features that need to be considered when analyzing the complexity of code written in the SCSS language. Findings show that 12 respondents (92.3%) agree while 1 respondent (7.7%) disagrees. The findings, shown in Table 8, imply that the proposed framework is adequate as an indicator of the features that cause structural complexity in SCSS code.
<table>
<thead>
<tr>
<th>Adequate Features</th>
<th>Frequency</th>
<th>Percent (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Yes</td>
<td>12</td>
<td>92.3</td>
</tr>
<tr>
<td>No</td>
<td>1</td>
<td>7.7</td>
</tr>
</tbody>
</table>
6. Conclusion and Future Work
In this paper, a new SCSS structural complexity attribute classification framework is proposed. The framework was validated through an expert’s opinion survey. The experts agreed overwhelmingly that the framework is relevant, comprehensive and adequate, and therefore it
fully identifies the features and attributes that contribute to the structural complexity in SCSS code. This implies that the framework can be used to define structural complexity metrics for SCSS, which can then be used to show the level of complexity for SCSS code and subsequently inform the SCSS designers and programmers of the improvements that should be done on the code to improve its maintainability.
The limitation of the framework is that it is only applicable to SCSS software. Closely related CSS preprocessors cannot use the framework to identify their structural properties. However, the framework is the first to be developed for the Cascading Style Sheets domain and can therefore be used as a guide for the development of frameworks for similar software.
The proposed framework, herein referred to as the SCSS attribute classification framework, was successfully applied to define structural complexity metrics for SCSS [36]. However, future improvements are required to make it useful for regular CSS and for other CSS preprocessors such as Less and Stylus.
REFERENCES
APPENDIX
```scss
@mixin Raleway-SemiBold {
  font-family: 'Raleway-SemiBold';
}
@mixin Raleway-Medium {
  font-family: 'Raleway-Medium';
}
@mixin PlayfairDisplay-Regular {
  font-family: 'PlayfairDisplay-Regular';
}

$sColor1: #f4f4f4;
$sColor2: #000;

p {
  font-size: 5px + (6px * 2);
  font-color: $sColor1;
  @include PlayfairDisplay-Regular;
}
span {
  width: 60px;
  height: 45px;
  position: absolute;
  @include Raleway-Medium;
}
@for $i from 1 through 4 {
  .p#{$i} { padding-left: $i * 10px; }
}
@function remy($pxsize) {
  @return ($pxsize/16) + rem;
}
h1 {
  font-size: remy(32);
  font-color: $sColor2;
}
h2 {
  @extend p;
  font-color: $sColor2;
}
```
AUTHORS
John Gichuki Ndia is a Tutorial Fellow at the Department of Information Technology at Murang’a University of Technology, Kenya. He earned his Bachelor of Information Technology from Busoga University in 2009, and his MSc. in Data Communications from KCA-University in 2013. He is currently pursuing the PhD in Information Technology at Masinde Muliro University of Science and Technology. His research interests include Software quality, software metrics and network security. He is a member of the International Association of Engineers (IAENG) society of Software Engineering.
Geoffrey Muchiri Muketha is Associate Professor and Dean of the School of Computing and Information Technology, Murang’a University of Technology, Kenya. He received his BSc. in Information Science from Moi University in 1995, his MSc. in Computer Science from Periyar University in 2004, and his PhD in Software Engineering from Universiti Putra Malaysia in 2011. He has many years of experience in teaching and supervision of postgraduate students. His research interests include software and business process metrics, software quality, verification and validation, empirical methods in software engineering, and component-based software engineering. He is a member of the International Association of Engineers (IAENG).
Kelvin Kabeti Omieno is a Senior Lecturer and Dean, School of Computing and Information Technology, Kaimosi Friends University College, Kenya, a constituent college of Masinde Muliro University of Science and Technology. He holds a PhD in Business Information Systems from Jaramogi Oginga Odinga University of Science & Technology, and an MSc in Information Technology and a Bachelor of Science in Computer Science from Masinde Muliro University of Science and Technology. He has been involved in several research projects on ICTs for development, data analytics, computational grids, machine learning, health informatics, e-learning systems and e-waste management in Kenya. He has also published widely in journals and conference proceedings in information technology and ICTs for development. He is a professional member of the Association for Computing Machinery (ACM), the largest association of computing professionals globally, and is a reviewer for two international journals.
---
An Efficient Trie Hashing Method Using
a Compact Binary Trie
Masami Shishibori, Makoto Okada, Toru Sumitomo and Jun-ichi Aoe
Department of Information Science & Intelligent Systems
Faculty of Engineering
Tokushima University
2-1 Minami-Josanjima-Cho
Tokushima-Shi 770
Japan
e-mail: {bori, aoe}@is.tokushima-u.ac.jp
Abstract. In many applications, information retrieval is a very important research field. Among key retrieval strategies, the binary trie is well known as a fast access method that can retrieve keys in order. However, when the binary trie structure is implemented, the greater the number of registered keys, the larger the storage that is required; as a result, the binary trie cannot be stored in main memory. To solve this problem, a method to change the binary trie into a compact bit stream has been proposed; however, searching for and updating a key takes a lot of time for large key sets. This paper proposes a method to improve the time efficiency of each process by introducing a new hierarchical structure. The theoretical and experimental results show that this method provides faster access than the traditional method.
Key words: information retrieval, trie hashing, binary trie, data structures, pre-order bit stream
1 Introduction
In many natural language processing and information retrieval systems, it is necessary to adopt a fast digital search, or trie search, that examines the input character by character. Among digital search methods, the trie method [1], [2], [3], [4] is known as one of the fastest access methods, and trie searching is frequently used for hash tables in trie hashing [5], for indices in information retrieval systems, and for dictionaries in natural language processing systems. Whereas hash and B-tree strategies are based on comparisons between keys, a trie structure can make use of the representation of keys as a sequence of digits or alphabetic characters. A trie can find all keys that are prefixes of an input string in a single scan, since the trie advances the retrieval character by character over the characters that make up the keys. For this reason, the trie is called the Digital Search-tree (DS-tree). In particular, a DS-tree whose nodes have only two arcs, labelled 0 and 1, is called a Binary Digital Search-tree (BDS-tree) [5], [6].
When the binary trie, that is, the BDS-tree, is implemented as the index of an information retrieval application and the key set to be stored is large, the trie is too big to store in main memory. Therefore, it is very important to compress the binary trie into a compact data structure. Jonge et al. [5] proposed a method to compress the binary trie into a compact bit stream, called the pre-order bit stream, by traversing the trie in pre-order. However, the bigger the binary trie, the longer the pre-order bit stream; as a result, the time cost to retrieve keys located toward the end of the bit stream is high.
This paper proposes a new method that avoids this increase in time cost even when the dynamic key sets become very large. The data structures compressed by this method have two distinctive features: (1) they store no pointers and require one bit per node in the worst case, and (2) they are divided into small binary tries, and these small tries are connected by pointers.
2 A Compact Data Structure for Binary Tries
In the BDS-tree, the binary sequence obtained by translating the characters into their binary codes is used as the value of the key; namely, the left arc is labeled with the value '0' and the right arc with the value '1'. If each leaf in the BDS-tree pointed to the record of only one key, the depth of the BDS-tree would become very deep. So, each leaf holds the address of a bucket in which the keys corresponding to the path are stored. We will use $B_{SIZE}$ to denote the number of keys and their records that can be stored in one bucket. For example, let us suppose that the following key set $K$ is inserted into the BDS-tree.
$$K = \{\text{air, art, bag, bus, tea, try, zoo}\}$$
If the binary sequence obtained by translating the internal code of each character (where the internal codes of a, b, c, ..., z are 0, 1, 2, ..., 25, respectively) into a binary number of 5 bits is used, the corresponding bit strings to be registered are shown below.
\[
\begin{align*}
\text{air} & \rightarrow 0/8/17 \rightarrow 00000 01000 10001 \\
\text{art} & \rightarrow 0/17/19 \rightarrow 00000 10001 10011 \\
\text{bag} & \rightarrow 1/0/6 \rightarrow 00001 00000 00110 \\
\text{bus} & \rightarrow 1/20/18 \rightarrow 00001 10100 10010 \\
\text{tea} & \rightarrow 19/4/0 \rightarrow 10011 00100 00000 \\
\text{try} & \rightarrow 19/17/24 \rightarrow 10011 10001 11000 \\
\text{zoo} & \rightarrow 25/14/14 \rightarrow 11001 01110 01110
\end{align*}
\]
If $B_{SIZE}$ is 2, the corresponding BDS-tree for the key set $K$ is shown in Figure 1. In order to compress the BDS-tree, we introduce a particular kind of leaf that does not hold any bucket address. This leaf will be called a dummy leaf. Using the dummy leaf, the following advantages are obtained. First, it preserves the property of binary trees that the number of leaves is one more than the number of internal nodes. This property underlies the search algorithm using the compact data structure. Next, if the search terminates in a dummy leaf, the search key is regarded as a key that does not belong to the BDS-tree, and no disk access at all is needed.
When the BDS-tree is implemented, the larger the number of the registered keys, the greater the number of the nodes in the tree is, and more storage space is required. So, Jonge et al. [5] proposed the method to compress the BDS-tree into a very compact bit stream. This bit stream is called pre-order bit stream. The pre-order bit stream consists of 3 elements: treemap, leafmap and B_TBL. The treemap represents the state of the tree and can be obtained by a pre-order tree traversal, emitting a ‘0’ for every internal node visited and a ‘1’ for every bucket visited. The leafmap represents the state (dummy or not) of each leaf and by traversing in pre-order the corresponding bit is set to a ‘0’ if the leaf is dummy, otherwise the bit is set to a ‘1’. The B_TBL stores the addresses of each bucket. Figure 2 shows the pre-order bit stream corresponding to the BDS-tree of Figure 1. Then, in order to understand the relation between the BDS-tree and the pre-order bit stream easily, we indicate above the treemap the corresponding internal node and leaf number (in the case of the dummy leaf, the symbol is a “d”) within the round “( )” and square “[ ]” brackets, respectively.
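As an illustration of this encoding, the following C++ sketch emits the three components by a pre-order traversal of an in-memory BDS-tree. The `Node` structure and the function name are assumptions made for this sketch only; they are not the paper's implementation.

```cpp
#include <string>
#include <vector>

// Hypothetical in-memory BDS-tree node, used only for this sketch.
struct Node {
    bool  isLeaf  = false;
    bool  isDummy = false;    // dummy leaf: holds no bucket address
    int   bucket  = -1;       // bucket address of a non-dummy leaf
    Node* left    = nullptr;  // arc labelled '0'
    Node* right   = nullptr;  // arc labelled '1'
};

// Pre-order traversal emitting the pre-order bit stream:
//   treemap: '0' for every internal node, '1' for every leaf
//   leafmap: '1' for a non-dummy leaf, '0' for a dummy leaf
//   b_tbl  : bucket addresses of the non-dummy leaves, in pre-order
void emitPreorder(const Node* n, std::string& treemap,
                  std::string& leafmap, std::vector<int>& b_tbl) {
    if (n == nullptr) return;
    if (n->isLeaf) {
        treemap += '1';
        leafmap += n->isDummy ? '0' : '1';
        if (!n->isDummy) b_tbl.push_back(n->bucket);
        return;
    }
    treemap += '0';
    emitPreorder(n->left,  treemap, leafmap, b_tbl);
    emitPreorder(n->right, treemap, leafmap, b_tbl);
}
```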
The search using the pre-order bit stream proceeds bit by bit from the first bit of treemap, so that the search is traversed the BDS-tree in pre-order. The search algorithm using the pre-order bit stream is presented below, where it uses the following variables and functions:
$s_key$: The bit string of the key to be searched.
$keypos$: A pointer to the current position in $s_key$.
$treepos$: A pointer to the current position in treemap.
$leafpos$: A pointer to the current position in leafmap.
$bucketnum$: The corresponding bucket number.
$\text{SKIP\_COUNT}(\cdot)$: Skips the left partial tree, and returns the number of the leaf within the partial tree.
$\text{FIND\_BUCKET}(\cdot)$: Returns the corresponding bucket number of $s_key$.
[An Algorithm to search in the BDS-tree]
Input: $s_key$;
Output: If $s_key$ can be found, then the output is TRUE, otherwise FALSE;
**Step(S-1):** {Initialization}
$keypos \leftarrow 1;\; treepos \leftarrow 1;\; leafpos \leftarrow 1;$
**Step(S-2):** {Skipping the left subtree}
If the bit of $s\_key$ pointed to by $keypos$ is a '1',
then $leafpos \leftarrow leafpos + \text{SKIP\_COUNT}()$;
**Step(S-3):** {Advance to the right subtree}
$keypos \leftarrow keypos + 1; \text{treepos} \leftarrow \text{treepos} + 1;$
**Step(S-4):** {Loop invariant until reaching the leaf}
If the bit of $\text{treemap}$ pointed to by $\text{treepos}$ is a '0', return to Step(S-2);
**Step(S-5):** {Verification of $leafmap$}
If the bit of $leafmap$ pointed to by $leafpos$ is a '0', FALSE is returned;
**Step(S-6):** {Verification of $B\_TBL$}
$\text{bucketnum} \leftarrow \text{FIND\_BUCKET}()$;
If the bucket indicated by $\text{bucketnum}$ contains the key, return TRUE, otherwise return FALSE;
Regarding the above algorithm, since a left subtree in $\text{treemap}$ is represented following the 0 bit of its parent node, when advancing to the left subtree, the Step(S-2) is not executed, however when advancing to the right subtree, the Step(S-2) to skip the left subtree is added. This skipping process utilizes the binary tree's property that the number of leaves is one more than the number of internal nodes in any binary subtree. Using this property, the function $\text{SKIP\_COUNT}()$ can search for the end position of the left subtree and get the number of leaves in the left subtree. Namely, this function advances $\text{treepos}$ until the number of 1 bits is one more than the number of 0 bits, and returns the number of 1 bits (leaves). Moreover, the value obtained by counting the number of 1 bits in $\text{leafmap}$ from the first bit to the one pointed to by $leafpos$ indicates which slot in $B\_TBL$ contains the required bucket address.
For example, to retrieve $\text{key} = \text{"zoo"}$ ($s\_key$ = "11001 01110 01110") in Figure 2, the following steps are performed:
---
**Figure 2:** An example of the pre-order bit stream.
Step(S-1): keypos = treepos = leafpos = 1;
Step(S-2): Since the first bit of s_key is a '1', the subtree whose root is node 2 is skipped by SKIP_COUNT(): leafpos = leafpos + SKIP_COUNT() = 6;
Step(S-3): keypos = 2; treepos = 11;
Step(S-4): Since the 11th bit of treemap is a '0', return to Step(S-2);
Step(S-2)–(S-4): Since the 2nd bit of s_key is a '1', the subtree whose root is node 6 is skipped: leafpos = leafpos + SKIP_COUNT() = 7; treepos = 13;
Step(S-5): Since the 7th bit of leafmap is a '1', B_TBL is verified;
Step(S-6): Since the key "zoo" is stored in bucket 4, TRUE is returned;
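A compact C++ rendering of Steps (S-1) through (S-6) might look as follows. It uses 0-based indices instead of the paper's 1-based positions, and it reduces the B_TBL lookup and bucket verification to returning a slot index; it is an illustrative sketch, not the authors' code.

```cpp
#include <cstddef>
#include <string>

// SKIP_COUNT(): skip the left subtree that starts just after `treepos`,
// leaving `treepos` on its last bit, and return the number of leaves in it.
// Uses the property that in any binary subtree #leaves = #internal nodes + 1.
static int skipCount(const std::string& treemap, std::size_t& treepos) {
    int zeros = 0, ones = 0;
    do {
        ++treepos;
        if (treemap[treepos] == '0') ++zeros; else ++ones;
    } while (ones != zeros + 1);
    return ones;
}

// Returns the 0-based slot of B_TBL holding the candidate bucket address for
// the key bit string s_key, or -1 if the search ends in a dummy leaf.
int searchBitStream(const std::string& s_key,
                    const std::string& treemap,
                    const std::string& leafmap) {
    std::size_t keypos = 0, treepos = 0, leafpos = 0;    // Step(S-1)
    while (treemap[treepos] == '0') {                    // Step(S-4): internal node
        if (s_key[keypos] == '1')                        // Step(S-2): skip left subtree
            leafpos += skipCount(treemap, treepos);
        ++keypos;                                        // Step(S-3)
        ++treepos;
    }
    if (leafmap[leafpos] == '0') return -1;              // Step(S-5): dummy leaf
    int slot = 0;                                        // Step(S-6): FIND_BUCKET()
    for (std::size_t i = 0; i < leafpos; ++i)
        if (leafmap[i] == '1') ++slot;
    return slot;                                         // index into B_TBL
}
```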
3 Improvement by Using Hierarchical Structures
The BDS-tree represented by the pre-order bit stream is a very compact binary trie; however, the more keys that are stored in the tree, the longer the bit strings (treemap and leafmap) become. As a result, the time cost of each process is high. For example, for retrieval the worst case is when the search proceeds toward the rightmost leaf in the BDS-tree, as shown in Figure 3. In this case, if the rightmost leaf holds the bucket address of the search key, all bits in the treemap (and likewise the leafmap) of the pre-order bit stream must be scanned. Similarly, when an arbitrary key is inserted into the bucket corresponding to the leftmost leaf and the bucket must be divided or merged, all bits after the bit corresponding to the leftmost leaf in the treemap of the pre-order bit stream have to be shifted. In this paper, a method to solve this problem is proposed.
This method separates the BDS-tree into smaller BDS-trees of a certain depth. This depth is called the separation depth, and these small trees are called separated trees. These separated trees are numbered and connected by pointers. The BDS-tree separated in this way is called a Hierarchical Binary Digital Search tree (HBDS-tree). The HBDS-tree obtained from the BDS-tree of Figure 4(a), with a separation depth of 2, is shown in Figure 4(b). When the rightmost leaf is searched using the BDS-tree of Figure 4(a), all internal nodes and leaves must be scanned in a pre-order traversal. On the other hand, in the case of the HBDS-tree of Figure 4(b), the rightmost leaf can be found by scanning only the nodes and leaves of separated tree 1.
The algorithm to retrieve a key in the HBDS-tree uses the pre-order bit stream.
The binary sequence $H(k)$ of the key is divided into the following sub-sequences:
$$ H(k) = H_1(k)\,H_2(k)\,\cdots\,H_s(k) $$
Supposing that the separation depth is denoted by $L$, the lengths of $H_1(k), H_2(k), \ldots, H_{s-1}(k)$ are $L$ bits and the length of $H_s(k)$ is less than or equal to $L$ bits. The HBDS-tree can be compressed into the very compact pre-order bit stream in the same way as the BDS-tree. The pre-order bit stream is created and controlled for each of the separated trees.

Figure 3: Retrieval of the BDS-tree in the worst case.

The pre-order bit stream that corresponds to the $i$-th separated tree in the HBDS-tree consists of treemap$_i$, leafmap$_i$, and B_TBL$_i$, but a leaf that becomes the pointer to the next separated tree is regarded as a special leaf, and B_TBL$_i$ contains the number of the next separated tree, preceded by a minus sign, in the slot corresponding to that leaf. The HBDS-tree obtained from the BDS-tree of Figure 1, with a separation depth of 2, is shown in Figure 5, and the pre-order bit stream for the HBDS-tree of Figure 5 is shown in Figure 6, where, as can be seen above the treemap, the leaves which become pointers to separated trees are marked by "(,)". By using this improved method, each process can be sped up, because unnecessary scanning of the pre-order bit stream of each separated tree can be omitted.
The algorithm for retrieval of the HBDS-tree represented by the pre-order bit stream is shown below, where it uses the following variables:
$i$: The current separated tree number.
$s\_key$: The key to be searched.
$keypos$: A pointer to the current position in $s\_key$.
$treepos$: A pointer to the current position in treemap.
$leafpos$: A pointer to the current position in leafmap.
$bucketnum$: The corresponding bucket number.
Moreover, each function performs the same process as the corresponding function explained in Section 2, but on the $i$-th separated tree; $i$ is initialized to 1.
[An Algorithm to search in the HBDS-tree]
Input: $s\_key$;
Output: If $s\_key$ can be found, then the output is TRUE, otherwise FALSE;
Step(S'-1)–Step(S'-5): The same procedures as Step(S-1)–Step(S-5) are performed, except that treemap and leafmap are replaced by treemap$_i$ and leafmap$_i$;
Step(S'-6): {Verification of bucketnum}
Figure 4: Improvement of the BDS-tree by using hierarchical structures.
bucketnum ← FIND_BUCKET(i);
If bucketnum ≤ 0, proceed to Step(S'-7), otherwise proceed to Step(S'-8):
Step(S'-7): {Obtaining the separated tree number}
i = −1 × bucketnum; Return to Step(S'-1);
Step(S'-8): {Verification of B_TBL}
If the bucket indicated by bucketnum contains the key, return TRUE, otherwise return FALSE;
For example, in the case of retrieving the key "zoo" (s_key = "11001 01110 01110") in the pre-order bit stream of the HBDS-tree shown in Figure 6, s_key can be retrieved by using the pre-order bit stream of separated tree 1 only, so that the time cost of retrieval is lower than when the BDS-tree's bit stream is used.
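A hedged C++ sketch of the hierarchical lookup is shown below. It assumes that each separated tree keeps its own treemap, leafmap and B_TBL, that separated trees are numbered from 1, and that negative B_TBL entries encode pointers to the next separated tree, as described above; the paper's actual implementation may differ in its details.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// One separated tree of the HBDS-tree, stored as its own pre-order bit stream.
// Negative entries in b_tbl are pointers: -k means "continue in separated tree k".
struct SeparatedTree {
    std::string treemap;
    std::string leafmap;
    std::vector<int> b_tbl;
};

// Number of leaves in the left subtree that starts right after `treepos`;
// leaves `treepos` on the last bit of that subtree (same idea as SKIP_COUNT()).
static int skipLeft(const std::string& treemap, std::size_t& treepos) {
    int zeros = 0, ones = 0;
    do {
        ++treepos;
        if (treemap[treepos] == '0') ++zeros; else ++ones;
    } while (ones != zeros + 1);
    return ones;
}

// Returns the (non-negative) bucket number that may contain s_key, or -1 if
// the search reaches a dummy leaf.  s_key is the full bit string of the key.
int searchHBDS(const std::string& s_key, const std::vector<SeparatedTree>& trees) {
    std::size_t keypos = 0;
    int tree = 1;                                        // Step(S'-1): start in separated tree 1
    for (;;) {
        const SeparatedTree& t = trees[tree - 1];
        std::size_t treepos = 0, leafpos = 0;
        while (t.treemap[treepos] == '0') {              // walk this separated tree only
            if (s_key[keypos] == '1')
                leafpos += skipLeft(t.treemap, treepos); // skip the left subtree
            ++keypos;
            ++treepos;
        }
        if (t.leafmap[leafpos] == '0') return -1;        // Step(S'-5): dummy leaf
        int slot = 0;                                    // FIND_BUCKET(i)
        for (std::size_t i = 0; i < leafpos; ++i)
            if (t.leafmap[i] == '1') ++slot;
        int entry = t.b_tbl[slot];                       // Step(S'-6)
        if (entry >= 0) return entry;                    // a real bucket number
        tree = -entry;                                   // Step(S'-7): follow the pointer
    }
}
```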
4 An Insertion Algorithm
The method for inserting a new key into the HBDS-tree is divided into the following three cases, as in the BDS-tree.
1) the required bucket is partially filled.
2) the required bucket is a dummy bucket.
3) the required bucket is full.
In this section, the third case, when the required bucket is full, is explained; that is, the method for dividing the full bucket into two new buckets. An explanation of the other cases is omitted, because they are very simple.
When the required bucket overflows, in the BDS-tree the following processes are repeated until no bucket overflows. First, the leaf corresponding to the full bucket is changed into a tree which consists of one node and two dummy leaves. This tree is called a unit tree. Next, all the keys in the full bucket, together with the insertion key, are distributed between the two buckets corresponding to the dummy leaves of the unit tree. In the HBDS-tree, when the unit tree is made, a new separated tree must be created every time the depth of a separated tree exceeds the separation depth. As for the insertion process which uses the pre-order bit stream, a bit string "011", which represents the unit tree in the treemap, and a bit string "00", which represents the two dummy leaves of the unit tree in the leafmap, are inserted into the treemap and leafmap, respectively.
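For the bit-stream part of a split only, a minimal C++ sketch (assuming the streams are held as std::string, and ignoring bucket redistribution and B_TBL maintenance) could be:

```cpp
#include <cstddef>
#include <string>

// Bit-stream update for one bucket split (case 3).  leafTreePos is the index
// of the overflowing leaf's '1' bit in the treemap; leafPos is the index of
// its bit in the leafmap.  Key redistribution and B_TBL updates are omitted.
void splitLeafInBitStream(std::string& treemap, std::string& leafmap,
                          std::size_t leafTreePos, std::size_t leafPos) {
    treemap.replace(leafTreePos, 1, "011");  // leaf becomes a unit tree: node + two leaves
    leafmap.replace(leafPos, 1, "00");       // the two new leaves start out as dummies
}
```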
5 Evaluation
5.1 Theoretical Evaluation
In this section, the worst-case time complexities of each algorithm for the BDS-tree and HBDS-tree are theoretically analyzed. And the space complexities of each pre-order bit stream for the BDS-tree and HBDS-tree also are calculated. Let the tree structure to be analyzed be the complete tree. The following parameters are used:
- \( n \): The depth of the complete tree;
- \( m \): The separation depth;
- \( \alpha \): The number of layers in the HBDS-tree. It is obtained as \( \lceil n/m \rceil \), the minimum integer greater than or equal to \( n/m \);
As for the time complexity, the worst-case time complexity for retrieval for the BDS-tree is \(O(2^n)\), because the whole of the complete tree must be scanned. However, for the HBDS-tree it is \(O(\alpha 2^m)\), since only \(\alpha\) separate trees are scanned. Regarding the insertion and deletion, the worst case is when each process is done toward the leftmost bucket in the tree. In this case, suppose the bucket is divided and merged, the BDS-tree has a time complexity \(O(2^n - n)\), because all bits after the bit corresponding to the bucket in the pre-order bit stream have to be shifted, however for the HBDS-tree it is \(O(2^m - m)\), because the same operations are performed toward only one separated tree. Generally, for \(n \ll 2^n\) and \(m \ll 2^m\), the worst-case time complexity for insertion and deletion in the BDS-tree is \(O(2^n)\) and for the HBDS-tree it is \(O(2^m)\).
As for the space complexity, for the BDS-tree the number of bits used for the treemap is equal to the total number of nodes (internal nodes and leaves) of the complete tree, that is, \(2^{n+1} - 1\). The leafmap needs \(2^n\) bits, which is the number of leaves in the complete tree. The sizes of the treemap and leafmap
for the HBDS-tree are calculated as shown below:
Number of bits required for treemap
\[
= (\text{number of all nodes of a separated tree}) \times (\text{number of separated trees within the complete tree})
\]
\[
= \left( \sum_{k=0}^{m} 2^k \right) \left( \sum_{k=1}^{\alpha} 2^{m(k-1)} \right) = \left( 2^{m+1} - 1 \right) \frac{2^{m\alpha} - 1}{2^m - 1}
\]
\[
= \left\{ 2\left( 2^m - 1 \right) + 1 \right\} \frac{2^{m\alpha} - 1}{2^m - 1} = \left( 2^{m\alpha+1} - 2 \right) + \frac{2^{m\alpha} - 1}{2^m - 1}
\]
\[
= \left( 2^{n+1} - 1 \right) + \frac{2^{n} - 1}{2^m - 1} - 1 \qquad (\text{taking } m\alpha = n)
\]
Number of bits required for leafmap
\[
= (\text{number of leaves of a separated tree}) \times (\text{number of separated trees within the complete tree})
\]
\[
= 2^m \sum_{k=1}^{\alpha} 2^{m(k-1)} = 2^m \, \frac{2^{m\alpha} - 1}{2^m - 1}
\]
\[
= \left( 2^m - 1 + 1 \right) \frac{2^{m\alpha} - 1}{2^m - 1} = \left( 2^{m\alpha} - 1 \right) + \frac{2^{m\alpha} - 1}{2^m - 1}
\]
\[
= 2^{n} + \frac{2^{n} - 1}{2^m - 1} - 1
\]
From the above results, if the BDS-tree is separated, the storage requirement for both the treemap and the leafmap increases by only \((2^n - 1)/(2^m - 1) - 1\) bits.
### 5.2 Experimental Evaluation
This method was written in about 2,000 lines of code in C, and implemented on a Sun Microsystems Sparc Station 2 (28 MIPS).
<table>
<thead>
<tr>
<th>Key sets</th>
<th>Japanese nouns</th>
<th>English words</th>
</tr>
</thead>
<tbody>
<tr>
<td>Kinds of trees</td>
<td>BDS-tree</td>
<td>HBDS-tree</td>
</tr>
<tr>
<td>Number of</td>
<td></td>
<td></td>
</tr>
<tr>
<td>non-dummy leaves</td>
<td>6,002</td>
<td>6,159</td>
</tr>
<tr>
<td>dummy leaves</td>
<td>3,649</td>
<td>8,411</td>
</tr>
<tr>
<td>Internal nodes</td>
<td>9,650</td>
<td>14,569</td>
</tr>
<tr>
<td>depth</td>
<td>82</td>
<td>70</td>
</tr>
<tr>
<td>separated tree</td>
<td>2,060</td>
<td>2,940</td>
</tr>
<tr>
<td>Time (Second)</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Registration</td>
<td>870</td>
<td>146</td>
</tr>
<tr>
<td>Retrieval</td>
<td>8.68</td>
<td>0.48</td>
</tr>
<tr>
<td>Insertion</td>
<td>38.00</td>
<td>3.00</td>
</tr>
<tr>
<td>Storage (K-byte)</td>
<td></td>
<td></td>
</tr>
<tr>
<td>treemap</td>
<td>2.41</td>
<td>2.67</td>
</tr>
<tr>
<td>leafmap</td>
<td>1.21</td>
<td>1.46</td>
</tr>
<tr>
<td>B_TBL</td>
<td>12.00</td>
<td>16.12</td>
</tr>
</tbody>
</table>
Table 1: Experimental results.
In order to observe the effect of this method, we compare the time cost of each process and the storage requirement for the BDS-tree and the HBDS-tree. 50,000 Japanese nouns and 50,000 English words, with average lengths of 6 and 9 bytes respectively, are used as the key sets. Table 1 shows the experimental results for each of the key sets, where the separation depth is 5 and $B_{SIZE}$ is 16. Retrieval time is the average time required per key when all registered keys are searched. Insertion time is the average time required per key when 1,000 unregistered keys are added to the key set. Storage in Table 1 shows the memory required for the registration of each key set.
From the experimental results, retrieval in the HBDS-tree is 13.20 times faster than in the BDS-tree, and insertion is 11.13 times faster. Thus, it can be concluded that the time required by each process is significantly lower when this method is used. As for the storage space required by the HBDS-tree, the sizes of treemap, leafmap and B_TBL are 1.11, 1.21 and 1.34 times the sizes of the ones used by the BDS-tree. However, the pre-order bit stream is by nature very compact, so these sizes are good enough for practical applications. Moreover, for the BDS-tree and the HBDS-tree, both represented by the pre-order bit stream, the storage requirement to register one key is 2.50 and 3.24 bits, respectively. Thus, these methods can operate with more compact storage than the B-tree, B$^+$-tree, etc.
6 Conclusions
The binary trie represented by the pre-order bit stream can retrieve keys in order; however, the time cost of each process becomes high for large key sets. A method for solving this problem by separating the tree structure has been presented in this paper. The time and space efficiency of the proposed method has been discussed theoretically, and the validity of the method has been supported by empirical observations. As future work, an efficient method to improve the space efficiency of the buckets should be designed.
References
---
Topic 10
Basic Classes
Department of Engineering Physics
University of Gaziantep
Course web page
www.gantep.edu.tr/~bingul/ep241
Sep 2013
Introduction
In this lecture we will learn basic classes in C++. C and C++ allow you to define your own data types. These *user-defined* data types are created using the `struct` or the `class` keywords.
In C++, a class is like an array: *it is a derived type*. But unlike an array, the elements of a class may have different types. Furthermore, some elements of a class may be functions and operators.
Structures in C/C++
- A data structure (or derived data type) is a set of data elements grouped together under one name.
- These data elements, known as members, can have different types and different lengths.
```
struct name {
type1 member_name1;
type2 member_name2;
...
} object_names;
```
- In an example like the one below, **Student** is a new valid type name like the fundamental ones **int** or **double**, and **s1** and **s2** are objects (or variables) of this new type.
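For example, a Student structure with two members might be declared as follows (the member names here are illustrative assumptions):

```cpp
// A user-defined type whose members have different types
struct Student {
  int    number;   // e.g. a student id
  double gpa;      // e.g. a grade point average
} s1, s2;          // s1 and s2 are objects of type Student
```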
```cpp
// A basic use of the structure
#include <iostream>
#include <iomanip>
using namespace std;

struct Fruit{
  double weight;
  double price;
};

int main(){
  Fruit orange, apricot;

  orange.price  = 2.50;  // TL/kg
  apricot.price = 3.25;  // TL/kg

  cout << "Input the amount of orange in kg: ";
  cin >> orange.weight;
  cout << "Input the amount of apricot in kg: ";
  cin >> apricot.weight;

  cout << "\nTotal prices (TL):\n";
  cout << setprecision(2) << fixed;
  cout << "Orange = " << orange.price * orange.weight << endl;
  cout << "Apricot = " << apricot.price * apricot.weight << endl;
}
```
Basic Classes
- A **class** is an expanded concept of a data structure in C. Instead of holding only data, a class can hold both data and functions.
- An **object** is an instantiation of a class. In terms of variables, a class would be the *type*, and an object would be the *variable*.
- Classes are declared using the **class** keyword.
```cpp
class class_name {
access_specifier_1:
member1;
access_specifier_2:
member2;
...
} object_names;
```
An access specifier is one of the followings:
- **private**
members of a class are accessible only from within other members of the same class
- **public**
members are accessible from anywhere where the object is visible
- **protected**
members are accessible from members of their same class but also from members of their derived classes
By default, all members of a class declared with the `class` keyword have **private** access for all its members.
The following class can be used to represent a planet whose mass is \( M \) and radius is \( R \).
```cpp
// Example Class
class Planet{
  public:
    void   SetMassRadius(double, double);
    double Density();
    double Gravity();
  private:
    double M, R, G;
};
```
- declares a class (i.e. a type) called Planet
- The functions:
- SetMassRadius()
- Density()
- Gravity()
- Members **M**, **R** and **G** have (default) **private** access and the member functions have **public** access.
### Planets and Pluto: Physical Characteristics
This table contains selected physical characteristics of the planets and Pluto.
<table>
<thead>
<tr>
<th>Planet</th>
<th>Equatorial Radius (km)</th>
<th>Mean Radius (km)</th>
<th>Mass (x 10^{24} kg)</th>
<th>Bulk Density (g cm^{-3})</th>
<th>Sidereal Rotation Period (d)</th>
<th>Sidereal Orbit Period (y)</th>
<th>V(1,0) (mag)</th>
<th>Geometric Albedo</th>
<th>Equatorial Gravity (m s^{-2})</th>
<th>Escape Velocity (km s^{-1})</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mercury</td>
<td>2439.7 ±1.0</td>
<td>2439.7 ±1.0</td>
<td>0.330104 ±0.000036</td>
<td>5.427 ±0.007</td>
<td>58.6462 [D]</td>
<td>0.2408467 [B]</td>
<td>-0.60 ±0.10</td>
<td>0.106 [B]</td>
<td>3.70 [M]</td>
<td>4.25 [M]</td>
</tr>
<tr>
<td>Venus</td>
<td>6051.8 ±1.0</td>
<td>6051.8 ±1.0</td>
<td>4.86732 ±0.00049</td>
<td>5.243 ±0.003</td>
<td>-243.018 [D]</td>
<td>0.61519726 [B]</td>
<td>-4.47 ±0.07</td>
<td>0.65 [B]</td>
<td>8.87 [M]</td>
<td>10.36 [M]</td>
</tr>
<tr>
<td>Earth</td>
<td>6378.14 ±0.01</td>
<td>6371.00 ±0.01</td>
<td>5.97219 ±0.0060</td>
<td>5.5134 ±0.0006</td>
<td>0.99726968 [B]</td>
<td>1.0000174 [B]</td>
<td>-3.86 [B]</td>
<td>0.367 [B]</td>
<td>9.80 [M]</td>
<td>11.19 [M]</td>
</tr>
<tr>
<td>Mars</td>
<td>3396.19 ±1.1</td>
<td>3389.50 ±2</td>
<td>0.641693 ±0.00064</td>
<td>3.9340 ±0.0008</td>
<td>1.02595676 [D]</td>
<td>1.8808476 [B]</td>
<td>-1.52 [B]</td>
<td>0.150 [B]</td>
<td>3.71 [M]</td>
<td>5.03 [M]</td>
</tr>
<tr>
<td>Saturn</td>
<td>60268 ±4</td>
<td>58232 ±6</td>
<td>568.319 ±0.057</td>
<td>0.6871 ±0.0002</td>
<td>0.44401 [D]</td>
<td>29.447498 [B]</td>
<td>-8.88 [B]</td>
<td>0.47 [B]</td>
<td>10.44 [M]</td>
<td>36.09 [M]</td>
</tr>
<tr>
<td>Uranus</td>
<td>25559 ±4</td>
<td>25362 ±7</td>
<td>86.8103 ±0.0087</td>
<td>1.270 ±0.001</td>
<td>-0.71833 [D]</td>
<td>84.016846 [B]</td>
<td>-7.19 [B]</td>
<td>0.51 [B]</td>
<td>8.87 [M]</td>
<td>21.38 [M]</td>
</tr>
<tr>
<td>Neptune</td>
<td>24764 ±15</td>
<td>24622 ±19</td>
<td>102.410 ±0.010</td>
<td>1.638 ±0.004</td>
<td>0.67125 [D]</td>
<td>164.79132 [B]</td>
<td>-6.87 [B]</td>
<td>0.41 [B]</td>
<td>11.15 [M]</td>
<td>23.56 [M]</td>
</tr>
<tr>
<td>Pluto</td>
<td>1151 ±6</td>
<td>1151 ±6</td>
<td>0.01309 ±0.00018</td>
<td>2.05 ±0.04</td>
<td>-6.3872 [D]</td>
<td>247.92065 [B]</td>
<td>-1.0 [B]</td>
<td>0.3 [B]</td>
<td>0.66 [M]</td>
<td>1.23 [M]</td>
</tr>
</tbody>
</table>
Implementation of the Planet Class
- Consider a planet of mass $M$ and equatorial radius $R$. The mean mass density $d$ and equatorial gravity $g$ of the planet are given respectively by
$$
g = \frac{GM}{R^2}
$$
$$
d = \frac{M}{4\pi R^3/3}
$$
- where $G$ is the universal gravitational constant and has the value $6.67428 \times 10^{-11}$ m$^3$ kg$^{-1}$ s$^{-2}$.
```cpp
// A basic use of classes
#include <iostream>
#include <cmath>
using namespace std;

class Planet{
  public:
    void   SetMassRadius(double, double);
    double Density();
    double Gravity();
  private:
    double M, R, G;
};

int main(){
  Planet Mars;
  Mars.SetMassRadius(6.4e23, 3.4e6);
  cout << "Density = " << Mars.Density() << endl;
  cout << "Gravity = " << Mars.Gravity() << endl;
}
```
```cpp
// continue ...

// Set the mass (kg) and equatorial radius (m) of the planet
void Planet::SetMassRadius(double mass, double radius){
  M = mass;
  R = radius;
  G = 6.67428e-11;
}

// Mass density in g/cm3
double Planet::Density(){
  double d = M/(4.0*M_PI*R*R*R/3);
  return d * 1.0e-3;
}

// Surface gravity in m/s2
double Planet::Gravity(){
  double g = G*M/(R*R);
  return g;
}
```
```
Density = 3.88736
Gravity = 3.6951
```
Here **Mars** is declared to be an object of the **Planet** class. Consequently, **Mars** has its own internal data members **M**, **R**, and **G** and can also call the member functions.
The mass and radius of **Mars** are supplied via the **SetMassRadius()** method.
Its density and surface gravity are evaluated and output.
Notice one must use the specifier **Planet::** before each member function to indicate that these functions are the members of the **Planet** class.
The output shows that the density of Mars is about 3.9 g/cm³ and its surface gravity is 3.7 m/s².
- public members are accessible from outside the class but private members are not.
- Therefore, the following accesses are forbidden:
```cpp
cout << Mars.M << endl; // forbidden
cout << Mars.R << endl; // forbidden
```
```cpp
// Self contained implementation in a class
#include <iostream>
#include <cmath>
using namespace std;

class Planet{
  public:
    void SetMassRadius(double mass, double radius){
      M = mass; R = radius; G = 6.67428e-11;
    }
    double Density(){
      return 1.0e-3 * M/(4.0*M_PI*R*R*R/3);
    }
    double Gravity(){ return G*M/(R*R); }
  private:
    double M, R, G;
};

int main(){
  Planet Mars;
  Mars.SetMassRadius(6.4e23, 3.4e6);
  cout << "Density = " << Mars.Density() << endl;
  cout << "Gravity = " << Mars.Gravity() << endl;
}
```
Constructors and Destructors
- The `Planet` class uses the `SetMassRadius()` function to initialize its objects. However, you can initialize the values when the object is declared like ordinary variables
```
int p = 35;
string name = "Bjarne";
```
- This is done by means of a constructor function which is a member function called automatically when an object is declared.
- A constructor function must have the same name as the class name and have no return type.
```cpp
// A basic use of class constructor
#include <iostream>
#include <cmath>
using namespace std;

class Planet{
  public:
    Planet(double, double);
    double Density();
    double Gravity();
  private:
    double M, R, G;
};

int main(){
  Planet Mars(6.4e23, 3.4e6), Jupiter(1.9e27, 7.0e7);
  cout << "Mars Density = " << Mars.Density() << endl;
  cout << "Mars Gravity = " << Mars.Gravity() << endl;
  cout << "Jupiter Density = " << Jupiter.Density() << endl;
  cout << "Jupiter Gravity = " << Jupiter.Gravity() << endl;
}
```
```cpp
// continue ...

// Set the mass (kg) and equatorial radius (m) of the planet
Planet::Planet(double mass, double radius){
  M = mass;
  R = radius;
  G = 6.67428e-11;
}

// Mass density in g/cm^3
double Planet::Density(){
  double d = M/(4.0*M_PI*R*R*R/3);
  return d * 1.0e-3;
}

// Surface gravity in m/s^2
double Planet::Gravity(){
  double g = G*M/(R*R);
  return g;
}
```
```
Mars Density = 3.88736
Mars Gravity = 3.6951
Jupiter Density = 1.32242
Jupiter Gravity = 25.8799
```
Pointers to Classes
It is perfectly valid to create pointers that point to classes.
For example:
```c
Planet *p;
```
is a pointer to an object of class `Planet`.
In order to refer directly to a member of an object pointed by a pointer we can use the arrow operator (`->`) of indirection.
```cpp
// Pointer to a class
#include <iostream>
#include <cmath>
using namespace std;

class Planet{
  public:
    Planet(double mass, double radius){
      M = mass; R = radius; G = 6.67428e-11;
    }
    double Density(){ return 1.0e-3 * M/(4.0*M_PI*R*R*R/3); }
    double Gravity(){ return G*M/(R*R); }
  private:
    double M, R, G;
};

int main(){
  Planet *gezegen = new Planet(6.4e23, 3.4e6);
  cout << "Density = " << gezegen->Density() << endl;
  cout << "Gravity = " << gezegen->Gravity() << endl;
}
```
Including a Class from a File
The contents of the main program, and of the class(es), can be placed into separate files.
Then, using the \texttt{#include} directive you can use the class(es) required.
In general, the files containing classes (or functions) are called \textit{header files}. Usually headers have the extension ".h" or ".hpp".
```cpp
#ifndef PLANET_H
#define PLANET_H
class Planet{
public: Planet(double mass, double radius);
double Density();
double Gravity();
private:
double M, R, G;
};
// Constructor function to set the mass and radius of the planet
// By default the planet is assumed to be Earth
Planet::Planet(double mass = 6.0e24, double radius = 6.4e6){
M = mass; R = radius;
G = 6.67428e-11;
}
// Mass density in g/cm3
double Planet::Density(){
return M/(4.0*M_PI*R*R*R/3) * 1.0e-3;
}
// Surface gravity in m/s2
double Planet::Gravity(){
return G*M/(R*R);
}
#endif
```
// Including a class from a file
#include <iostream>
#include <cmath>
using namespace std;
#include "Planet.h"
int main()
{
Planet Mars(6.4e23, 3.4e6), Jupiter(1.9e27, 7.0e7);
cout << "Mars Density = " << Mars.Density() << endl;
cout << "Mars Gravity = " << Mars.Gravity() << endl;
cout << "Jupiter Density = " << Jupiter.Density() << endl;
cout << "Jupiter Gravity = " << Jupiter.Gravity() << endl;
}
Example: ‘A Cat class’
Each object of this class will represent a cat. The class includes the following members (a sketch follows the list):
* a constructor function whose prototype is
```cpp
Cat(int Age=1, double Mass=2.0);
```
to set (initialize) the age and weight of the cat.
* a member function named `void speak()` that outputs a "meow" message.
* a member function named `void kill()` that reduces the cat's lives by one (the cat has nine lives).
* a member function named `double getMass()` to get the mass of the cat.
* a member function named `int getAge()` to get the age of the cat.
* a member function named `int getLife()` to get the remaining life(s) of the cat.
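A possible implementation sketch of this class is given below; the data-member names are assumptions, and other designs would also satisfy the description.

```cpp
#include <iostream>

// A sketch of the Cat class described above (data-member names are assumed)
class Cat {
  public:
    Cat(int Age = 1, double Mass = 2.0) : age(Age), mass(Mass), lives(9) {}
    void   speak()   { std::cout << "meow" << std::endl; }
    void   kill()    { if (lives > 0) lives--; }   // the cat has nine lives
    double getMass() { return mass; }
    int    getAge()  { return age; }
    int    getLife() { return lives; }
  private:
    int    age;
    double mass;
    int    lives;
};
```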
Example: ‘A Point Class’
Each object of this class will represent a Point in x-y plane.
The class includes
* a constructor function whose prototype is
```cpp
Point(double xx=0, double yy=0);
```
to set (initialize) the coordinates (a sketch follows this list).
* a member function named `double distance()` that returns the distance of the point to the origin.
* a member function named `double angle()` that returns the angle w.r.t. the x-axis.
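A possible sketch, using the standard `<cmath>` functions for the distance and the angle:

```cpp
#include <cmath>

// A sketch of the Point class described above
class Point {
  public:
    Point(double xx = 0, double yy = 0) : x(xx), y(yy) {}
    double distance() { return std::sqrt(x*x + y*y); }  // distance to the origin
    double angle()    { return std::atan2(y, x); }      // angle w.r.t. the x-axis (radians)
  private:
    double x, y;
};
```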
Homeworks
In the x-y plane, the general equation of a circle of radius $r$ is given by: $(x - a)^2 + (y - b)^2 = r^2$.
Implement a `Circle` class. Each object of this class will represent a circle, storing its radius ($r$) and the $a$ and $b$ coordinates of its center as doubles. The class must include:
- a default constructor function whose prototype is `Circle(double radius, double centerX, double centerY);`
to set (initialize) radius and center coordinates.
- a member function named `double area()` that returns the area of the circle.
- a member function named `double circ()` that returns circumference.
- a member function named `bool isInside(double x, double y)` that returns true if the given point $(x, y)$ is inside the circle and returns false otherwise.
Assume that the class declaration and its members/methods are stored in the file `Circle.h`. An example usage of the `Circle` is given below:
```cpp
#include <iostream>
using namespace std;
#include "Circle.h"
int main()
{
// a circle whose center is origin
Circle guzelCember(10.0, 0.0, 0.0);
cout << guzelCember.area() << endl;
cout << guzelCember.circ() << endl;
cout << guzelCember.isInside(1.5, 2.7) << endl;
return 0;
}
```
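One possible `Circle.h` that compiles against the usage example above (a sketch under the stated description, not an official solution):

```cpp
// Circle.h - a sketch of the Circle class described above
#ifndef CIRCLE_H
#define CIRCLE_H

#include <cmath>

class Circle {
  public:
    Circle(double radius, double centerX, double centerY)
      : r(radius), a(centerX), b(centerY) {}
    double area() { return M_PI * r * r; }        // M_PI as used in the lecture's examples
    double circ() { return 2.0 * M_PI * r; }
    // true if the point (x, y) lies inside the circle
    bool isInside(double x, double y) {
      return (x - a)*(x - a) + (y - b)*(y - b) < r * r;
    }
  private:
    double r, a, b;   // radius and center coordinates
};

#endif
```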
Implement an RC circuit class. Each object of this class will represent a simple charging RC circuit.
The class must include
- a default constructor function whose prototype is
\texttt{RCircuit(double R, double C, double V0);}
to initialize the values of resistance (R) in Ohms, capacitor (C) in Farads and the potential difference across DC voltage source (V0) in Volts.
- a member function named \texttt{double \ current(double t)} that returns the current in the circuit at given time (in seconds) where \( t > 0 \).
- a member function named \texttt{double \ VC(double t)} that returns potential across the capacitor at given time (in seconds) where \( t > 0 \).
- a member function named \texttt{double \ VR(double t)} that returns the potential across the resistor at a given time (in seconds) where \( t > 0 \).
- a member function named \texttt{double \ tau()} that returns the time constant of the circuit defined by \( T = R \times C \).
Assume that the class declaration and its members/methods are stored in the file \texttt{RCircuit.h}.
Example usage of the RCircuit class is given below:
```cpp
#include <iostream>
using namespace std;
#include "RCircuit.h"
int main(){
RCircuit *Devrem = new RCircuit(2.2e+6, 1.0e-6, 12.);
double time = 0.0;
cout << "time constant: " << Devrem->tau() << endl;
do{
cout << Devrem->current(time) << "\t"
<< Devrem->VC(time) << "\t"
<< Devrem->VR(time) << endl;
time += 0.1;
}while(time < 5*Devrem->tau());
return 0;
}
```
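A sketch of `RCircuit.h` consistent with the usage above, assuming the standard charging-RC relations \( i(t) = (V_0/R)e^{-t/RC} \), \( V_C(t) = V_0(1 - e^{-t/RC}) \) and \( V_R(t) = V_0 e^{-t/RC} \):

```cpp
// RCircuit.h - a sketch of the charging RC circuit class described above
#ifndef RCIRCUIT_H
#define RCIRCUIT_H

#include <cmath>

class RCircuit {
  public:
    RCircuit(double R, double C, double V0) : r(R), c(C), v0(V0) {}
    double tau()             { return r * c; }                           // time constant
    double VC(double t)      { return v0 * (1.0 - std::exp(-t/tau())); } // across the capacitor
    double VR(double t)      { return v0 * std::exp(-t/tau()); }         // across the resistor
    double current(double t) { return VR(t) / r; }                       // circuit current
  private:
    double r, c, v0;
};

#endif
```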
Implement a Square class
Each object of this class will represent a square of given side.
The class includes
* a constructor function whose prototype is
\[ \text{Square(double side = 1);} \]
to set (initialize) the side.
* a member function named \text{double area()} that returns the area of the square.
* a member function named \text{double circ()} that returns the circumference of the square.
* a member function named \text{double diag()} that returns the diagonal length of the square.
**Implement a Cube class**
Each object of this class will represent a cube of given side.
The class includes
* a constructor function whose prototype is
```cpp
Cube(double side = 1);
```
to set (initialize) the side.
* a member function named `double area()`
that returns the total surface area of the cube.
* a member function named `double volume()`
that returns the volume of the cube.
* a member function named `double diag()`
that returns the longest diagonal length of the cube.
---
Algorithm for Automatic Web API Composition
Yong-Ju Lee
School of Computer Information, Kyungpook National University, 386 Gajangdong, Sangju, South Korea
[email protected]
Abstract—Data mashup is a special class of mashup application that combines Web APIs from several data sources to generate a new and more valuable dataset. Although the data mashup has become very popular over the last few years, there are several challenging issues when combining a large number of APIs into the data mashup, especially when composite APIs are manually integrated by mashup developers. This paper proposes a novel algorithm for automatic composition of Web APIs. The proposed algorithm consists of constructing a directed similarity graph and searching composition candidates from the graph. We construct a directed similarity graph which represents the semantic functional dependencies between the inputs and the outputs of Web APIs. We generate directed acyclic graphs (DAGs) that can produce the output satisfying the desired goal. We rapidly prune APIs that are guaranteed not to be involved in the composition in order to produce the DAGs efficiently. The algorithm is evaluated using a collection of REST and SOAP APIs extracted from ProgrammableWeb.
Keywords-automatic composition algorithm; semantic data mashup; ontology learning method; Web API
I. INTRODUCTION
A mashup is a Web application that combines data, presentation, or functionality from several different sources to create new services. An example of the mashup is HousingMaps [1], which displays available houses in an area by combining listings from Craigslist with a display map from Google. A data mashup is a special class of the mashup application that combines data from several data sources (typically provided through Web APIs; these API types are usually SOAP, REST, JavaScript, XML-RPC, Atom, etc.) to generate a more meaningful dataset. Data mashups have become very popular over the last few years. For example, as of August 2012, ProgrammableWeb [2] has published more than 7000 Web APIs. Several mashup tools such as Yahoo’s Pipes, IBM’s Damia, and Intel’s Mashmaker have been developed to enable users to create data mashups without programming knowledge.
Although the data mashup has emerged as a common technology for combining Web APIs, there are several challenging issues. First, since a portal site may have a large number of APIs available for data mashups, manually searching and composing compatible APIs can be a tedious and time-consuming task. Therefore, mashup developers wish to quickly find the desired APIs and easily integrate them without having to expend considerable programming effort. Second, portal sites typically only support keyword search or category search. These search methods are insufficient due to their poor recall and poor precision. To build mashups more efficiently, we need a semantic-based approach such that agents can reason about the capabilities of the APIs that permit their discovery and composition. Third, most mashup developers want to figure out all the intermediate steps needed to generate the desired mashup automatically. An infrastructure is needed that offers users interesting or relevant composition candidates that can possibly be incorporated into existing mashups.
To solve the above issues, we present an algorithm for automatic discovery and composition of Web APIs using their semantic descriptions. Given a formal description of the Web API, a desired goal can be directly matched to the output of a single API. This task is called discovery. If no such API is found, the agent can search for two or more APIs that can be composed to satisfy the required goal. This task is called composition. Since the discovery is a special case of the composition where the number of APIs involved in the composition is exactly equal to one, discovery and composition can be viewed as a single problem.
We define API descriptions to syntactically describe Web APIs, and use an ontology learning method [3] to semantically describe Web APIs. We propose a Web API composition algorithm based on the ontology learning method. The proposed algorithm consists of constructing a directed similarity graph and searching composition candidates. The composition process can be described as that of generating directed acyclic graphs (DAGs) that can produce the output satisfying the desired goal, where the DAGs are gradually generated by forward-backward chaining of APIs. In order to produce the DAGs efficiently, we filter out APIs that are not useful for the composition. The main contributions from this paper are as follows:
- The paper proposes a new efficient algorithm for solving the Web API composition problem that takes semantics into account. The proposed algorithm automatically selects the individual APIs involved in the composition for a given query, without the need for manual intervention.
- Selecting and integrating APIs suitable for data mashups are critical for any mashup toolkits. We show in this paper how the characteristics of APIs can be syntactically defined and semantically described, and how to use the syntactic and semantic descriptions to aid the easy discovery and composition of Web APIs.
- A semantic-based data mashup tool is implemented for lowering the complexity of underlying programming efforts. Using this tool, the composition of APIs does not require in-depth programming knowledge. Users are able to integrate APIs with minimal training.
The rest of this paper is organized as follows. Section 2 begins by introducing our ontology learning method. Section 3 describes automatic Web API discovery and composition algorithms. Section 4 describes an implementation and experiment. Section 5 discusses related work, and Section 6 contains conclusions and future work.
II. ONTOLOGY LEARNING METHOD
The successful employment of semantic Web APIs is dependent on the availability of high-quality ontologies. The construction of such ontologies is difficult and costly, thus hampering Web API deployment. Our ontology learning method [3] automatically generates ontologies from Web API descriptions and their underlying semantics.
A. Parameter Clustering Technique
We have developed a parameter clustering technique to derive several semantically meaningful concepts from API parameters. We consider the syntactic information that resides in the API descriptions, and apply a mining algorithm to obtain their underlying semantics. The main idea is to measure the co-occurrence of terms and cluster the terms into a set of concepts. Formally, we can define an API as follows:
Definition 1: A Web API \( W = < I, O > \) where \( I \) is the input and \( O \) is the output. Each input and output contains a set of parameters for the API.
The input/output parameters are often combined as a sequence of several terms. We utilize a heuristic as the basis of our clustering, in that terms tend to express the same concept if they frequently occur together. This allows us to cluster terms by exploiting the conditional probability of their occurrences in the input and output of Web APIs; specifically, we are interested in association rules [4]. We use the agglomerative hierarchical clustering algorithm to turn the set of terms \( T = \{ t_1, t_2, \ldots, t_m \} \) into the concepts \( C = \{ c_1, c_2, \ldots, c_n \} \). For example, the terms \{zip, city, area, state\} can be treated as one concept, so they are grouped into one cluster.
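As an illustration only (not the implementation used in this work), the following Java sketch clusters parameter terms by their pairwise co-occurrence in API parameter sets; the class name, the greedy single-link merging strategy, and the co-occurrence threshold are simplifying assumptions that stand in for the agglomerative hierarchical clustering described above.

```java
import java.util.*;

// Illustrative sketch: cluster parameter terms by pairwise co-occurrence in
// API parameter sets, using a greedy single-link agglomerative strategy.
public class TermClustering {

    static List<Set<String>> cluster(List<Set<String>> parameterTermSets, int minCoOccurrence) {
        // Count how often each unordered pair of terms appears in the same parameter set.
        Map<String, Integer> pairCounts = new HashMap<>();
        Set<String> allTerms = new TreeSet<>();
        for (Set<String> terms : parameterTermSets) {
            allTerms.addAll(terms);
            List<String> list = new ArrayList<>(terms);
            for (int i = 0; i < list.size(); i++)
                for (int j = i + 1; j < list.size(); j++)
                    pairCounts.merge(key(list.get(i), list.get(j)), 1, Integer::sum);
        }
        // Start with singleton clusters and merge clusters containing a pair of
        // terms that co-occur at least minCoOccurrence times.
        List<Set<String>> clusters = new ArrayList<>();
        for (String t : allTerms) clusters.add(new HashSet<>(Set.of(t)));
        boolean merged = true;
        while (merged) {
            merged = false;
            search:
            for (int a = 0; a < clusters.size(); a++)
                for (int b = a + 1; b < clusters.size(); b++)
                    if (linked(clusters.get(a), clusters.get(b), pairCounts, minCoOccurrence)) {
                        clusters.get(a).addAll(clusters.remove(b));
                        merged = true;
                        break search;
                    }
        }
        return clusters;
    }

    static String key(String t1, String t2) {
        return t1.compareTo(t2) < 0 ? t1 + "|" + t2 : t2 + "|" + t1;
    }

    static boolean linked(Set<String> c1, Set<String> c2,
                          Map<String, Integer> counts, int minCoOccurrence) {
        for (String t1 : c1)
            for (String t2 : c2)
                if (counts.getOrDefault(key(t1, t2), 0) >= minCoOccurrence) return true;
        return false;
    }

    public static void main(String[] args) {
        // Toy parameter term sets from hypothetical API descriptions.
        List<Set<String>> params = List.of(
                Set.of("zip", "city", "state"),
                Set.of("zip", "city", "area"),
                Set.of("hotel", "name"),
                Set.of("hotel", "name", "city"));
        // With a threshold of 2, {zip, city} and {hotel, name} end up in joint clusters.
        System.out.println(cluster(params, 2));
    }
}
```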
B. Pattern Analysis Technique
The pattern analysis technique captures relationships between the terms contained in a parameter, and matches the parameters if both terms are similar and the relationships are equivalent. This approach is derived from the observation that people employ similar patterns when composing a parameter out of multiple terms. Based on the experimental observations, the relationships between the terms are defined in Table 1. Two ontological concepts are matched if and only if one of the following is true: (1) one concept is a property of the other concept, or (2) one concept is a subclass of the other concept.
From the above rules, an agent would be able to find a match based on the similarities of the API. For example, assume that a parameter CityName was to be compared against another parameter CodeOfCity. The keyword search would not count these as a possible match. However, if the City term had the relationships “X propertyOf Y” in its pattern rule, the matching logic will return a matching score because these two parameters are closely related (perhaps using the rules “CityName propertyOf City” and “CodeOfCity propertyOf City”).
<table>
<thead>
<tr>
<th>No</th>
<th>Pattern</th>
<th>Relationships</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Noun1+Noun2</td>
<td>Parameter propertyOf Noun1</td>
</tr>
<tr>
<td>2</td>
<td>Adjective+Noun</td>
<td>Parameter subClassOf Noun</td>
</tr>
<tr>
<td>3</td>
<td>Verb+Noun</td>
<td>Parameter subClassOf Noun</td>
</tr>
<tr>
<td>4</td>
<td>Noun1+Noun2+Noun3</td>
<td>Parameter propertyOf Noun1</td>
</tr>
<tr>
<td>5</td>
<td>Noun1+Preposition+Noun2</td>
<td>Parameter propertyOf Noun2</td>
</tr>
</tbody>
</table>
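To make the pattern rules concrete, the following simplified Java sketch handles only rules 1 and 5 from Table 1; the splitting heuristic, the preposition list, and the class and method names are illustrative assumptions rather than the actual implementation. It reproduces the CityName/CodeOfCity example from above: both parameters relate to the concept City and are therefore considered related.

```java
import java.util.*;

// Simplified illustration of the pattern rules in Table 1 (only rules 1 and 5
// are handled): derive the concept a composite parameter name relates to and
// treat two parameters as related when they map to the same concept.
public class PatternMatch {

    private static final Set<String> PREPOSITIONS = Set.of("of", "in", "for", "by");

    // Split a CamelCase parameter name into lower-case terms, e.g. "CodeOfCity" -> [code, of, city].
    static List<String> split(String parameter) {
        return Arrays.stream(parameter.split("(?<=[a-z])(?=[A-Z])"))
                .map(String::toLowerCase)
                .toList();
    }

    // Rule 5 (Noun1+Preposition+Noun2): parameter propertyOf Noun2.
    // Rule 1 (Noun1+Noun2):             parameter propertyOf Noun1.
    static String relatedConcept(String parameter) {
        List<String> terms = split(parameter);
        if (terms.size() == 3 && PREPOSITIONS.contains(terms.get(1))) return terms.get(2);
        return terms.get(0); // rule 1, or a single-term parameter that is the concept itself
    }

    static boolean related(String p1, String p2) {
        return relatedConcept(p1).equals(relatedConcept(p2));
    }

    public static void main(String[] args) {
        // Both parameters are properties of the concept "city", so they are related.
        System.out.println(related("CityName", "CodeOfCity")); // true
        System.out.println(related("CityName", "HotelName"));  // false
    }
}
```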
C. Semantic Matching Technique
The semantic matching technique estimates the similarity of the input and output by considering the underlying concepts the input/output parameters cover. Formally, we describe the input as a vector of pairs \( I = \langle p_i, C_i \rangle \) (similarly, the output can be represented in the form \( O = \langle p_o, C_o \rangle \)), where \( p_i \) is the set of input parameters and \( C_i \) is the concept that is associated with \( p_i \). Then, the similarity of the input can be found using the following two steps (the output can be processed in a similar fashion): (1) we split \( p_i \) into a set of terms, we then find synonyms for these terms, and (2) we replace each term with its corresponding concepts, and then compute a similarity score.
The similarity score is defined to select the best matches for the given input. Consider a pair of candidate parameters \( p_i \) and \( p_j \); the similarity between \( p_i \) and \( p_j \) is given by the following formula:
\[
\text{Sim}(p_i, p_j) = \frac{2 \times \| \text{Match}(p_i, p_j) \|}{m+n}
\]
where \( m \) and \( n \) denote the numbers of valid terms in \( p_i \) and \( p_j \), and \( \| \text{Match}(p_i, p_j) \| \) returns the number of matching terms. Here, the similarity of each parameter is calculated against the best-matching parameter, i.e., the one that has the largest number of semantically related terms. The overall similarity is computed by a linear combination [3] that combines the similarities of the individual parameters.
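The following minimal Java sketch computes the parameter similarity score defined above; it assumes the terms of each parameter have already been replaced by their concepts (synonym and cluster lookup), and the class and method names are illustrative.

```java
import java.util.*;

// Minimal sketch of the similarity score Sim(p_i, p_j) = 2*|Match(p_i,p_j)| / (m+n),
// where m and n are the numbers of valid terms in the two parameters.
public class ParameterSimilarity {

    static double sim(Set<String> pi, Set<String> pj) {
        Set<String> match = new HashSet<>(pi);
        match.retainAll(pj);                       // |Match(p_i, p_j)|
        return 2.0 * match.size() / (pi.size() + pj.size());
    }

    public static void main(String[] args) {
        // "NameOfCity" and "CityName" reduce to the same concept terms.
        Set<String> p1 = Set.of("city", "name");
        Set<String> p2 = Set.of("city", "name");
        Set<String> p3 = Set.of("country", "code");
        System.out.println(sim(p1, p2)); // 1.0
        System.out.println(sim(p1, p3)); // 0.0
    }
}
```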
Since existing matching techniques based on the clustering consider all terms in a cluster as an equivalent concept and ignore any hierarchical relationships between the terms, matches might exist that are irrelevant to the user’s intention (i.e., false positives). Thus, a pruning process is necessary to improve the precision of the results. The basic idea is to improve the precision of the matching technique by applying the pattern relationships defined in Table 1. For details, readers may refer to our previous work [3].
III. WEB API DISCOVERY AND COMPOSITION
A. Discovery Problem
Given a query and a collection of APIs stored in the registry, automatically finding an API from the registry that
matches the query requirement is the Web API discovery problem. For example, suppose we are looking for an API to search for a hotel. Table 2 shows the input/output parameters of a query and an API. In this example a Web API W satisfies the query Q. Q requires HotelName as the output, and W produces HotelName and ConfirmNumber. The extra output produced can be ignored. W requires CountryCode and NameOfCity as the input, and Q provides CountryID, StateName, and CityName as the input. An API parameter can be matched with another parameter only if there is a semantic relationship between them. Here, although CountryCode and CountryID have different forms, they have the same semantics since they refer to the same concept. Also, NameOfCity and CityName have the same semantics since they are properties of the same object (i.e., City). Therefore, the agent is able to infer that the input parameters of Q and W are semantically equivalent.
<table>
<thead>
<tr>
<th>API</th>
<th>Input Parameters</th>
<th>Output Parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q</td>
<td>CountryID, StateName, CityName</td>
<td>HotelName</td>
</tr>
<tr>
<td>W</td>
<td>CountryCode, NameOfCity</td>
<td>HotelName, ConfirmNumber</td>
</tr>
</tbody>
</table>
We describe an automatic Web API discovery algorithm similar to the one in [5]. An API matches a query when an API is sufficiently similar to the query. This means that we need to allow the agent to perform matches that recognize the degree of similarity between APIs and the query. We define the matching criteria as follows:
**Definition 2:** An API W matches a query Q when all the output parameters of Q are matched by the output parameters of W, and all the input parameters of W are matched by the input parameters of Q.
Definition 2 guarantees that the API found satisfies the needs of the query, and the query provides all the input parameters that the API needs to operate correctly. Our discovery algorithm is shown in Algorithm 1. This algorithm adopts strategies that rapidly prune APIs that are guaranteed not to match the query, thus improving the efficiency of the system. A query is matched against all APIs stored in the registry. A match between a query and an API consists of matching all the output parameters of the query against the output parameters of the API; and all the input parameters of the API against the input parameters of the query. If one of the query's output parameters is not matched by any of the API's output, the match fails. Matching between inputs is computed by the same process, but with the order of the query and API reversed. The similarity score of a match between two parameters is calculated by the semantic matching technique described in the previous section. The APIs are returned in the descending order of similarity scores.
**Algorithm 1:** Discovery Algorithm
```java
//input: query (Q), APIs in the registry
//output: matched APIs, sorted in descending order of similarity score
for all APIs
    if Matching(Q, API) then result.append(API)
return Sort(result)

Matching(Q, API)
    return SemanticMatch(Q.O, API.O) and SemanticMatch(API.I, Q.I)
```
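A minimal, runnable sketch of the matching criteria of Definition 2 is given below; exact string equality stands in for the semantic matching technique of Section II (which is why the example APIs use identical parameter names), and the record and method names are illustrative assumptions.

```java
import java.util.*;

// Runnable sketch of the discovery check in Definition 2 / Algorithm 1.
// Parameters are plain strings; exact equality replaces semantic matching,
// and the scoring/sorting step is omitted for brevity.
public class Discovery {

    record Api(String name, Set<String> inputs, Set<String> outputs) {}

    // Every element of 'required' must be matched by some element of 'offered'.
    static boolean covered(Set<String> required, Set<String> offered) {
        return offered.containsAll(required);
    }

    static boolean matches(Api query, Api api) {
        return covered(query.outputs(), api.outputs())   // all query outputs produced
            && covered(api.inputs(), query.inputs());    // all API inputs provided
    }

    public static void main(String[] args) {
        Api query = new Api("Q", Set.of("CountryID", "StateName", "CityName"), Set.of("HotelName"));
        List<Api> registry = List.of(
            new Api("W",  Set.of("CountryID", "CityName"), Set.of("HotelName", "ConfirmNumber")),
            new Api("W2", Set.of("HotelName"),             Set.of("Location")));

        registry.stream()
                .filter(api -> matches(query, api))
                .forEach(api -> System.out.println("match: " + api.name())); // match: W
    }
}
```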
### B. Composition Problem
Given a query and a collection of APIs, when a matching API is not found, the Web API composition problem is to search for a sequence of APIs that can be composed together. This means that the output generated by one API can be accepted as the input of another API. For example, suppose we are looking for APIs to find a hotel’s location. Table 3 shows the input/output parameters of a query Q, and two Web APIs W1 and W2 in the registry. Suppose the agent cannot find a single API that matches the criteria; it then composes n APIs from the set of Web APIs available in the registry. In this table, W1 returns HotelName as the output. W2 receives it as the input and returns Location as the result. So, the subsequent W2 may use the output produced by the preceding W1 as the input.
<table>
<thead>
<tr>
<th>API</th>
<th>Input Parameters</th>
<th>Output Parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q</td>
<td>CountryID, StateName, CityName</td>
<td>Location</td>
</tr>
<tr>
<td>W1</td>
<td>CountryCode, NameOfCity</td>
<td>ConfirmNumber, HotelName</td>
</tr>
<tr>
<td>W2</td>
<td>HotelName</td>
<td>Location</td>
</tr>
</tbody>
</table>
Now we can define the Web API composition problem as follows:
**Definition 3:** If an API W1 can produce O1 as its output parameters and an API W2 can consume O1 as input parameters, we can conclude that W1 and W2 are composable. Then, the Web API composition problem can be defined as automatically finding a DAG of APIs from the registry.
We describe a Web API as \( <W.I, W.O> \) and a query as \( <Q.I, Q.O> \). A composition is valid if the following conditions are satisfied:
1) \( \exists W_i \; (Q.I \supseteq W_i.I) \)
2) \( \exists W_j \; (Q.O \subseteq W_j.O) \)
3) \( \forall W_i, W_j \), there exists at least one path from \( W_i \) to \( W_j \).
In other words, the APIs in the first stage of the composition can only use the query input parameters. The outputs produced by the APIs in the last stage of the composition should contain all the output parameters that the query requires to be produced. The output from an API at any
stage in the composition should be able to serve as the input to the next API.
The composition problem amounts to achieving a desired goal from the initial request without exposing the underlying composition details. Mashup developers can now simply describe a goal in the form of a query and submit the requirement to our system. If the desired goal can be directly matched to the output of a single Web API, the composition problem reduces to the discovery problem. Otherwise, it can be accomplished by searching for a sequence of APIs that can produce the desired output. Such a sequential composition of APIs can be viewed as searching a DAG that is constructed from the initially given query. In particular, when all nodes in the graph have at most one incoming edge and at most one outgoing edge, the problem reduces to a problem of linearly linked APIs. Because the discovery problem is a simple case of the composition where the number of APIs involved is exactly one, discovery and composition can be viewed as a single problem.
C. Constructing Directed Similarity Graph
In order to speed up the calculation of possible composition plans, we use a pre-computed directed similarity graph that chains the output of one API into the input of another API. The connections between nodes are based on the semantic similarity between the output and input of the nodes. Algorithm 2 illustrates the construction procedure for the graph. At the beginning, we iteratively assign each API in the registry to a vertex. We then establish edges between the vertexes. For each vertex \( v_i \), we check whether its output can be accepted as an input by a vertex \( v_j \) by computing the similarity score. If the output of \( v_i \) is semantically similar to the input of \( v_j \) (i.e., \( \text{Sim}(v_i.O, v_j.I) > 0 \)), then we add a directed edge from \( v_i \) to \( v_j \) and assign it the similarity score. We also check, in the same manner, whether there exists a vertex \( v_j \) whose output can be consumed by \( v_i \) as an input; in that case an edge is added in the reverse direction, from \( v_j \) to \( v_i \). After constructing the directed similarity graph, we solve the composition problem within this graph. This initial graph is dynamically modified if new APIs become available.
**Algorithm 2: Graph Construction Algorithm**
```plaintext
//input: APIs
//output: a directed similarity graph
for all APIs
    v_i = addVertex(API)
for each v_i in V
    for each v_j in V
        if Sim(v_i.O, v_j.I) > 0 then addEdge(v_i, v_j)
        if Sim(v_i.I, v_j.O) > 0 then addEdge(v_j, v_i)
```
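The following Java sketch mirrors Algorithm 2 under simplifying assumptions: a term-overlap ratio stands in for the semantic Sim() function, and iterating over all ordered vertex pairs covers both edge directions of the pseudocode. The record and method names are illustrative, not part of the described system.

```java
import java.util.*;

// Sketch of Algorithm 2: build the directed similarity graph over a set of APIs.
// An edge vi -> vj means the output of vi can feed the input of vj.
public class SimilarityGraph {

    record Api(String name, Set<String> inputs, Set<String> outputs) {}
    record Edge(Api from, Api to, double score) {}

    // Simple stand-in for the semantic similarity between an output and an input.
    static double sim(Set<String> out, Set<String> in) {
        Set<String> match = new HashSet<>(out);
        match.retainAll(in);
        return in.isEmpty() ? 0.0 : (double) match.size() / in.size();
    }

    static List<Edge> build(List<Api> apis) {
        List<Edge> edges = new ArrayList<>();
        for (Api vi : apis)
            for (Api vj : apis) {
                if (vi == vj) continue;
                double score = sim(vi.outputs(), vj.inputs());
                if (score > 0) edges.add(new Edge(vi, vj, score)); // vi's output feeds vj
            }
        return edges;
    }

    public static void main(String[] args) {
        List<Api> apis = List.of(
            new Api("W1", Set.of("CountryCode", "CityName"), Set.of("HotelName")),
            new Api("W2", Set.of("HotelName"), Set.of("Location")));
        build(apis).forEach(e ->
            System.out.println(e.from().name() + " -> " + e.to().name() + " (" + e.score() + ")"));
        // prints: W1 -> W2 (1.0)
    }
}
```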
D. Graph-based Composition Algorithm
Our graph-based composition algorithm can be described as that of generating DAGs that can produce the output satisfying the desired goal. In order to produce the DAGs efficiently, we rapidly filter out APIs that are not useful for the composition. We extend our discovery algorithm to handle the composition problem. The algorithm is based on a modified Breadth-First Search (BFS) algorithm [6] which can find a shortest path from a source vertex to a target vertex. We solve the composition problem in four main stages: searching sub-graphs, adding start nodes, validating candidates, and ranking candidates.
**Searching sub-graphs:** First, we search the API registry for any API that has all the output parameters of the query (we call these “last nodes”), and any API that has at least one of the input parameters of the query (we call these “first nodes”). After this search, it is assumed that non-empty sets are obtained for the first and last nodes. The next step is to create an n-ary tree for every last node by visiting all the nodes connected to that last node. Such a tree is constructed by recursively including nodes and edges from the directed similarity graph until we reach the first nodes. We use the BFS algorithm to solve this problem. Now we can find all the possible composition candidates from the trees. Figure 1 shows a general overview of the query and the matching APIs before constructing the overall composition plans.

**Adding start nodes:** In this stage, a start node is added to each of the trees. The start node is a special dummy node for a dynamically created API, namely the API that provides the input of the query. The start node is represented as \( W_0 = <\emptyset, Q.I> \), namely \( W_0 \) is an API in a tree with no input, having only an output. Finding a possible composition candidate consists in generating a DAG from the start node to the last node in the trees. When a possible composition candidate has been found, all the nodes participating in the composition should be validated in the next stage.
**Validating candidates:** A possible composition candidate is valid if all nodes in the composition can be executed (non-)sequentially in order to produce the desired results. This validation is done by starting from the start node and working our way backwards. At this point, the first nodes consist of all the APIs whose inputs are all provided by the start node. Let \( O_1 \) be the union of all outputs produced by the first nodes in the composition, and \( I_1 \) (i.e., \( Q.I \)) be the query input. Inputs for the second nodes are all the outputs
produced by the previous nodes and the query input, i.e., \( I_2 = O_1 \cup I_1 \). The combination \( I_2 \) will be the available input for the next nodes. This transition (i.e., \( I_{i+1} = O_i \cup I_i \)) is repeated until the last node is reached, removing redundant nodes which do not contribute to the optimal path at each step.
**Ranking candidates**: A DAG is considered as a composition candidate only if it meets the requirements of the output and input described in the query. It means all output parameters of the query must be obtained, and partly or fully the input parameters of the query must be consumed. After a composition candidate has been found, we gather all the similarity data from the edges involved in the composition in order to compute a similarity score. This score is calculated by the average value of all the similarity data related to the edges, and the ranking of the composition candidate is determined by the score. The list of composition candidates is ordered according to this ranking score and the head of the list is considered the best, recommended option for the user. Algorithm 3 illustrates our graph-based composition algorithm.
**Algorithm 3**: Composition Algorithm
```plaintext
//input: query (Q), a directed similarity graph
//output: ranked composition candidates
if SemanticMatch(Q.O, API.O) is empty then fail
if SemanticMatch(API.I, Q.I) is empty then fail
for each last node
    call the BFS algorithm
    create n-ary trees
for each tree
    add a start node to the tree
    generate a DAG from the start node to the last node
    //validate possible composition candidates
    i = 1, I_1 = Q.I
    L_1 = NextApiList(1)
    while not (last node reached and L_i is empty)
        O_i = UnionAllOutputs(L_i)
        I_{i+1} = O_i ∪ I_i
        L_{i+1} = NextApiList(i+1)
        remove redundant nodes
        i = i + 1
    endwhile
endfor
rank composition candidates
```
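The validation stage can be sketched as the following Java routine, where the available input set grows by \( I_{i+1} = O_i \cup I_i \) until the query outputs are covered or no further API becomes executable. This is an illustration only; the API names loosely echo the node numbers of the experiment in Section IV, but the parameter sets and method names are invented.

```java
import java.util.*;

// Sketch of the validation stage: starting from the query input (the start
// node W0), repeatedly execute every API whose inputs are already available
// and add its outputs, until the goal outputs are covered or no progress is made.
public class CompositionValidation {

    record Api(String name, Set<String> inputs, Set<String> outputs) {}

    static boolean valid(Set<String> queryInputs, Set<String> queryOutputs, List<Api> candidate) {
        Set<String> available = new HashSet<>(queryInputs);   // I_1 = Q.I
        Set<Api> remaining = new LinkedHashSet<>(candidate);
        boolean progress = true;
        while (progress && !available.containsAll(queryOutputs)) {
            progress = false;
            for (Iterator<Api> it = remaining.iterator(); it.hasNext(); ) {
                Api api = it.next();
                if (available.containsAll(api.inputs())) {     // executable at this stage
                    available.addAll(api.outputs());           // I_{i+1} = O_i ∪ I_i
                    it.remove();
                    progress = true;
                }
            }
        }
        return available.containsAll(queryOutputs);
    }

    public static void main(String[] args) {
        List<Api> dag = List.of(
            new Api("W9",  Set.of("zipcode"), Set.of("city")),
            new Api("W72", Set.of("zipcode"), Set.of("latitude", "longitude")));
        System.out.println(valid(Set.of("zipcode"),
                                 Set.of("city", "latitude", "longitude"), dag)); // true
    }
}
```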
IV. IMPLEMENTATION AND EXPERIMENT
We developed a semantic-based data mashup tool. The system architecture is shown in Figure 2. The composition planner is responsible for planning to achieve the composition relevant to the desired goal. It captures the current composition states and dynamically composes relevant APIs that can be added to the mashup. The mashup engine interprets the composition of corresponding APIs and displays the immediate results. In the graphical user interface (GUI), mashup developers can obtain the immediate composition results visually and iteratively refine their goals until the final results are satisfactory. The ontology learning method automatically builds semantic ontologies from Web API descriptions.
To experiment with the data mashup tool we extracted a collection of REST and SOAP APIs from ProgrammableWeb. To avoid potential bias, we chose different APIs from different domains. We first collected a subset of REST APIs associated with three domains: weather, travel, and mapping. This set contains 63 APIs. Next, we collected a subset containing 17 SOAP APIs from three domains: zipcode, location, and search. In Figure 3, we show a directed similarity graph obtained from our experimental dataset. The graph consists of 80 nodes and 123 edges.
A possible query for the Web API composition is given as follows: \( Q_I = \{ \text{zipcode} \}, \ Q_O = \{ \text{city, latitude, longitude} \} \). The composition result is exemplified by part of the directed similarity graph as shown in Figure 4. From the registry our engine has discovered 8 last nodes (dark grey circles) and 7 first nodes (light grey circles). We call the BFS algorithm and create an n-ary tree for each last node. This is repeated until all the last nodes are reached.
A total of 3 possible composition candidates have been automatically generated from the graph. As we have mentioned in Section 3.D, a start node \( W_0 \) is added to each tree and the validation of candidates is performed for optimal paths. After running the validation, final composition candidates are selected and similarity scores are calculated. In Table 4, we list these ranked composition candidates.
To evaluate our composition quality, we check how many of the desired goals are captured by the composition algorithm. We can observe that two thirds of all the recommended results in Table 4 have desired or relevant goals. Although the 3rd-ranked result turns out to be invalid, as it does not satisfy the user requirement, the top 2 ranked results have the desired composition plans. These results show that our algorithm can generate most user-desired outputs.

(a) Discovered last and first nodes (b) n-ary trees
Figure 4. Result of Graph-Based Composition Algorithm
<table>
<thead>
<tr>
<th>Rank</th>
<th>Score</th>
<th>DAG</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.625</td>
<td>W_0 → (9, 72) → 23</td>
</tr>
<tr>
<td>2</td>
<td>0.550</td>
<td>W_0 → (9, 72) → 24</td>
</tr>
<tr>
<td>3</td>
<td>0.222</td>
<td>W_0 → (9, 72, 65, 66, 67) → 21 → 11 → 12 → 25</td>
</tr>
</tbody>
</table>
### V. RELATED WORK
Most research addressing the automatic composition problem has focused on the composition of SOAP-based Web services. Various techniques have been used for this purpose, such as graph-based search algorithms [7] and AI planning [8]. However, the work presented in this paper is not limited to composing SOAP-based Web services, but also considers REST, JavaScript, XML-RPC, and Atom Web APIs. The use of graph-based search algorithms to solve the composition problem has been studied before. Kona et al. [7] propose an automatic composition algorithm for semantic Web services. Rodriguez-Mier et al. [9] propose a heuristic-based search algorithm for automatic Web service composition. Shiah et al. [10] present an incremental graph-based approach to automatic service composition. These works are similar to our study. However, they cannot find an optimal solution, and do not support various Web API protocols.
We recently proposed an automatic Web API composition algorithm [11] to handle the sequential composition problem. This paper is an extension of our previous work and focuses on the (non-)sequential composition that can be represented in the form of directed acyclic graphs (DAGs). This is the most general case of the Web API composition.
### VI. CONCLUSIONS AND FUTURE WORK
This paper presents an algorithm for automatic Web API composition. The algorithm is based on a graph-based approach, where composition candidates are gradually generated by forward-backward chaining of APIs. Our algorithm can obtain optimal plans by applying strategies that rapidly prune APIs that are guaranteed not to match the query. A key issue is how to locate the desired APIs. Efficient discovery can play a crucial role in conducting further API composition. We define API descriptions that syntactically describe Web APIs, and use an ontology learning method that semantically describes APIs. These syntactic and semantic descriptions allow the agent to automate the composition of Web APIs.
Our future work focuses on investigating performance and scalability measures for the proposed graph-based composition algorithm. In this way we aim to optimize the functionality of our system. We are also exploring various optimization techniques that can be applied to the algorithm. For example, a heuristic AI planning technique can be used to find an optimized solution with a minimal number of paths. The use of dynamic optimization techniques over the graph can greatly improve the effectiveness and efficiency of our approach.
### ACKNOWLEDGMENT
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (No. 2010-0008303).
### REFERENCES
Contract-based Component System Design
Holger Giese
Institut für Informatik, Westfälische Wilhelms-Universität,
Einsteinstraße 62, 48149 Münster, GERMANY
[email protected]
Abstract
Component technology tries to solve many problems of today's software industry practice: productivity and product quality should be increased, and a better infrastructure for the maintenance of products is promised. The integration of off-the-shelf components to build customized products makes it possible to outsource the development of general-purpose components. A crucial prerequisite for the intended scenario of component usage is their strong separation. Especially in a distributed environment, synchronization aspects are of great importance to identify a suitable architecture and to decide whether a component matches some requirements. The presented approach allows modeling the synchronization aspect of contracts in a flexible manner, covering a whole spectrum of different degrees of preciseness, from the declaration of abstraction barriers to complete synchronization specifications describing the explicit behavior. The Petri-net-based OCoN behavior specification formalism used here is structurally embedded in the UML and supports the analysis and design of component systems.
1. Introduction
The complexity of today's software projects is continuously growing, and so does the need for sophisticated system analysis and design. Object-oriented analysis and design [5, 32, 22, 12] offers methods for the analysis, design and implementation of systems in a seamless fashion. In contrast to structured analysis [13], the transition from design to implementation is more continuous. Traditionally, object-oriented techniques are used to specify fine-grain structures using classes and their relations. Normally, one of the popular object-oriented programming languages, like C++, is chosen as the target language. Often, the overall architecture or the coarse-grain structure has been neglected or even ignored altogether. On the other hand, a dedicated design of a suitable software architecture [34] is often needed to improve software quality and to provide better maintainable products. But the hope that object technology can be used to establish systematic reuse has failed. The shift from objects to components reflects these additional requirements. A fixed architectural basis and system-level mechanisms instead of programming-language mechanisms are the crucial point to handle the described additional requirements and to achieve a more flexible notion of the composition of elements. Component technology [37] goes one step further than object-orientation as a language feature by decomposing an application or system into runtime elements that can be built, analyzed, tested and maintained independently. The integration of available off-the-shelf components into applications and their combination can help to further improve productivity and decrease the time to market in the software industry.
A development method for component based applications and systems must be aware of additional problems. The design is further separated into component design, where a single independent shippable product for general use is the intention, and component system design, which considers combination and configuration of given components or the decomposition of a task into given and application specific components. Component design is restricted to isolated components having a fixed contract with the environment, while the component system design has to consider the coarse grain design and separation. The isolation between design and implementation of a component has to be supported by the architecture and a suitable separation. Otherwise the postulated component exchangeability and independence between component provider and component integrating products is not realistic. Both kinds of design problems have to face the resulting problems of late integration. The knowledge of common models for software testing using module and integration testing is not sufficient any more. The component notion of quality has to satisfy higher expectations, because the late integration phase is not available for testing any more. Thus, software components have to be more robust than usual applications. This additional demand for software quality may delay the development of a component market. The support for maintenance,
management and configuration has to be integrated into the component infrastructure.
Up to now, software products often provide isolated solutions for business or industry applications. Today software begins to interlink the different isolated information system structures. Interoperability, flexible data exchange and sharing, as well as support for group work, become essential requirements. Thus, distribution and concurrency are aspects that further generations of software have to manage.
The presented approach provides techniques and notations to tackle the additional requirements of component design. Structure and connections of component systems are specified using the structure description notations of the UML [31], the de-facto standard for object-oriented modeling. The common notion of interfaces is extended by a protocol to support contract-based design for components. Synchronization restrictions can further be specified in a flexible manner to describe dependencies between different contracts of the same component. Thus, the concrete interaction can be specified and architectural aspects become more obvious.
In the following section, several relevant characteristics of components and the available technology are discussed. Then, component synchronization and its impact is considered in section 3. The proposed approach is sketched in section 4 and its structural embedding into the UML is presented. An example in section 5 presents several different design decisions and their modeling with the approach. The article closes with some remarks on related work.
2. Component Notion
A general notion of a component should also include traditional component types like libraries or modules. Even when they do not support all characteristics of today’s off-the-shelf component concepts, it is important to keep the basic concepts and their implications in mind. Besides the pure off-the-shelf component notion, there may exist several levels of component usage, which are of interest, too. Imported and exported types of a component are a relevant aspect as well as its connections with the environment. Szyperski [37, 38] defines a component as follows: "A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties." For each interface a component has either contractual obligations or demands, and thus at least some kind of informal contract exists for each of them.
The general description of a component consists of the component or subsystem itself and its imported or exported contracts. To make the contract notion more concrete, the approach clearly distinguishes between exported contracts, called provided, and imported contracts, called used, w.r.t. a component. For provided contracts, the component has the obligation to serve them, and for used ones the component may demand several contractual properties.
To figure out which aspects are of importance for a suitable contract notion, several characteristics of today’s component concepts like linkage time and linkage typing are discussed next. Afterwards, the additional constraints for component design and the need to consider the synchronization between components are demonstrated.
A central characterization for component contracts or connections is the point in time when the connections are established (linkage). The traditional cases are static linkage of subcomponents at construction time of a program, dynamic linkage during the program startup or runtime linkage where running components are interconnected.
The typing of linked connections is also of considerable interest. For interprocess communication, untyped interaction based on streams, shared memory, etc., or even abstract synchronization with mutexes or semaphores is used, while linking programs, modules and libraries often supports procedure typing on the compiler level. The case of runtime linkage is of special interest for today’s component technology. Several levels of typing have been introduced. On the socket level, several services based on TCP connections have been standardized (ftp, nfs, http, etc.). To further support remote or local procedure call client/server interaction, common packet formats and integrated marshaling stubs (e.g. DCE [9]) have been used. These approaches still provide only a host-server abstraction, while object-oriented extensions introduce the object or interface notion to make service access points first-class elements. CORBA [27] started from scratch in 1989 as an initiative to build an interoperable object bus standard with suitable infrastructure. Its main antagonist is Microsoft’s DCOM [10], which is a step-by-step extension of COM (component object model, formerly named common object model). These approaches allow sending and distributing interface references as ordinary parameter values. Java RMI [36] further extends this development by also supporting the object-per-value discipline within its remote method invocation mechanism. CORBA, DCOM and Java RMI are enabling technologies which provide typed component linkage at runtime. To discuss this development, the relevant aspects for runtime linkage are of interest.
When untyped basic mechanisms like TCP sockets are the linkage mechanism, a suitable connection has to be described by defining all valid packet formats and an agreement on the protocol built upon the packet formats. When abstracting from the basic TCP protocol steps to establish a connection, often simple stateless protocols like the common basic HTTP [3] protocol, which uses the request/response scheme, are used. These protocols provide
a high degree of independence, which is often useful in a distributed environment.
The further improved typing of client/server approaches manages the error-prone encoding of packets and provides the higher-level concept of a remote procedure call. In general, this basic scheme of interaction does not make an explicit interaction protocol obsolete. Client/server systems often provide a stateless protocol; e.g., NFS [35] is based on a standardized remote procedure call mechanism and is a stateless and idempotent protocol to handle connection aborts and re-transmissions. It is remarkable that common network-based services like NFS avoid any complex interaction with third components and thus form final leaves in the component tree or directed acyclic graph.
The CORBA or DCOM object bus approaches provide the illusion of a virtual object space, where interfaces instead of hosts abstract from physical locations. These references can be further distributed to make them available to other clients. But, their typing notion is still restricted to the syntactical interface aspect. Complex protocols and their impact on a correct cooperation are not considered. The object bus interface concept does essentially combine data and behavior by applying the object metaphor. Thus, the resulting protocols might not always remain stateless as common for the design of services like NFS or HTTP.
Traditionally, the basic mechanisms used for component reuse and static linkage are the libraries, which provide a procedural abstraction with strictly acyclic dependency layers. The explicit sharing of resources is avoided where possible. The common components for dynamic linkage are either named shared libraries or dynamic link libraries (DLLs). They support a perfect separation for the using clients and provide the perfect illusion of exclusive usage, too. Also, a layered structure from the operating system API up to domain-specific or more comfortable libraries is common. Both scenarios provide contracts in an exclusive fashion and abstract from code or data sharing. The off-the-shelf component concept, in contrast, is intended to support arbitrary structures and has to be able to allow more sophisticated interaction concepts like callbacks. Also, the restriction to stateless protocols is often not possible.
Besides the basic object bus infrastructure and a communication mechanism, component-based development requires further aspects. DCOM supports components with its ActiveX or DNA architecture, and Java does so with Enterprise Beans (EJB) [23]. A specification of a component model for CORBA is under development (see [1]). These component models improve the basic object bus technology by specifying interfaces for several basic component management aspects and support for component life cycles. But besides these technical solutions to obtain interoperable runtime components, the necessary contract specification is neglected. In contrast to the former definition for components, which emphasizes the contract principle [24] as an essential aspect of any component technology, the specification of contracts in practice is not supported by any object bus technology. Instead, the handling of interface contracts is assumed to take place in additional specification documents, and additional features like unique interface version numbers are used to achieve consistency.
### 3. Component Synchronization
Szyperski [37] identifies another serious problem occurring when callbacks are used. He demands that re-entrance conditions be specified to cover these problems, but re-entrance is only a special case of the more general question of how components may synchronize. When state-based protocols are considered and concurrency is present, a general treatment of synchronization aspects is needed.

The structural situation of a callback is visualized in figure 1. The provided and used contracts form a cyclic dependency; thus the classical procedural abstraction fails, and synchronization aspects have to be considered instead. In classical layered hierarchical systems, callbacks against the hierarchy, called up-calls [11], cause several problems and force the library designer to provide a consistent library state even during such calls.
Using thread-safe objects does not ensure that systems are also re-entrance safe. Phenomena like self-recursion and re-entrance patterns can additionally lead to deadlocks (so-called self-inflicted deadlocks [8]). But even in simple cases, system malfunction may be caused by synchronization effects. Consider, for example, the case of a component with a single thread of control. When it calls another component via a remote procedure, it is blocked until the request is processed, and thus any callback is blocked. If the called component waits for the callback to fulfill the request, at least the first component is totally blocked forever due to a resource conflict concerning its single thread. For components in a distributed environment the situation becomes even more complex, and the system operation may critically depend on the request scheduling strategy of the implementation.
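The single-thread scenario can be illustrated with the following Java sketch (an illustration of the described effect, not taken from the original article): component A's only thread performs a blocking call to B, B tries to call back into A before replying, and neither side can proceed. A timeout is used solely so that the demonstration terminates instead of hanging forever; all class and method names are invented.

```java
import java.util.concurrent.*;

// Sketch of the single-thread callback deadlock: A's only thread blocks on a
// call to B, while B waits for a callback that can only run on A's thread.
public class CallbackDeadlock {

    static final ExecutorService componentA = Executors.newSingleThreadExecutor();
    static final ExecutorService componentB = Executors.newSingleThreadExecutor();

    // B needs a callback into A to fulfil the request.
    static String serviceOfB() throws Exception {
        Future<String> callback = componentA.submit(() -> "data from A");
        return "B finished using " + callback.get();   // waits for A's (blocked) thread
    }

    public static void main(String[] args) throws Exception {
        // A's single thread performs a blocking request to B.
        Future<String> request = componentA.submit(
                () -> componentB.submit(CallbackDeadlock::serviceOfB).get());
        try {
            System.out.println(request.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            System.out.println("deadlock: A's only thread is blocked, the callback never runs");
        }
        componentA.shutdownNow();
        componentB.shutdownNow();
    }
}
```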
Object-oriented type structures often contain cycles (recursive data types), but traditional object-oriented systems were not concurrent, and, hence, this aspect has often been ignored. In the case of multiple threads or concurrency in
general, the synchronization becomes even more important. Consider as an example the classical recursively defined directory class. A first version may not support file links, and thus cycles are excluded. But when links are also considered, a possibly cyclic structure is described. Common realizations like file systems reflect this by extending related tools to prevent infinite processing (e.g., the Unix find command). Directory-like structures in distributed systems are found in an Internet name server. There, an asynchronous update scheme is used, and thus no update request can lead to infinite processing, because only the local cache content is propagated. The CORBA name service [28] also provides directory access in such a way that any direct usage of related directories is excluded. Instead, the client has to traverse the structure on its own. By avoiding any global operation, the synchronization and termination problems can be excluded, but the complexity is left to the clients.
Object protocols with states or some kind of life cycle are common in object-oriented systems. The possible processing orders are specified, for example, by using Harel statecharts [19] in OMT [32] and path expressions in FUSION [12]. The life cycle or protocol describes the possible non-uniform service availability provided by the object.

Consider, for example, the read file handle protocol presented in figure 2. Reading data chunks is only supported after the file has been opened (open). Then, data chunks can be read until the end of file (EOF) is reached. When the file is closed (close), no read operation is available anymore. The OCoN notation [17] is used to describe the resulting state changes and the available operations in each state as well as the resulting state. Hexagons represent possible states, and actions, consisting of a call and a return step with possibly multiple return alternatives, are represented by squares.
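For illustration, the read file handle protocol can be approximated by a plain state machine in Java; the class, the chunk counter, and the exception-based rejection of unavailable operations are illustrative simplifications and not part of the OCoN notation.

```java
// Sketch of the read-file-handle protocol of Figure 2 as a plain state machine:
// read is only available between open and close, and reading past the end of
// the data leads to the EOF state.
public class ReadFileHandleProtocol {

    enum State { CLOSED, OPEN, EOF }

    private State state = State.CLOSED;
    private int remainingChunks;

    void open(int chunks) {
        if (state != State.CLOSED) throw new IllegalStateException("already open");
        remainingChunks = chunks;
        state = State.OPEN;
    }

    // Returns false when the end of file is reached (EOF return alternative).
    boolean read() {
        if (state != State.OPEN) throw new IllegalStateException("read not available in state " + state);
        if (remainingChunks == 0) { state = State.EOF; return false; }
        remainingChunks--;
        return true;
    }

    void close() {
        if (state == State.CLOSED) throw new IllegalStateException("not open");
        state = State.CLOSED;
    }

    public static void main(String[] args) {
        ReadFileHandleProtocol handle = new ReadFileHandleProtocol();
        handle.open(2);
        while (handle.read()) { /* consume a data chunk */ }
        handle.close();
        System.out.println("protocol followed: open, read until EOF, close");
    }
}
```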
The combination of components during the component system design is different from combining and designing classes during the fine-grain object-oriented design. The object-oriented techniques support encapsulation by private and public access to classes. This style does not fulfill the additional requirements for separation. CORBA, DCOM and Java RMI use interfaces to decouple specification and implementation, but additional information necessary to ensure a correct integration is missing. As demonstrated, the syntactical interface typing does not cover all relevant aspects for component composition. It determines all message formats of a protocol by defining a standard encoding, but it does not describe which processing order is needed. Only interfaces with stateless protocols in situations without re-entrance and cyclic structures are covered. Suggested trace-based extensions [29, 30, 25] can exclude the occurrence of message-not-understood errors, but fail to consider synchronization effects.
Following Szyperski [37], a contract should contain a functional specification, usually given by pre- and postconditions, and non-functional requirements, often named service level or quality of service, containing aspects like availability, throughput, latency and capacity. As demonstrated above, synchronization is another important aspect, but behavior modeling is an inherently complex problem. Beugnard et al. [4] present a contract hierarchy that systematically distinguishes basic contracts, which represent the common interface notion, behavioral contracts, which provide pre- and postconditions, synchronization contracts for several request synchronization policies, and quality-of-service contracts covering aspects like availability, throughput, latency, etc.

When the interaction of arbitrary structured systems of components is considered, the synchronization is of crucial importance. The abstraction assumed for a component (figure 3, left-hand-side) is usually characterized by an abstraction barrier (middle), while the real synchronization (right-hand-side) does not respect it. A formalism like a finite state machine (FSM) has to be used to describe the behavior aspect using states and transitions.
When components are connected, their external synchronization specifications have to be combined to obtain the resulting behavior. This explicit combination leads to serious problems already for very restricted system models. For the finite state machine formalism chosen in figure 3 and every more expressive formalism, the state space grows exponentially, which is known as the state explosion problem [39] in system analysis. Formal approaches to system verification and validation try to overcome this problem, but the explicit modeling of interaction includes several aspects, like synchronization distances, which contradict this.
But this problem also has a crucial impact on system design with respect to change impact. The exponentially growing model sizes coincide with an exponentially growing number of implicit implementation dependencies. The transitive nature of synchronization for two connected component systems causes this problem. Thus, changing a component implementation may influence every other implicitly connected one. This effect can be prevented by restricting the general interaction and structure, as done in the case of libraries. The approach proposed here avoids the demonstrated problems by using the abstraction barrier, visualized in figure 3, when suitable, and the explicit specification of synchronization when needed. The synchronization can even be described with different levels of preciseness. This also improves the resulting situation for the design of the system. Callbacks or even cyclic structures introduce complex interaction dependencies, and the concrete external behavior has to be specified very early in the design. Otherwise, both involved components cannot be further considered in isolation. Thus, the proposed approach combines improvements for analysis and design as well as formal modeling by supporting abstraction barriers as a design principle and as a mechanism to make a formal analysis feasible.
4. Contract-based Design
The presented approach emphasizes contract-based design to improve separation using synchronization contracts, extends the contract notion to cover bilateral interaction in a manner that still leads to unilateral dependencies, and supports the explicit design of component contract structures and cycles.
The formalism of the OCoN approach [41, 16, 17, 18] for seamless object-oriented behavior modeling is also used to cover the behavioral aspects of components. OCoNs (Object Coordination Nets), formally defined in [15], are a special form of Petri nets [6] and are used to describe the possible protocol interactions in a visual manner. These nets specify the intended interaction and allow describing procedure-call and message-passing oriented interaction within one formalism. In object-oriented design practice, behavioral aspects are often only considered when already implementing the system. Thus, synchronization aspects have not been well or completely documented during the design phase, and the needed information concerning the synchronization with the environment is usually not available during the design. In contrast, the OCoN approach supports the modeling of synchronization and coordination aspects during the design. The resulting component specification can be extracted from the component design and not from the implementation. On the other hand, if contracts are specified during the decomposition of the system, new general-purpose or application-specific component specifications including their synchronization behavior are obtained.
By emphasizing the contract idea, the using and providing components have to agree on each contract. Two parts of a contract description can be distinguished: a protocol describing the provided coordination sequences, and a functional specification given by pre- and postcondition formulas. While the protocol is already considered during the design phase, the pre- and postconditions can only be used for verification and runtime checks. Thus, the approach concentrates on the protocol aspect, which can be supported by tools for restricted models.
The contract notion is of central relevance for the design process. Nierstrasz [26] proposes to add a finite state machine to an object interface to build regular types. This approach is extended by also integrating the occurrence of return alternatives and spontaneous contract behavior into the protocol specification. Instead of error-prone direct callback designs, an encoding into the protocol states and spontaneous behavior can be used in most cases. A client thus obtains a unilateral contract which does not contain any obligations for the client side. The only exception is that the replies for pending operation calls must at least be buffered by the client to exclude blocking of the called component.
As an example, consider the Observable contract presented in figure 4, which provides a solution for the observer pattern that is still unilateral concerning the synchronization and typing dependencies. An additional arbitrary state change of the observable contract is modeled using a quiescent step [15]. Its occurrence is neither determined nor guaranteed. The client may observe the state change and perform an update as needed. The contract, which is still unilateral, can thus be used to avoid cyclic dependencies as introduced by the general callback scheme presented in figure 1. The approach therefore integrates bilateral interaction into a unilateral contract and still provides a maximal degree of flexibility for the using side (client).
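A minimal sketch of the idea in plain Java (hypothetical class names, not the OCoN notation): the observable only advances a version counter on each spontaneous state change, and the client polls it when convenient, so the observable carries no obligation towards, and no reference to, its observers.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch (illustrative only): a unilateral "observable" without callbacks.
public class PollingObservableDemo {
    static class Observable {
        final AtomicLong version = new AtomicLong();
        volatile String state = "initial";
        // Spontaneous ("quiescent") state change: no client is notified directly.
        void change(String s) { state = s; version.incrementAndGet(); }
    }
    static class Client {
        long seen = -1;
        // The client decides when to look; an update is performed only if needed.
        void updateIfChanged(Observable o) {
            long v = o.version.get();
            if (v != seen) { seen = v; System.out.println("observed: " + o.state); }
        }
    }
    public static void main(String[] args) {
        Observable o = new Observable();
        Client c = new Client();
        c.updateIfChanged(o);   // nothing new yet
        o.change("configured"); // spontaneous change, no callback issued
        c.updateIfChanged(o);   // client polls and observes the update
    }
}
```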
The component behavior can also be specified in an operational fashion using the OCoN approach (see [17]). The contract protocol can then be used to simulate parts of a system in an abstract fashion by representing the environment by its contract protocols. But such an explicit design, including several component-internal aspects, is not suitable in general. A more abstract and implicit solution is needed.
Traditionally, architectural aspects are often neglected in object-oriented system design. By considering connections, like connectors, to be first-class elements of an architecture (see [2]), the architectural aspect can be specified adequately. In order to apply the concept of connectors and the contract principle, a UML <<contract>> stereotype is introduced that contains an interface describing a set of interaction steps and a protocol description specifying the supported interaction orders. A single contract is unilateral and describes what behavior one interface of a component assures and how another component can interact with it. For a more detailed description see [18, 15].
To provide the demanded component specification, the synchronization of provided and used contracts has to be specified. Two situations for contracts are further distinguished. Either they are simple and their guaranteed operations are not restricted, or an additional <<synchronization>> stereotype is used to further restrict the protocol by introducing synchronizations with other provided or used contracts of the same component (see figure 5). These synchronization declarations are added to each component type and are additionally visualized using a dashed box around every covered contract. Each contract can take part in at most one such synchronization, and thus the dashed rectangles of one component cannot share any contract declaration. The Through synchronization presented in figure 5 describes how the put and get operations of the provided contract InOut p are mapped to the used contract u of the same type. The actions with a shadow describe the processing of incoming requests for the p contract, while the usual actions specify the requested operations for contract u. The synchronization is described using untyped places (circles) and additional pre- and postcondition arcs. Each requested put or get is forwarded from p to u, and the return is processed vice versa.
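In conventional code, such a Through synchronization roughly corresponds to a component that forwards each request on its provided interface to a used interface of the same type. The following is a minimal, hypothetical Java sketch of that idea (the InOut interface and class names are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch (illustrative only): provided contract p realized by
// forwarding every request to a used contract u of the same type.
public class ThroughDemo {
    interface InOut<T> { void put(T item); T get(); }

    static class Through<T> implements InOut<T> {
        private final InOut<T> used;                  // the used contract u
        Through(InOut<T> used) { this.used = used; }
        public void put(T item) { used.put(item); }   // p.put mapped to u.put
        public T get() { return used.get(); }         // p.get mapped to u.get
    }

    // A trivial queue-backed implementation standing in for the used component.
    static class QueueInOut<T> implements InOut<T> {
        private final Queue<T> q = new ArrayDeque<>();
        public void put(T item) { q.add(item); }
        public T get() { return q.remove(); }
    }

    public static void main(String[] args) {
        InOut<String> p = new Through<>(new QueueInOut<>());
        p.put("data");
        System.out.println(p.get());   // "data", served via the used contract
    }
}
```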
The contract is used to describe the combination of an interface and a protocol net. In contrast to the UML interface notion, the contracts are instances and the relations among them are explicitly modeled as presented in figure 8.
There are two distinct kinds of contracts, exclusive and shared ones. This technical distinction for contracts is in conformance with the ISO Open Distributed Processing model [21], where implicitly and explicitly bound objects are distinguished. The exclusive contracts are interpreted as explicitly bound objects, while shared contracts correspond to implicitly bound objects (cf. [25]).
For an exclusive contract, the interface circle symbol is used as a shortcut (see figure 5), and for all usage connections an implicit xor and a client-side cardinality of 1 are assumed and thus omitted, too. For the connection to the providing component, only the number of served instances is of interest. Each contract is served by exactly one component, and thus the component-side cardinality is omitted. For shared contracts, sharing by multiple clients is also allowed, and thus the usual cardinality annotations can be used for connections to the clients. A circle with a double border is used as a shortcut. The annotations for connections to the providing component are the same as in the exclusive case.
Besides these explicit synchronization descriptions, an implicit description using a synchronization dependency relation depend (→) is also supported by the approach. The synchronization is not explicitly described; instead, any arbitrary but valid usage of used depending contracts and no synchronization with used independent (not connected) contracts is assumed. If neither an explicit specification nor such an explicit relation is given, simply the worst case of a full dependency relation is assumed. This way, the traditional abstraction barrier between exported provided contracts and imported used contracts can be used. A behavioral cover is built from all possible implementations for each provided contract that synchronize at most with all used contracts the provided contract depends on (→). Each correct implementation has to respect this behavioral cover. Each line orthogonal to all depend arcs builds a suitable abstraction barrier. But the provided abstraction is not valid in general.
The transitive extension of all local depend annotations has to be acyclic to make the assumed abstraction a correct one.
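Checking this acyclicity condition is a standard graph problem. The following minimal sketch (not part of the approach's tool support; names are illustrative) treats contracts as nodes and local depend annotations as edges and tests the transitive depend relation for cycles with a depth-first search:

```java
import java.util.*;

// Minimal sketch (illustrative only): cycle detection over the depend relation.
public class DependCycleCheck {
    static boolean hasCycle(Map<String, List<String>> depend) {
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String n : depend.keySet())
            if (dfs(n, depend, done, onPath)) return true;
        return false;
    }
    static boolean dfs(String n, Map<String, List<String>> g, Set<String> done, Set<String> onPath) {
        if (onPath.contains(n)) return true;      // back edge: the relation is cyclic
        if (!done.add(n)) return false;           // already fully explored
        onPath.add(n);
        for (String m : g.getOrDefault(n, List.of()))
            if (dfs(m, g, done, onPath)) return true;
        onPath.remove(n);
        return false;
    }
    public static void main(String[] args) {
        // Provide depends on Use; Config depends on nothing (cf. figure 6).
        Map<String, List<String>> depend = Map.of(
            "Provide", List.of("Use"), "Config", List.of(), "Use", List.of());
        System.out.println("acyclic: " + !hasCycle(depend));
    }
}
```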
As demonstrated in figure 6, the depend relation restricts the valid embedding of a component. But this way, an explicit and complete synchronization specification can be avoided. The provided contract \texttt{Config} is used to configure the component and thus should be implemented in a fashion that does not rely on the used \texttt{Use} contract. In contrast, the \texttt{Provide} contract will rely on the correct responses of the \texttt{Use} contract. This dependency is specified by defining a component-specific depend relation (→) using dashed arcs. Thus, the possible component behavior is already restricted concerning the possible synchronization dependencies, but a wide range of possible internal component behaviors are still suitable solutions.
The provided mechanisms for contract specification allow the contract behavior and its synchronization to be specified at several levels of granularity, as presented in figure 7. At the start of the component system design, the relation may be left unspecified, and thus a complete depend relation connecting each provided interface with all \textit{used} ones is assumed. When further knowledge about the separation and the desired parallel availability of interfaces is given, a refined view is possible by specifying an explicit depend relation. If the planned embedding enforces the explicit modeling of synchronization aspects concerning a subset of the component contracts, this can be done using a \texttt{<<synchronization>>} stereotype. Now, \textit{slices} of the component behavior can be specified in an independent fashion. A complete behavior description is also possible using a single synchronization element that covers all provided and used contracts. Thus, a behavior description enclosing the whole behavior, as described in figure 3 (right-hand side), is also possible. The provided spectrum allows the behavior to be specified at an adequate level of granularity during the decomposition of the design. For already fixed components, e.g., off-the-shelf components provided by others, a specification of suitable precision may be chosen and can be used to embed them into a design.
To provide a sound framework for handling component protocols and their synchronization, a correct behavioral preorder describing a valid abstraction or refinement is needed. The given synchronization protocols can be compared as labeled nets by considering the label occurrences. The underlying formalism to determine a valid refinement or abstraction step concerning the component synchronization is then \textit{reduction} [7], which is the coarsest relation w.r.t. preserving deadlock-freeness (see [40]). The symbol $\subseteq$ is used, where \textit{A} $\subseteq$ \textit{B} states that \textit{A} is a valid refinement of \textit{B}. The abstraction from finite internal interaction can be used to replace a synchronization combination by a single more abstract version in which the doubly covered contracts are omitted (see for example figure 10).
### 5. Example
To give an example, the common pipeline processing of a compiler is considered. The structure may consist of a pre-processor phase for macro expansion as well as a compiler with lexical analysis, syntactic analysis, semantic analysis, code generation, and assembly stages. This software architecture style provides a high degree of flexibility, and distinct stages may be exchanged on demand, e.g., the assembly stage to adjust the compiler to a particular hardware platform. By specifying the data format for each stage transition, each stage communicates only with its predecessor and successor, and thus the coupling is minimized. In order to reduce the example complexity, the same general interface is assumed for each stage. Two solutions for a general pipeline structure are presented.
A pipeline can be built using a specific coordinator component, as demonstrated in figure 8 for the trivial case of two stages. Notably, the structure does not reflect the pipeline; instead, the usage relationship from the coordinator component to each stage is made explicit.
This flexibility is a good reason to choose this solution, while efficiency considerations make it sub-optimal. The applied remote-procedure-call interaction and the central coordination using an additional coordination component result in doubled communication and a possible bottleneck for long pipelines. The bottleneck can be avoided by using a tree-like coordinator structure, which, however, further increases the communication overhead. Each node in the tree provides the same synchronization type as a leaf and abstracts from the inner pipeline structure.
A more efficient design can be built by avoiding the overhead of moving the data to the pipeline coordination component and back. Instead, the stage components are directly connected, and each component has to provide an input and an output stream (see figure 10).
Figure 9. Transform component
The general scheme of independent provided contracts and a simple depend relation is not sufficient anymore. Instead, the complex contract notion specifying the combined behavior of sets of provided and used contracts is needed. For the provided contract in and the used contract out, a specific combined behavior is described in figure 9. This complex contract behavior is realized using a \texttt{<<synchronization>>} Stage, which synchronizes the in and out contracts of a single component. An incoming start request for in asynchronously triggers a start request for out, and an internal place is initialized with a token. Afterwards, each put request for out is pending. Closing the contract is delayed until the out contract confirms the close request. Note that the actions are a shorthand notation for two steps, a call and a corresponding return step. For example, the out.close postcondition is a precondition of the in.close return step. The resulting processing specification describes the explicit buffering behavior, and thus even cyclic pipelines like ring structures may be built.
Figure 10. Pipeline of Transform components
For this non-hierarchical structure, it is also possible to abstract from two or more stages by combining their complex contracts and abstracting from their inner communication. See figure 10 for the resulting behavioral cover of two synchronously connected Stage synchronization restrictions. The resulting common behavior of two stages has to describe the internal buffering in a concrete fashion. When an arbitrary but undetermined internal buffering, as described by the second abstraction, is used, the resulting behavioral cover can be combined and used only in a restricted way. Consider a cyclic pipeline and combine \( n \) of these abstract stage components. For safe processing, at most \( n - 1 \) data packages can be inserted. Otherwise the cycle might block, and thus \( n \) or more packages may not work. For such ring-like structures, abstracting from the buffering effect is not always useful. When abstracting from the buffer depth, this information is lost and no longer available. The ring structure will only work if at least one buffer element is still empty. If all buffer capacities are exhausted, each stage will be blocked by the next one and no progress is possible anymore. Thus, abstracting from the buffering depth may not be appropriate.
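The capacity argument can be illustrated with a small, self-contained simulation (a hypothetical sketch, not derived from the paper's nets): each stage owns a one-slot buffer, and an item can only move forward if the next stage's buffer is free. With \( n \) packages in a ring of \( n \) stages no stage can ever forward, while \( n - 1 \) packages still allow progress.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;

// Minimal sketch (illustrative only): why a ring of n one-slot stages can
// safely hold at most n-1 data packages.
public class RingCapacityDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 4;                                    // number of stages in the ring
        List<ArrayBlockingQueue<Integer>> buf = new ArrayList<>();
        for (int i = 0; i < n; i++) buf.add(new ArrayBlockingQueue<>(1)); // one buffer slot per stage

        for (int packages : new int[]{n - 1, n}) {
            for (ArrayBlockingQueue<Integer> b : buf) b.clear();
            for (int p = 0; p < packages; p++) buf.get(p).put(p);  // initial placement

            // One round of forwarding attempts: stage i moves its item to stage i+1 if there is room.
            int moves = 0;
            for (int i = 0; i < n; i++) {
                Integer item = buf.get(i).peek();
                if (item != null && buf.get((i + 1) % n).offer(item)) {
                    buf.get(i).remove();
                    moves++;
                }
            }
            System.out.println(packages + " packages in a ring of " + n
                    + " stages -> " + moves + " forwarding step(s) possible");
        }
    }
}
```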
The two described behaviors, \texttt{CombinedStage2} and the more abstract version \texttt{CombinedStages}, are valid abstractions with respect to the behavior preorder (\( \preceq \)). Their nets describe the explicit buffering, where \( \texttt{x+} \) denotes \( x \) tokens and the resource of type \texttt{int} is initially filled with an arbitrary value.
6. Related Work
The presented approach extends the concept of regular types of Nierstrasz [26]. In contrast to this reduction-based approach, trace-based notions [30, 25] do not consider synchronization effects and only exclude message-not-understood errors. To reduce cycles in the usage graph, the unilateral contract notion is extended to include bilateral interaction. This way, most of the error-prone callback handling can be treated in a more suitable fashion. The presented protocol formalism additionally covers distinct synchronization with the request replies, while Nierstrasz's work is restricted to request acceptance. This way, oracle-like requests and the influence of distinguished replies on the resulting protocol state can be incorporated, too. The concept of explicit contract synchronizations and an implicit contract depend relation further extends the framework towards a flexible specification tool for component synchronization.
The integration into the analysis and design level, instead of the programming language or a formal calculus context, is another distinction. The approach makes it possible to consider synchronization and protocol aspects, which are of great importance for the architecture design, already during analysis and design. It supports the specification of incomplete systems, refinement, and explorative design evaluation by simulation.
Holland et al. [20] suggest a contract notion that abstracts from performance and resource consumption aspects and includes safety and progress conditions, which are needed to predict the component behavior from a client's perspective. So-called type obligations demand abstract attributes and interface aspects for each participant, while causal obligations describe the ordered sequences of actions and their effect on the attributes. The CATALYSIS [14] approach emphasizes a pre- and postcondition concept but also contains a comparable concept as an extension and suggests statecharts or sequence expressions to specify the order of the internally called actions, called raised actions. The general concept of describing object behavior for a group of objects is promising, but the resulting system is more suitable for frameworks. The superposition of such interaction concepts is not always conflict-free, and the pre- and postconditions or invariants make automatic tool support impossible. In the area of object-oriented design for real-time systems, the ROOM [33] method also uses protocols defined for a group of objects and signal-based protocol roles called ports as connectors. In contrast to the presented approach, the protocol is used to describe the bilateral signal exchange, and no notion of behavioral abstraction is considered. The structural description techniques of the UML are used for the structural part of architecture descriptions. For behavior specification, the OCoN approach is used, because it provides a seamless integration of used and provided contracts (see [17]). The behavior description techniques of the UML are not capable of covering these aspects. For a comparison between the OCoN approach and the behavior formalisms of the UML see [18]. Remarks concerning the great variety of other proposed object-oriented Petri net notations can be found in [17].
7. Conclusion
The presented approach provides mechanisms to achieve a higher degree of independence, to exclude implicit implementation dependencies, and to make the requirements and the provided behavior of components more concrete. The OCoN formalism, together with the presented extensions, provides a suitable framework for the described component design techniques. An external behavioral specification technique using synchronization slices and a declarative depend relation for implicit synchronization specification is presented. From the perspective of formal model specification, a seamless transition from totally separated contracts and a dependency relation, over slices of partial behavior specifications, to a complete external behavior specification is supported. The unilateral contracts with possibly shared protocols, refining the connector concept, allow the quality of an architecture to be analyzed concerning decomposition based on well-established object-oriented knowledge. The formal description of interaction properties by object coordination nets allows the analysis of behavioral properties, and possible interaction scenarios can be simulated and visualized.
References
---
Frequent Closed Item Set Mining Based on Zero-suppressed BDDs
Shin-ichi Minato
Graduate School of Information Science and Technology,
Hokkaido University, Sapporo, 060-0814 Japan.
[email protected]
Hiroki Arimura
(affiliation as previous author)
[email protected]
keywords: data mining, item set, BDD, ZBDD, closed pattern
Summary
Frequent item set mining is one of the fundamental techniques for knowledge discovery and data mining. In the last decade, a number of efficient algorithms for frequent item set mining have been presented, but most of them focused on just enumerating the item set patterns which satisfy the given conditions, and it was a different matter how to store and index the resulting patterns for efficient data analysis. Recently, we proposed a fast algorithm for extracting all frequent item set patterns from transaction databases while simultaneously indexing the resulting huge set of patterns using Zero-suppressed BDDs (ZBDDs). That method, ZBDD-growth, does not only enumerate/list the patterns efficiently, but also indexes the output data compactly in memory so that it can be analyzed with various algebraic operations. In this paper, we present a variation of the ZBDD-growth algorithm to generate frequent closed item sets. This is a quite simple modification of ZBDD-growth, and the additional computation cost is relatively small compared with the original algorithm for generating all patterns. Our method can conveniently be utilized in the environment of ZBDD-based pattern indexing.
1. Introduction
Frequent item set mining is one of the fundamental techniques for knowledge discovery and data mining. Since its introduction by Agrawal et al. [Agrawal 93], frequent item set mining and association rule analysis have received much attention from many researchers, and a number of papers have been published about new algorithms or improvements for solving such mining problems [Goethals 03a, Han 04, Zaki 00]. However, most of these item set mining algorithms focused on just enumerating or listing the item set patterns which satisfy the given conditions, and it was a different matter how to store and index the resulting patterns for efficient data analysis.
Recently, we proposed a fast algorithm [Minato 06] for extracting all frequent item set patterns from transaction databases and simultaneously indexing the resulting huge set of patterns in computer memory using Zero-suppressed BDDs. That method, called ZBDD-growth, does not only enumerate/list the patterns efficiently, but also indexes the output data compactly in memory. After mining, the resulting patterns can be analyzed efficiently using algebraic operations.
The key of the method is to use a BDD-based data structure for representing sets of patterns. BDDs [Bryant 86] are a graph-based representation of Boolean functions, now widely used in the VLSI logic design and verification area. For data mining applications, it is important to use Zero-suppressed BDDs (ZBDDs) [Minato 93], a special type of BDDs, which are suitable for handling large-scale sets of combinations. Using ZBDDs, we can implicitly enumerate combinatorial item set data and efficiently compute set operations over the ZBDDs.
In this paper, we present an interesting variation of the ZBDD-growth algorithm to generate frequent closed item sets. Closed item sets are the subset of item set patterns each of which is the unique representative of a group of sub-patterns relevant to the same set of transaction records. Our method is a quite simple modification of ZBDD-growth. We inserted several operations into the recursive procedure of ZBDD-growth to filter the closed patterns from all frequent patterns. The experimental results show that the additional computation cost is relatively small compared with the original algorithm for generating all patterns. Our method can conveniently be utilized in the environment of ZBDD-based data mining and knowledge indexing.
2. ZBDD-based item set representation
As a preliminary, this section describes the methods for efficiently indexing item set data based on Zero-suppressed BDDs.
2.1 Combinatorial item set and ZBDDs
A combinatorial item set consists of elements each of which is a combination of a number of items. There are \(2^n\) combinations chosen from \(n\) items, so we have \(2^{2^n}\) variations of combinatorial item sets. For example, for a domain of five items \(a, b, c, d,\) and \(e,\) examples of combinatorial item sets are: \(\{ab, c\}, \{abc, cde, bd, acde, e\}, \{1, cd\}, 0.\) Here “1” denotes a combination of null items, and “0” means an empty set. Combinatorial item sets are one of the basic data structures for various problems in computer science, including data mining.
A combinatorial item set can be mapped into the Boolean space of \(n\) input variables. For example, Figure 1 shows a truth table of the Boolean function \(F = (a\,b\,\bar{c}) \vee (\bar{b}\,c)\), which also represents the combinatorial item set \(S = \{ab, ac, c\}.\) Using BDDs for the corresponding Boolean functions, we can implicitly represent and manipulate combinatorial item sets. In addition, we can enjoy more efficient manipulation using “Zero-suppressed BDDs” (ZBDDs) [Minato 93], which are a special type of BDDs optimized for handling combinatorial item sets. An example of a ZBDD is shown in Figure 2.
The detailed techniques of ZBDD manipulation are described in [Minato 93]. A typical ZBDD package supports cofactoring operations to traverse the 0-edge or 1-edge, and binary operations between two combinatorial item sets, such as union, intersection, and difference. Our ZBDD package generates new ZBDD nodes in main memory as the results of these algebraic operations. The computation time for each operation is almost linear in the number of ZBDD nodes related to the operation. We can also delete a ZBDD which has become useless, and such garbage ZBDD nodes are efficiently collected to be reused for new ZBDDs.
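The following minimal sketch (explicit Java sets instead of a real ZBDD package) shows the algebra these operations provide. A real ZBDD represents the same sets implicitly as a shared graph, so the operations work on the graph without enumerating the combinations.

```java
import java.util.*;

// Minimal sketch (illustrative only): combinatorial item sets as explicit
// sets of item combinations, with the union/intersection/difference algebra.
public class ItemSetAlgebra {
    static Set<Set<String>> union(Set<Set<String>> p, Set<Set<String>> q) {
        Set<Set<String>> r = new HashSet<>(p); r.addAll(q); return r;
    }
    static Set<Set<String>> intersection(Set<Set<String>> p, Set<Set<String>> q) {
        Set<Set<String>> r = new HashSet<>(p); r.retainAll(q); return r;
    }
    static Set<Set<String>> difference(Set<Set<String>> p, Set<Set<String>> q) {
        Set<Set<String>> r = new HashSet<>(p); r.removeAll(q); return r;
    }
    public static void main(String[] args) {
        Set<Set<String>> s = Set.of(Set.of("a", "b"), Set.of("a", "c"), Set.of("c"));
        Set<Set<String>> t = Set.of(Set.of("c"), Set.of("b", "c"));
        System.out.println("union:        " + union(s, t));
        System.out.println("intersection: " + intersection(s, t));
        System.out.println("difference:   " + difference(s, t));
    }
}
```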
2.2 Tuple-Histograms and ZBDD vectors
A tuple-histogram is a table counting the number of appearances of each tuple in the given database. An example of a tuple-histogram is shown in Figure 3. It is just a compressed table of the database, combining identical tuples appearing more than once into one line together with their frequency.
Our item set mining algorithm manipulates a ZBDD-based tuple-histogram representation as its internal data structure. Here we describe how to represent tuple-histograms using ZBDDs. Since ZBDDs represent sets of combinations, a simple ZBDD only distinguishes the existence of each tuple in the database.
In order to represent the numbers of tuple appearances, we decompose the number into an \(m\)-digit ZBDD vector \( \{F_0, F_1, \ldots, F_{m-1}\} \) to represent integers up to \((2^m - 1)\), as shown in Figure 4. Namely, we encode the appearance numbers into binary digital code: \( F_0 \) represents the set of tuples appearing an odd number of times (LSB = 1), \( F_1 \) represents the set of tuples whose appearance number's second lowest bit is 1, and in a similar way we define the set for each digit up to \( F_{m-1} \).
In the example of Figure 4, the tuple frequencies are decomposed as \( F_0 = \{abc, ab, c\} \), \( F_1 = \{ab, bc\} \), \( F_2 = \{abc\} \), and then each digit can be represented by a simple ZBDD. The three ZBDDs share their sub-graphs with each other.
Now we explain the procedure for constructing a ZBDD-based tuple-histogram from the original database. We read the tuples one by one from the database and accumulate each single tuple into the histogram. More concretely, we generate a ZBDD \( T \) for a single tuple picked up from the database and accumulate it into the ZBDD vector. The ZBDD \( T \) can be obtained by starting from “1” (a null combination) and applying “Change” operations several times to join the items in the tuple. Next, we compare \( T \) and \( F_0 \); if they have no common part, we just add \( T \) to \( F_0 \). If \( F_0 \) already contains \( T \), we eliminate \( T \) from \( F_0 \) and carry \( T \) up to \( F_1 \). This ripple carry procedure continues until \( T \) and \( F_k \) have no common part. After finishing the accumulation of all data records, the tuple-histogram is completed.
Using the notation \( F.add(T) \) for the addition of a tuple \( T \) to the ZBDD vector \( F \), the procedure for generating the tuple-histogram \( H \) for a given database \( D \) is:
\[
\begin{array}{l}
H = 0 \\
\textbf{forall } T \in D \textbf{ do} \\
\quad H = H.add(T) \\
\textbf{return } H
\end{array}
\]
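A minimal executable sketch of this construction (plain Java sets standing in for the ZBDD digits; not the actual ZBDD implementation) makes the ripple carry explicit:

```java
import java.util.*;

// Minimal sketch (illustrative only): a tuple-histogram as a vector of digit
// sets F[0], F[1], ..., where F[k] holds the tuples whose count has bit k set.
public class TupleHistogram {
    private final List<Set<Set<String>>> digits = new ArrayList<>();

    void add(Set<String> tuple) {
        for (int k = 0; ; k++) {
            if (digits.size() == k) digits.add(new HashSet<>());
            Set<Set<String>> f = digits.get(k);
            if (f.add(tuple)) return;   // digit k was 0: set it and stop
            f.remove(tuple);            // digit k was 1: clear it and carry up
        }
    }

    int count(Set<String> tuple) {
        int c = 0;
        for (int k = 0; k < digits.size(); k++)
            if (digits.get(k).contains(tuple)) c |= 1 << k;
        return c;
    }

    public static void main(String[] args) {
        TupleHistogram h = new TupleHistogram();
        List<Set<String>> db = List.of(Set.of("a","b","c"), Set.of("a","b"),
                                       Set.of("a","b"), Set.of("b","c"), Set.of("a","b","c"));
        for (Set<String> t : db) h.add(t);
        System.out.println("count(ab)  = " + h.count(Set.of("a","b")));     // 2
        System.out.println("count(abc) = " + h.count(Set.of("a","b","c"))); // 2
        System.out.println("count(bc)  = " + h.count(Set.of("b","c")));     // 1
    }
}
```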
When we construct a ZBDD vector for a tuple-histogram, the number of ZBDD nodes in each digit is bounded by the total appearance of items in all tuples. If there are many partially similar tuples in the database, the sub-graphs of the ZBDDs are shared very well, and a compact representation is obtained. The bit-width of the ZBDD vector is bounded by \( \log S_{max} \), where \( S_{max} \) is the number of appearances of the most frequent item.
Once we have generated a ZBDD vector for the tuple-histogram, various operations can be executed efficiently. Here are instances of the operations used in our pattern mining algorithm.
- \( H.factor0(v) \): Extracts the sub-histogram of tuples without item \( v \).
- \( H.factor1(v) \): Extracts the sub-histogram of tuples including item \( v \) and then deletes \( v \) from the tuple combinations (also considered as the quotient \( H/v \)).
- \( v \cdot H \): Attaches an item \( v \) to each tuple combination in the histogram \( H \).
- \( H_1 + H_2 \): Generates a new tuple-histogram with the sum of the frequencies of corresponding tuples.
- \( H.tuplecount \): The number of tuples appearing at least once.
These operations can be composed as a sequence of ZBDD operations. The result is also compactly represented by a ZBDD vector. The computation time is roughly linear in the total size of the ZBDDs involved.
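For illustration only, the same operations can be written against a naive histogram encoded as a Java map from tuples to counts (the real algorithm performs them as ZBDD graph operations):

```java
import java.util.*;

// Minimal sketch (illustrative only): histogram operations on Map<Set<String>, Integer>.
public class HistogramOps {
    static Map<Set<String>, Integer> factor0(Map<Set<String>, Integer> h, String v) {
        Map<Set<String>, Integer> r = new HashMap<>();
        h.forEach((t, c) -> { if (!t.contains(v)) r.put(t, c); });   // tuples without v
        return r;
    }
    static Map<Set<String>, Integer> factor1(Map<Set<String>, Integer> h, String v) {
        Map<Set<String>, Integer> r = new HashMap<>();
        h.forEach((t, c) -> {
            if (t.contains(v)) {                                      // tuples with v, v removed
                Set<String> u = new HashSet<>(t); u.remove(v);
                r.put(u, c);
            }
        });
        return r;
    }
    static Map<Set<String>, Integer> attach(Map<Set<String>, Integer> h, String v) {
        Map<Set<String>, Integer> r = new HashMap<>();
        h.forEach((t, c) -> {                                         // v attached to every tuple
            Set<String> u = new HashSet<>(t); u.add(v);
            r.put(u, c);
        });
        return r;
    }
    static Map<Set<String>, Integer> plus(Map<Set<String>, Integer> a, Map<Set<String>, Integer> b) {
        Map<Set<String>, Integer> r = new HashMap<>(a);
        b.forEach((t, c) -> r.merge(t, c, Integer::sum));             // frequencies added
        return r;
    }
    static int tupleCount(Map<Set<String>, Integer> h) { return h.size(); }

    public static void main(String[] args) {
        Map<Set<String>, Integer> h = new HashMap<>();
        h.put(Set.of("a", "b", "c"), 2);
        h.put(Set.of("a", "b"), 3);
        h.put(Set.of("c"), 1);
        // The decomposition H = (v * H.factor1(v)) + H.factor0(v) holds for any item v:
        System.out.println(plus(attach(factor1(h, "a"), "a"), factor0(h, "a")).equals(h)); // true
    }
}
```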
3. ZBDD-growth Algorithm
Recently, we developed a ZBDD-based algorithm [Minato 06], ZBDD-growth, to generate “all” frequent item set patterns. Here we describe this algorithm as the basis of our method for “closed” item set mining.
ZBDD-growth is based on a recursive depth-first search over the ZBDD-based tuple-histogram representation. The basic algorithm is shown in Figure 5, where \( H \) is a ZBDD for the tuple-histogram of the given database, and \( \alpha \) is the given minimum support threshold.
In this algorithm, we choose an item \( v \) used in \( H \), and compute the two sub-histograms \( H_1 \) and \( H_0 \) (namely, \( H = (v \cdot H_1) \cup H_0 \)). Since we always choose \( v \) at the highest position in the ZBDD vector, \( H_1 \) and \( H_0 \) can be obtained just by referring to the 1-edge and 0-edge of the highest ZBDD node, so the computation time for factoring each digit of the ZBDD is constant.
ZBDDgrowth(H, \alpha)
{
  if(H has only one item v)
    if(v appears more than \alpha) return v ;
    else return "0" ;
  F ← Cache(H) ;
  if(F exists) return F ;
  v ← H.top ; /* Top item in H */
  H_1 ← H.factor1(v) ;
  H_0 ← H.factor0(v) ;
  F_1 ← ZBDDgrowth(H_1, \alpha) ;
  F_0 ← ZBDDgrowth(H_0 + H_1, \alpha) ;
  F ← (v · F_1) ∪ F_0 ;
  Cache(H) ← F ;
  return F ;
}
Fig. 5 ZBDD-growth algorithm.
4. Frequent closed item set mining
In frequent item set mining, we are sometimes faced with the problem that a huge number of frequent patterns is extracted and it is hard to find useful information. Closed item set mining is one of the techniques to filter an important subset of the patterns. In this section, we present a variation of the ZBDD-growth algorithm to generate frequent closed item sets.
4.1 Closed item sets
Closed item sets are the subset of item set patterns each of which is the unique representative of a group of sub-patterns relevant to the same set of tuples. For a clearer definition, we first define the common item set Com(S_T) for a given set of tuples S_T, such that Com(S_T) is the set of items commonly included in every tuple T ∈ S_T. Next, we define the occurrence Occ(D, X) for a given database D and item set X, such that Occ(D, X) is the subset of tuples in D each of which includes X. Using these notations, if an item set X satisfies Com(Occ(D, X)) = X, we call X a closed item set in D.
For example, let us consider the database D shown in Figure 3. Here, all item set patterns with threshold \alpha = 1 are: \{abc, ab, ac, a, bc, b, c\}, but the closed item sets are: \{abc, ab, bc, b, c\}. In this example, “ac” is not a closed pattern because \text{Occ}(D, \text{“ac”}) = \text{Occ}(D, \text{“abc”}).
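The definition can be checked directly by brute force. The following sketch (with a small hypothetical database, since Figure 3 is not reproduced here) tests Com(Occ(D, X)) = X for a given item set X:

```java
import java.util.*;

// Minimal sketch (brute force, not the ZBDD algorithm): closedness check.
public class ClosedCheck {
    static List<Set<String>> occ(List<Set<String>> d, Set<String> x) {
        List<Set<String>> r = new ArrayList<>();
        for (Set<String> t : d) if (t.containsAll(x)) r.add(t);   // tuples that include X
        return r;
    }
    static Set<String> com(List<Set<String>> tuples) {
        if (tuples.isEmpty()) return Set.of();
        Set<String> c = new HashSet<>(tuples.get(0));
        for (Set<String> t : tuples) c.retainAll(t);              // items common to all tuples
        return c;
    }
    static boolean isClosed(List<Set<String>> d, Set<String> x) {
        return com(occ(d, x)).equals(x);
    }
    public static void main(String[] args) {
        List<Set<String>> d = List.of(Set.of("a","b","c"), Set.of("a","b"),
                                      Set.of("b","c"), Set.of("c"));
        System.out.println("ac closed? " + isClosed(d, Set.of("a","c"))); // false: Occ(ac) = Occ(abc)
        System.out.println("ab closed? " + isClosed(d, Set.of("a","b"))); // true
    }
}
```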
In recent years, many researchers have discussed efficient algorithms for closed item set mining. One of the remarkable results is the LCM algorithm [Uno 03] presented by Uno et al. LCM is a depth-first search algorithm to extract closed item sets. Its distinguishing feature is that the computation time is bounded linearly in the output data length. Our ZBDD-based algorithm also works in a depth-first search manner, so it has properties similar to LCM. The major difference is in the data structure of the output data. Our method generates ZBDDs for the set of closed patterns, ready for more flexible analysis using ZBDD operations.
5. Experimental Results
Here we show the experimental results to evaluate our new method. We used a Pentium-4 PC, 800MHz, with 1.5GB of main memory, running SuSE Linux 9. We can deal with up to 40,000,000 ZBDD nodes on this machine.
Table 1 shows the time and space for generating ZBDD vectors of tuple-histograms for the FIMI2003 benchmark databases [Goethals 03b]. This table shows the computation time and space for providing the input data for the ZBDD-growth algorithm. In this table, \#T shows the number of tuples, total|T| is the total of the tuple sizes (total appearances of items), and |ZBDD| is the number of ZBDD nodes for the tuple-histograms. We can see that tuple-histograms can be constructed for all instances in a feasible amount of time and space.
\[ P.\text{permit}(Q) \]
\[
\begin{array}{l}
\text{if}(P = \text{"0"} \text{ or } Q = \text{"0"}) \text{ return "0"} ; \\
\text{if}(P = Q) \text{ return } P ; \\
\text{if}(P = \text{"1"}) \text{ return "1"} ; \\
\text{if}(Q = \text{"1"}) \\
\quad \text{if}(P \text{ includes "1"}) \text{ return "1"} ; \\
\quad \text{else return "0"} ; \\
R \leftarrow \text{Cache}(P,Q) ; \\
\text{if}(R \text{ exists}) \text{ return } R ; \\
v \leftarrow \text{TopItem}(P,Q) ; \quad /* \text{top item in } P, Q \ */ \\
(P_0, P_1) \leftarrow \text{factors of } P \text{ by } v ; \\
(Q_0, Q_1) \leftarrow \text{factors of } Q \text{ by } v ; \\
R \leftarrow (v \cdot P_1.\text{permit}(Q_1)) \\
\quad \cup \ (P_0.\text{permit}(Q_0 \cup Q_1)) ; \\
\text{Cache}(P,Q) \leftarrow R ; \\
\text{return } R ; \\
\end{array}
\]
\[ \text{ZBDDgrowthC}(H, \alpha) \]
\[
\begin{array}{l}
\text{if}(H \text{ has only one item } v) \\
\quad \text{if}(v \text{ appears more than } \alpha) \text{ return } v ; \\
\quad \text{else return "0"} ; \\
F \leftarrow \text{Cache}(H) ; \\
\text{if}(F \text{ exists}) \text{ return } F ; \\
v \leftarrow H.\text{top} ; \quad /* \text{top item in } H \ */ \\
H_1 \leftarrow H.\text{factor1}(v) ; \\
H_0 \leftarrow H.\text{factor0}(v) ; \\
F_1 \leftarrow \text{ZBDDgrowthC}(H_1, \alpha) ; \\
F_0 \leftarrow \text{ZBDDgrowthC}(H_0 + H_1, \alpha) ; \\
F \leftarrow (v \cdot F_1) \cup \\
\quad (F_0 - (F_1 - F_1.\text{permit}(H_0))) ; \\
\text{Cache}(H) \leftarrow F ; \\
\text{return } F ; \\
\end{array}
\]
The operation \( P.\text{permit}(Q) \) returns the set of combinations in \( P \) each of which is a subset of some combination in \( Q \). For example, when \( P = \{ab, abc, bd\} \) and \( Q = \{abc, bc\} \), then \( P.\text{permit}(Q) \) returns \{ab, abc\}. The permit operation is efficiently implemented as a recursive procedure of ZBDD manipulation, as shown in Figure 6. The computation time of the permit operation is almost linear in the ZBDD size.
Finally, we describe the ZBDD-growthC algorithm using the permit operation, as shown in Figure 7. The difference from the original algorithm is only one line: the computation of \( F \) from \( F_1 \) and \( F_0 \).
4.2 Eliminating non-closed patterns
Our method is a quite simple modification of the ZBDD-growth algorithm shown in Figure 5. We inserted several operations into the recursive procedure of ZBDD-growth to filter the closed patterns from all frequent patterns. The ZBDD-growth algorithm starts from the given tuple-histogram \( H \) and computes the two sub-histograms \( H_1 \) and \( H_0 \), such that \( H = (v \cdot H_1) \cup H_0 \). Then ZBDD-growth\((H_1)\) and ZBDD-growth\((H_1 + H_0)\) are executed recursively.
Here, we consider how to eliminate non-closed patterns in this algorithm. We call the new algorithm ZBDD-growthC\((H)\). It is obvious that \((v \cdot \text{ZBDD-growthC}(H_1))\) generates (a part of) the closed patterns for \( H \) each of which contains \( v \), because the occurrence of any closed pattern with \( v \) is limited to \((v \cdot H_1)\); thus we may search only \( H_1 \). Next, we consider the second recursive call ZBDD-growthC\((H_1 + H_0)\) to generate the closed patterns without \( v \). The important point is that some of the patterns generated by ZBDD-growth\((H_1 + H_0)\) may have the same occurrence as one of the patterns with \( v \) already found in \( H_1 \). The condition for such a duplicate pattern is that it appears only in \( H_1 \) and is irrelevant to \( H_0 \). In other words, we eliminate from ZBDD-growth\((H_1 + H_0)\) those patterns that are already found in ZBDD-growth\((H_1)\) but are not included in any tuple of \( H_0 \).
For checking the condition for closed patterns, we can use a ZBDD-based operation, called the permit operation, by Okuno et al. [Okuno 98]. The permit operation is basically the same as the SubSet operation by Coudert et al. [Coudert 93], defined for ordinary BDDs.
The ZBDD sizes are almost the same as or less than total|T|.
After generating the ZBDD vectors for the tuple-histograms, we applied the ZBDD-growth algorithm to generate frequent patterns. Table 2 shows the results of the original ZBDD-growth algorithm [Minato 06] for the selected benchmark examples, “mushroom,” “T10I4D100K,” and “BMS-WebView-1.” The execution time does not include the time for generating the initial ZBDD vectors for the tuple-histograms.
The results show that the ZBDD size is exponentially smaller than the number of patterns for “mushroom.” This is a significant effect of using the ZBDD data structure. On the other hand, no remarkable reduction is seen for “T10I4D100K.” “T10I4D100K” is known to be an artificial database consisting of randomly generated combinations, so there is almost no relationship between the tuples. In such cases, ZBDD nodes cannot be shared well, and only the overhead factor is revealed. For the third example, “BMS-WebView-1,” the ZBDD size is almost linear in the number of patterns when the output size is small; however, an exponential factor of reduction is observed in the cases generating huge numbers of patterns.
Next, we show the experimental results of frequent closed pattern mining using the ZBDD-growthC algorithm. In Table 3, we show the results for the same examples as used in the experiment with the original ZBDD-growth. The last column $Time_{closed}/Time_{all}$ shows the ratio of computation time between ZBDD-growthC and the original ZBDD-growth algorithm. We can observe that the computation time is of almost the same order as for the original algorithm for “mushroom” and “BMS-WebView-1,” but some additional factor is observed for “T10I4D100K.” In any case, filtering closed item sets has been regarded as not an easy task. We can say that the ZBDD-growthC algorithm can generate closed item sets with a relatively small additional cost over the original ZBDD-growth.
Finally, we compared our results with a state-of-the-art implementation of the LCM algorithm [Uno 03] on the same PC. The results are shown in the rightmost column of Table 3. We can observe that the LCM-based program is more than a hundred times faster than our ZBDD-based program. The LCM algorithm features a computation time that is linear in the length of the output data.
Table 3: Results of ZBDD-based closed pattern mining.
<table>
<thead>
<tr>
<th>Data name:</th>
<th>Min. freq.</th>
<th>#Freq.</th>
<th>ZBDD-growthC</th>
<th>LCM[Uno 03]</th>
</tr>
</thead>
<tbody>
<tr>
<td>mushroom: 5,000</td>
<td>16</td>
<td>16</td>
<td>0.06</td>
<td>1.00</td>
</tr>
<tr>
<td>1,000</td>
<td>3,427</td>
<td>1,660</td>
<td>2.75</td>
<td>1.06</td>
</tr>
<tr>
<td>200</td>
<td>26,968</td>
<td>9,826</td>
<td>8.84</td>
<td>1.03</td>
</tr>
<tr>
<td>50</td>
<td>68,168</td>
<td>19,054</td>
<td>11.90</td>
<td>1.31</td>
</tr>
<tr>
<td>16</td>
<td>124,411</td>
<td>24,841</td>
<td>12.24</td>
<td>1.84</td>
</tr>
<tr>
<td>4</td>
<td>203,882</td>
<td>26,325</td>
<td>12.07</td>
<td>3.80</td>
</tr>
<tr>
<td>1</td>
<td>238,709</td>
<td>20,392</td>
<td>11.85</td>
<td>17.43</td>
</tr>
<tr>
<td>T10I4D100K:</td>
<td>5,000</td>
<td>10</td>
<td>43.94</td>
<td>2.14</td>
</tr>
<tr>
<td>1,000</td>
<td>385</td>
<td>382</td>
<td>145.03</td>
<td>1.91</td>
</tr>
<tr>
<td>200</td>
<td>13,108</td>
<td>4,312</td>
<td>2,657.490</td>
<td>11.88</td>
</tr>
<tr>
<td>50</td>
<td>46,993</td>
<td>20,581</td>
<td>4,556.980</td>
<td>12.47</td>
</tr>
<tr>
<td>16</td>
<td>142,520</td>
<td>89,185</td>
<td>5,755.32</td>
<td>11.51</td>
</tr>
<tr>
<td>4</td>
<td>1,023,614</td>
<td>691,154</td>
<td>18,529.82</td>
<td>3.97</td>
</tr>
<tr>
<td>BMS-WebView-1:</td>
<td>1,000</td>
<td>31</td>
<td>3.95</td>
<td>1.62</td>
</tr>
<tr>
<td>200</td>
<td>372</td>
<td>309</td>
<td>15.22</td>
<td>2.82</td>
</tr>
<tr>
<td>50</td>
<td>7,811</td>
<td>3,796</td>
<td>46.02</td>
<td>1.61</td>
</tr>
<tr>
<td>40</td>
<td>29,489</td>
<td>11,748</td>
<td>87.14</td>
<td>1.64</td>
</tr>
<tr>
<td>36</td>
<td>64,762</td>
<td>25,117</td>
<td>135.39</td>
<td>1.61</td>
</tr>
<tr>
<td>35</td>
<td>76,260</td>
<td>30,011</td>
<td>150.87</td>
<td>1.62</td>
</tr>
<tr>
<td>34</td>
<td>87,982</td>
<td>35,392</td>
<td>168.18</td>
<td>1.64</td>
</tr>
<tr>
<td>33</td>
<td>99,696</td>
<td>40,915</td>
<td>189.42</td>
<td>1.70</td>
</tr>
<tr>
<td>32</td>
<td>110,800</td>
<td>46,424</td>
<td>203.40</td>
<td>1.76</td>
</tr>
<tr>
<td>31</td>
<td>124,190</td>
<td>51,369</td>
<td>229.40</td>
<td>1.91</td>
</tr>
<tr>
<td>30</td>
<td>127,131</td>
<td>55,407</td>
<td>253.15</td>
<td>2.02</td>
</tr>
</tbody>
</table>
In addition, the LCM implementation is highly optimized for enumerating the number of closed item sets without printing them out. On the other hand, the ZBDD-based method is especially effective when a huge number of item sets is produced and they are compactly represented by ZBDDs of small size. In general, closed item sets are already a reduced representation of all item sets, and in such cases the ZBDD-based compression is not very effective.
If our final goal is only to enumerate closed item sets, the LCM algorithm would be much better. However, our ZBDD-based method does not only enumerate the item sets but also constructs an efficient data structure indexing them, which can be used for various data analyses with ZBDD-based algebraic set operations. Here we show several examples of useful post-processing.
(Extracting all non-closed patterns): After executing ZBDD-growth and -growthC, we can easily obtain a set of non-closed patterns by applying a difference operation between the two ZBDDs of all item sets and closed item sets.
(Filtering closed patterns with sub-patterns): By using the “factor1(v)” operation, we can filter the subset of closed patterns including an item v. Repeating this operation, “keyword filtering” of closed patterns can be performed.
(Filtering closed patterns by a permissible set): When a set of permissible patterns are given as a ZBDD, we can filter the closed patterns each of which satisfies the constraint given by the ZBDD. For example, we first generate a ZBDD representing a set of patterns each of which contains exactly three items. Next we generate all closed patterns by ZBDD-growthC. Then, an intersection operation between the two ZBDDs extracts all closed patterns each of which consists of three items.
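These post-processing steps are plain set algebra once the mining results are available. A minimal sketch with explicit Java sets standing in for the ZBDD-indexed results (using the small example patterns from Section 4.1):

```java
import java.util.*;

// Minimal sketch (illustrative only): post-processing of mining results as set algebra.
public class PostProcessing {
    public static void main(String[] args) {
        Set<Set<String>> all    = new HashSet<>(Set.of(Set.of("a","b","c"), Set.of("a","b"),
                Set.of("a","c"), Set.of("a"), Set.of("b","c"), Set.of("b"), Set.of("c")));
        Set<Set<String>> closed = Set.of(Set.of("a","b","c"), Set.of("a","b"),
                Set.of("b","c"), Set.of("b"), Set.of("c"));

        // 1) All non-closed patterns: difference of the two result sets.
        Set<Set<String>> nonClosed = new HashSet<>(all);
        nonClosed.removeAll(closed);

        // 2) Keyword filtering: closed patterns containing item "b".
        Set<Set<String>> withB = new HashSet<>();
        for (Set<String> p : closed) if (p.contains("b")) withB.add(p);

        // 3) Constraint filtering: closed patterns with exactly two items.
        Set<Set<String>> twoItems = new HashSet<>();
        for (Set<String> p : closed) if (p.size() == 2) twoItems.add(p);

        System.out.println("non-closed:   " + nonClosed);
        System.out.println("containing b: " + withB);
        System.out.println("two items:    " + twoItems);
    }
}
```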
6. Conclusion
In this paper, we presented a variation of the ZBDD-growth algorithm to generate frequent closed item sets. Our method is a quite simple modification of ZBDD-growth: we inserted several operations into the recursive procedure of ZBDD-growth to filter the closed patterns from all frequent patterns. The experimental results show that the additional computation cost is relatively small compared with the original algorithm for generating all patterns.
If the goal is just to enumerate the closed patterns, our method is not faster than the LCM algorithm, which is an existing state-of-the-art method. The main reason is that the closed patterns are already a kind of reduced information about the frequent patterns. In such cases, the ZBDD-based data compression power is not very effective, and only the overhead factor is revealed.
However, our method is still useful because it does not only enumerate the closed patterns but also constructs an efficient indexing data structure for the set of all closed patterns, and we can apply various data analysis tasks using ZBDD-based algebraic operations. In addition, it will be interesting future work to combine the techniques of ZBDD manipulation and the LCM algorithm more deeply.
Acknowledgments
This research was partially supported by Grant-in-Aid for Specially Promoted Research on “Semi-Structured Data Mining,” 17002008, Ministry of Education, Culture, Sports, Science and Technology of Japan.
References
Received August 15, 2006.
---
This is the accepted version of a paper presented at *IEEE Int. Workshop on Tools in Process (TiP 2013)*.
**Citation for the original published paper:**
Ma, L., Artho, C., Sato, H. (2013)
Analyzing Distributed Java Applications by Automatic Centralization.
In: *2nd IEEE Int. Workshop on Tools in Process (TiP 2013)* (pp. 691-696).
N.B. When citing this work, cite the original published paper.
**Permanent link to this version:**
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199125
Analyzing Distributed Java Applications by Automatic Centralization
Lei Ma
Graduate School of Engineering
The University of Tokyo
[email protected]
Cyrille Artho
Research Institute for Secure Systems
AIST
[email protected]
Hiroyuki Sato
Information Technology Center
The University of Tokyo
[email protected]
Abstract—The verification and analysis of distributed applications are difficult. They involve large combinational states, interactive network communication between peers, and concurrency. Some dynamic analysis tools can analyze the runtime behavior of a single-process application. However, they do not support the analysis of a whole distributed application, where multiple processes run simultaneously.
Centralization is a general solution that transforms a multi-process application into a single-process one, which can then be directly analyzed by such existing tools. In this paper, we adopt centralization as a general framework for analyzing distributed applications. We propose and solve the essential issue of class version conflicts during centralization. We also propose a clean solution for the shutdown semantics. We implement and apply our centralization tool to some network benchmarks. Experiments, where existing tools are used on the centralized application, support the usefulness of our automatic centralization tool. Centralization enables existing single-process tools to analyze distributed applications.
Keywords—Distributed application; dynamic analysis; software model checking
I. INTRODUCTION
Analyzing distributed applications is challenging. Multiple processes run concurrently and use asynchronous communication over a network. Activities of processes can be arbitrarily interleaved and no two executions of the same application need be the same. Such nondeterminism causes the run-time behavior of distributed applications to be difficult to predict, debug, and verify. This problem gets exacerbated if multiple threads inside a process are involved, creating concurrency inside a process as well as between processes. Most non-trivial applications nowadays are implemented as distributed, networked applications where multiple processes are combined into a complex system. Analysis and verification of such distributed applications are therefore very important. Some existing tools like Java PathFinder (JPF) [12], Java Runtime Analysis Toolkit (JRAT) [7] work on single-process applications, but they do not support multi-process applications. If powerful analysis tools that support a single process were available for multiple processes, development and analysis of distributed systems would become easier.
Process centralization [11], [1] is a solution to enable existing tools to analyze multi-process applications. It transforms a multi-process application into a single one with equivalent runtime behavior. Fig. 1 shows the centralization of a distributed application containing three components: one server and two clients. Before centralization, each component runs as a process. Inside the server process, three threads run concurrently: thread main creates two Worker threads, one to serve each connected client. After centralization, all processes are wrapped as threads and run as one process.
Centralization was initially proposed to verify distributed applications exhaustively. However, the large combined state space limits such an analysis to small applications. We propose to use centralization for general (not necessarily exhaustive) analysis of distributed applications. Centralization enables many existing tools to perform integrated analysis and reduces the difficulty of analyzing distributed applications. For example, a single-process debugger cannot pause an entire distributed system simultaneously; once the system is centralized, this becomes possible. Some dynamic verification tools such as JCarder [8] detect deadlock bugs for single-process applications, but they do not support multi-process applications. Meanwhile, profiling tools [7] are useful for analyzing the runtime performance of distributed applications. As a centralized application runs on a single VM, centralization enables such tools to perform integrated profiling of the whole distributed application.
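The core idea can be sketched in a few lines of Java (hypothetical component classes; a real centralizer must additionally transform the classes, e.g. to separate static state, as discussed in the following sections): each component's entry point is started as a thread inside one JVM instead of as a separate operating-system process.

```java
// Minimal sketch (illustrative only): running the components of Fig. 1 as
// threads of a single JVM rather than as separate processes.
public class CentralizedLauncher {
    static class Server { static void main(String[] args) { System.out.println("server up on " + args[0]); } }
    static class Client { static void main(String[] args) { System.out.println("client connects to " + args[0]); } }

    public static void main(String[] args) throws InterruptedException {
        Thread server  = new Thread(() -> Server.main(new String[]{"9999"}), "server");
        Thread client1 = new Thread(() -> Client.main(new String[]{"9999"}), "client-1");
        Thread client2 = new Thread(() -> Client.main(new String[]{"9999"}), "client-2");
        server.start(); client1.start(); client2.start();
        client1.join(); client2.join();
    }
}
```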
There are currently no automatic centralization tools available. A previous centralization tool [1] is outdated and unable to work on current Java applications. Previous work mainly used centralization to verify multi-process applications with JPF [12]. Certain aspects of the implementation such as system startup and shutdown are targeted to JPF and not applicable to other analysis tools. When moving beyond JPF, larger systems can be supported, making tool automation all the more important.
Furthermore, one essential centralization issue, classes with multiple versions in different components, is not covered by previous tools. This is a common occurrence in component-based systems, where different parts are developed independently and thus may use different versions of library classes.
In this paper, we improve centralization to support the general analysis of distributed applications. We propose a general solution to handle classes with multiple versions, and also a solution for the shutdown semantics.
Several existing tools can benefit from our tool for analyzing distributed applications, as demonstrated by experiments. We discuss and verify some network fault models with JPF, allowing some defects to be found that are missed with a single-process analysis. We also discuss how existing profiling tools can be used for analyzing multi-process applications by centralization.
The rest of this paper is organized as follows. Section II summarizes the centralization issues. Section III formalizes the multiple class version issue and explains our solution. Section IV proposes our solution to the shutdown semantics. Section V describes the implementation and experiments using our tool. Section VI presents related work. Section VII concludes and discusses future work.
II. CENTRALIZATION ISSUES
This section summarizes the problems that have to be solved to implement centralization of distributed applications correctly.
Definition 1: The term distributed application contains three aspects [3]: Firstly, it means an application whose functionality is split into a set of cooperating, interacting components; each component has an internal state and operations on its state. Secondly, these components can be assigned to different machines. Finally, the functional components exchange information through the network.
On modern operating systems, distributed applications are implemented as a system using multiple processes, which usually run on different hosts and communicate over a network. Centralization transforms such a multi-process system into a single-process one, while preserving the semantics of the combined system. The transformed system runs on a single host, and all communication between the transformed processes is internalized.
This paper is concerned with the centralization of programs written in Java [5], a very popular programming language that is designed to facilitate the creation of networked applications. The concepts presented in this paper generalize to other platforms using threads, shared memory, and inter-process communication, although their implementations may differ. A centralized program is the program after centralization. Centralization must preserve the semantics of the original program: for each execution of the original program, there exists an execution trace in the centralized program with the same behavior, and vice versa. To satisfy this requirement, the following issues must be addressed:
1) Version separation: Multiple versions of a class may occur in different components of a distributed application. Before centralization, each component runs as a process on its own Virtual Machine (VM) and holds its own version of each class locally. Because a centralized program runs on a single VM, each class is loaded and defined once. Direct centralization is incorrect if multiple versions of a class with the same name exist. We propose our solution in Section III.
2) Memory space separation: In a multi-process system, the operating system separates the memory spaces of all processes. This separation is absent in the centralized program but can be emulated by program transformation. In Java-like systems, memory space separation is only necessary for static data, which exists once per VM: static fields and class descriptors are shared, as a single instance exists per class. If different centralized processes access this data without proper separation, data races occur. Therefore, centralization should keep the memory space of each process separate [11], [1]; a sketch of one possible transformation follows this list.
3) Runtime behavior: Startup and shutdown. Centralization wraps each process of the original program as a group of threads and starts them as such. We denote each group of such threads as a centralized process. For the analysis of network applications, ensuring that the server is initialized before clients try to connect is important; otherwise, a client exits prematurely after failing to connect to the server. Regarding the shutdown semantics of the original program, if a process exits it terminates all its threads and releases all resources such as socket ports. Its VM is shut down while other processes may continue running. Centralization should preserve the startup order of each centralized process as well as the shutdown semantics. The solution given in previous work [1] is specific to JPF. We discuss our general solutions to the shutdown and startup semantics in Sections IV and V, respectively.
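To illustrate issue 2, the following sketch shows one way a static field can be split into one copy per centralized process. This is our illustration, not the actual output of any centralization tool; the helper classes `Counter_StaticFields` and `ProcessId` are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical transformation of a class that originally declared "static int counter".
// Each centralized process gets its own copy of the former static field.
final class Counter_StaticFields {
    private static final ConcurrentHashMap<Integer, Integer> COUNTER = new ConcurrentHashMap<>();

    static int getCounter() {
        return COUNTER.getOrDefault(ProcessId.current(), 0);
    }

    static void setCounter(int value) {
        COUNTER.put(ProcessId.current(), value);
    }
}

// Maps each thread to the centralized process (thread group) it belongs to.
final class ProcessId {
    private static final InheritableThreadLocal<Integer> ID =
        new InheritableThreadLocal<Integer>() {
            @Override protected Integer initialValue() { return 0; }
        };

    static int current()    { return ID.get(); }
    static void set(int id) { ID.set(id); }
}
```

Reads and writes of the original static field would then be rewritten into calls to getCounter and setCounter, so that each centralized process sees its own value.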
III. VERSION SEPARATION
The usage of slightly different versions of library classes is common in component-based systems, where each component is developed and managed independently. Centralization is incorrect without properly separating the class namespace for each component. In this section, we formalize and propose our solution to this issue.
A. Class Abstraction and Classification
A Java class can be uniquely identified by its name (including package name) and implementation. For a class cl, we use cl.name and cl.code to denote the class name and its implementation, respectively. Given two classes cl1 and cl2, cl1 is equivalent to cl2, denoted by cl1 = cl2, iff both of their names and codes are identical.
Definition 2: A project is a set of classes. We denote a class cl in a project p by p.cl.
**Definition 3:** Given a project \( p \), we define \( \text{name}(p) = \{cl.\text{name} \mid cl \in p\} \) as the set of all class names in \( p \).
A project abstracts the implementation of a component in distributed applications. Each process may use code from a different project but code from a given project may also be shared among processes.
**Definition 4:** Process centralization is the transformation of multiple processes into a single one with equivalent runtime behavior.
Previous work [1] assumes all the processes run under the same project, where each class has only one version. To centralize processes containing classes with multiple versions, we propose to perform **project centralization**. Before defining project centralization, we first define project renaming substitution and project equivalence.
**Definition 5:** Given a project \( p \) and class names \( n_1, n_2 \), the project renaming substitution \( p[n_1/n_2] \) is defined as the project in which every occurrence of \( n_1 \) in \( p \) is substituted by \( n_2 \). A renaming substitution \( p[n_1/n_2] \) is normal iff \( n_1 \in \text{name}(p) \) and \( n_2 \notin \text{name}(p) \).
**Definition 6:** Given two projects \( p_1 \) and \( p_2 \), \( p_1 \) is equivalent to \( p_2 \), denoted by \( p_1 = p_2 \), iff they can be renamed to identical projects by normal renaming substitutions.
**Definition 7:** Project centralization transforms a project set \( P \) into one single project \( p_{\text{centr}} \) such that \( \forall p \in P.\ \exists p' \subseteq p_{\text{centr}}.\ p = p' \).
Project centralization requires preservation of the class namespace and implementation for each project. Each process that runs on one of the original projects can also run on the centralized project with the same runtime behavior.
**Definition 8:** Given two classes \( cl_1 \) and \( cl_2 \) in a project \( p \), \( cl_1 \) depends on \( cl_2 \), denoted by \( cl_1 \rightarrow cl_2 \), iff \( cl_1 \) references \( cl_2 \). For a class \( cl \in p \), we define \( \text{DEPENDS}(cl, p) = \{cl' \in p \mid cl' \rightarrow cl\} \).
The class dependency represents the class relationship in a project. \( \text{DEPENDS}(cl, p) \) is the set of classes in \( p \) that reference \( cl \).
Let \( P \) be a set of projects. To separate the class namespace of each project \( p \in P \), we classify the classes of \( p \) into the following categories:
1. **Unique Class.** \( \text{UNIQUE}(p, P) = \{cl \in p \mid \forall q \in P.\ (q \neq p \Rightarrow cl.\text{name} \notin \text{name}(q))\} \). A unique class of \( p \) is a class whose name does not occur in any other project.
2. **Conflict Class.** \( \text{CONFLICT}(p, P) = \{cl \in p \mid \exists q \in P.\ (q \neq p \land cl.\text{name} \in \text{name}(q) \land p.cl \neq q.cl)\} \). The name of a conflict class of \( p \) appears in multiple projects, but with a different implementation.
3. **Shared Class.** \( \text{SHARED}(p, P) = \{cl \in p \mid \exists q \in P.\ (q \neq p \land cl.\text{name} \in \text{name}(q) \land p.cl = q.cl)\} \). A shared class of \( p \) shares both its name and its implementation with other projects.
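The three categories can be computed directly from the projects' class names and implementation fingerprints. The sketch below is our illustration, not the tool's code; it models a project as a map from class name to a code fingerprint (for example, a bytecode hash).

```java
import java.util.Collection;
import java.util.Map;
import java.util.Set;

// Classify the classes of project p against all other projects.
// A class can be both a conflict class and a shared class at the same time.
final class ClassCategories {
    static void classify(Map<String, String> p, Collection<Map<String, String>> others,
                         Set<String> unique, Set<String> conflict, Set<String> shared) {
        for (Map.Entry<String, String> cl : p.entrySet()) {
            boolean nameElsewhere = false, sameCode = false, diffCode = false;
            for (Map<String, String> q : others) {
                String code = q.get(cl.getKey());
                if (code == null) continue;                 // name not present in q
                nameElsewhere = true;
                if (code.equals(cl.getValue())) sameCode = true;
                else                            diffCode = true;
            }
            if (!nameElsewhere) unique.add(cl.getKey());    // UNIQUE
            if (diffCode)       conflict.add(cl.getKey());  // CONFLICT
            if (sameCode)       shared.add(cl.getKey());    // SHARED
        }
    }
}
```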
**B. Example**
Fig. 2(a) shows an example of centralizing three projects, where edges represent the class dependencies. Project 1 and project 2 share most of their classes but hold different versions of class \( C \). Compared with projects 1 and 2, project 3 holds a different version of class \( Main \) and a new class \( Unique \). In this example, classes \( A \) and \( B \) are shared between all projects. Class \( C \) is a conflict class in project 1, and it is both a conflict and a shared class in project 2 and project 3. Similarly, class \( Main \) is a conflict class in project 3, and it is both shared and conflict in project 1 and project 2.
**C. Project centralization and class renaming**
Consider centralizing processes from the project set \( P = \{p_1, p_2, \ldots, p_n\} \), where one or more processes are started from within each project. Direct centralization of these processes is incorrect if \( \text{CONFLICT}(p_i, P) \neq \emptyset \) for some \( p_i \in P \). Project centralization resolves such class version conflicts while preserving the semantics of each project. After project centralization, process centralization is simplified because no conflict classes remain. We adopt a class renaming approach for project centralization.
A trivial solution would rename all classes, duplicating all code for each project. However, excessive code duplication consumes much runtime memory and storage. For example, when analyzing a distributed system containing 20 peers, duplicating all projects from these peers is not necessary, as they can reuse some shared classes with proper transformation. Therefore, it is beneficial to share common class code. Our goal is to resolve class conflicts where necessary while sharing equivalent classes between projects. Fig. 2(b) shows a centralization result that does not duplicate the code that can be shared; each renamed project remains equivalent to the project before renaming.
D. Class renaming algorithm
Fig. 3 shows our renaming algorithm. The input of this algorithm is a set of projects to be centralized. The output is the set of renamed projects, which contains no conflict classes and does not duplicate the code that can be shared. After reading in all the classes, the algorithm iterates over and renames each of the first \( n-1 \) projects. The worklist \( w \) is used for traversing the class dependency relation, and the queue \( q \) stores the classes to be renamed.
All the conflict classes of each project are put into \( q \) for renaming. Their renaming effect then propagates to all the shared classes: if some class referenced by a shared class is renamed, the code of the shared class changes, and it can no longer be shared. The renaming effect propagates until the worklist \( w \) becomes empty. After all the classes that need renaming have been found, renameProject(\( q \), \( p_i \)) in Fig. 3 performs a normal renaming substitution on project \( p_i \) according to the renaming queue.
The class renaming algorithm is guaranteed to terminate: each class of a project \( p_i \) is added to the worklist at most once. The output condition is also guaranteed to hold. There is no class conflict because all conflict classes and the effects of their propagation are resolved. In addition, each project before and after renaming is equivalent under normal substitution. Regarding complexity, consider an analysis of \( n \) projects that together contain \( m \) class names. In the worst case, each class name exists in all \( n \) projects. If the calculation of conflict and shared classes uses pairwise comparison, the algorithm costs \( O(m \times n^2) \). After class renaming, no project holds a conflict class, and all projects can be centralized by taking the union of all their classes.
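The propagation step can be sketched as a standard worklist computation. The code below reflects our reading of the algorithm described above rather than Fig. 3 itself; the map depends plays the role of \( \text{DEPENDS}(cl, p) \).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Compute the set of classes of one project that must be renamed:
// all conflict classes, plus every shared class whose code would change
// because a class it references gets renamed.
final class RenamePropagation {
    static Set<String> classesToRename(Set<String> conflict, Set<String> shared,
                                       Map<String, Set<String>> depends) {
        Set<String> rename = new HashSet<>(conflict);     // renaming queue q
        Deque<String> work = new ArrayDeque<>(conflict);  // worklist w
        while (!work.isEmpty()) {
            String cl = work.pop();
            // DEPENDS(cl, p): classes referencing cl, whose code changes when cl is renamed
            for (String user : depends.getOrDefault(cl, Set.of())) {
                if (shared.contains(user) && rename.add(user)) {
                    work.push(user);                       // propagate further
                }
            }
        }
        return rename;                                     // input to renameProject(q, p_i)
    }
}
```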
IV. SHUTDOWN SEMANTICS
Shutdown semantics [1] concern the termination of the centralized application. Invoking Java standard library methods like Runtime.exit and Runtime.halt [6] terminates the entire VM. In the original program, each process runs on a different VM, so while one process invokes these methods to terminate, other processes may continue running. After centralization, all processes are wrapped as threads and run on one single VM; without proper transformation, a centralized process that invokes such a shutdown method terminates all the other processes and the entire VM. Preserving the shutdown behavior in the centralized program involves two issues: (1) if a process exits, it only terminates its own threads; (2) all resources held by the process are released. The second issue is addressed in [1]. For the first issue, a simple way for a thread to terminate itself is to throw an exception of type ThreadDeath. However, killing other threads in Java is difficult [6].
We adopt the interruption mechanism to kill a thread. This requires collaboration between the threads that send and receive the signal. Given two threads \( A \) and \( B \), to kill \( B \) from \( A \), \( A \) first calls B.interrupt() to send an interruption signal to \( B \). If \( B \) is enabled to run, it can receive the signal from \( A \) and exit according to its status: (1) When blocked on an interruptible invocation like wait() or sleep(), the JVM raises an interruption exception; by catching this exception, \( B \) can exit safely. (2) When blocked on an uninterruptible action such as a blocking I/O read, \( B \) has to be unblocked by closing the resource it is blocked on, so that it can check its interruption flag and exit. (3) When not in a blocking state, \( B \) can check its interruption flag and exit. If \( B \) is not enabled to run, it does not receive the interruption signal from \( A \). Interruptions caused by the centralizer must also be distinguished from those caused by user code. These issues can be solved by adding additional flags to \( B \).
To correctly kill a thread covering all these cases, we need to instrument the code of each thread (not necessarily between each statement) to check its interruption flag to exit. Note that each wrapped process either performs an internal operation in its local space that is unobservable by other processes, or it communicates with other processes. The internal operations cannot change the state of another process. Therefore, code instrumentation to check the interruption status is only needed before and after some key communication statements like ServerSocket.accept. If a thread calls shutdown methods to terminate, it sends the interruption signal to all other threads of the same centralized process. When the other threads are scheduled to run, they can check their interruption status to exit safely.
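A minimal sketch of this cooperative shutdown pattern is shown below. It is our illustration of the receiving side, not the instrumentation the tool emits; the Worker class and its methods are hypothetical. The terminating thread would call interrupt() on the worker and close the server socket to unblock it.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Receiving side of the interruption-based shutdown: the interruption flag is
// checked before and after the blocking communication call, as described above.
final class Worker implements Runnable {
    private final ServerSocket server;

    Worker(ServerSocket server) { this.server = server; }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {   // check before blocking
                Socket client;
                try {
                    client = server.accept();                   // uninterruptible blocking call
                } catch (IOException closedToUnblockUs) {
                    break;                                      // shutting-down peer closed the socket
                }
                if (Thread.currentThread().isInterrupted()) {   // check after unblocking
                    break;
                }
                serve(client);                                  // internal, unobservable work
            }
        } finally {
            release();                                          // release sockets and other resources
        }
    }

    private void serve(Socket client) { /* application logic */ }
    private void release()            { /* close held resources */ }
}
```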
V. IMPLEMENTATION AND EXPERIMENTS
This section presents the implementation and experimental results of our centralization tool.
Table I: EXPERIMENTAL RESULTS OF CENTRALIZATION
<table>
  <thead>
    <tr>
      <th>Project</th>
      <th>#Classes</th>
      <th>#Unique cl.</th>
      <th>#Shared name (same code)</th>
      <th>#Shared name (diff. code)</th>
      <th>#Renamed</th>
      <th>#Static fields</th>
      <th>#Transformed static fields</th>
      <th>#Static sync. methods</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Netz-0.4</td><td>91</td><td>12</td><td>37</td><td>42</td><td>57</td><td>109</td><td>69</td><td>0</td></tr>
    <tr><td>Nett-0.5</td><td>88</td><td>9</td><td>12</td><td>59</td><td>67</td><td>20</td><td>4</td><td>0</td></tr>
    <tr><td>Kryonet-2.08</td><td>79</td><td>8</td><td>12</td><td>59</td><td>67</td><td>20</td><td>4</td><td>0</td></tr>
    <tr><td>Kryonet-2.20</td><td>104</td><td>53</td><td>12</td><td>59</td><td>67</td><td>20</td><td>4</td><td>0</td></tr>
    <tr><td>Xnio-2.1.0CR1</td><td>74</td><td>7</td><td>21</td><td>46</td><td>66</td><td>46</td><td>46</td><td>0</td></tr>
    <tr><td>Xnio-2.0.0CR2</td><td>72</td><td>5</td><td>21</td><td>46</td><td>66</td><td>46</td><td>46</td><td>0</td></tr>
    <tr><td>Ganymed-ssh2-build209</td><td>115</td><td>0</td><td>94</td><td>21</td><td>75</td><td>182</td><td>84</td><td>3</td></tr>
    <tr><td>Ganymed-ssh2-build210</td><td>133</td><td>18</td><td>94</td><td>21</td><td>75</td><td>182</td><td>84</td><td>3</td></tr>
    <tr><td>Edtftpj-2.3.0</td><td>106</td><td>7</td><td>80</td><td>25</td><td>51</td><td>367</td><td>151</td><td>10</td></tr>
    <tr><td>Edtftpj-2.4.0</td><td>113</td><td>8</td><td>80</td><td>25</td><td>51</td><td>367</td><td>151</td><td>10</td></tr>
    <tr><td>Mime4j-core-0.7.1</td><td>61</td><td>0</td><td>60</td><td>1</td><td>26</td><td>118</td><td>59</td><td>1</td></tr>
    <tr><td>Mime4j-core-0.7.2</td><td>61</td><td>0</td><td>60</td><td>1</td><td>26</td><td>118</td><td>59</td><td>1</td></tr>
    <tr><td>Jsmpp-2.0</td><td>201</td><td>0</td><td>191</td><td>10</td><td>134</td><td>811</td><td>405</td><td>2</td></tr>
    <tr><td>Jsmpp-2.1</td><td>202</td><td>1</td><td>191</td><td>10</td><td>134</td><td>811</td><td>405</td><td>2</td></tr>
  </tbody>
</table>
A. Implementation
We implement a centralization tool by transforming Java bytecode, based on the ASM bytecode library [4]. Before centralization starts, the centralizer parses a user-defined script into a Java startup class file, which defines how each process starts. The centralizer transforms the classes of each project as described in the script, as defined in previous work [11], [1]. After transformation, the centralized program can be executed from the synthesized startup program.
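The synthesized startup program may take roughly the following shape. This is a hypothetical sketch, not the actual generated code or script format of the tool; the class names server.Main and client.Main and the arguments are invented for illustration.

```java
// Hypothetical startup class: each original process becomes a group of threads
// started in the order given in the user-defined script.
final class CentralizedMain {
    public static void main(String[] args) throws Exception {
        startProcess(1, "server.Main", new String[] { "8080" });
        startProcess(2, "client.Main", new String[] { "localhost", "8080" });
        startProcess(3, "client.Main", new String[] { "localhost", "8080" });
    }

    private static void startProcess(int id, String mainClass, String[] procArgs) {
        ThreadGroup group = new ThreadGroup("process-" + id);
        Thread main = new Thread(group, () -> {
            try {
                Class.forName(mainClass)
                     .getMethod("main", String[].class)
                     .invoke(null, (Object) procArgs);          // run the wrapped process
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }, "main-" + id);
        main.start();
    }
}
```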
The centralization tool is implemented as four passes. The first pass reads the class files to build internal data structures; the second pass implements project centralization using the class renaming algorithm in Fig. 3. The third pass transforms static fields and class descriptors [11], [1]. We refine the centralization of static fields by transforming final static fields that store mutable data; final static fields storing immutable data do not cause data races. The fourth pass performs the transformation that preserves the startup and shutdown semantics. For the startup semantics, the main issue is to ensure that components start up in the desired order so that dependencies between components are satisfied; for example, a server needs to be ready to accept connections before its clients are started. We limit our code instrumentation to a few key network functions. Whenever some component tries to connect to a port, it creates an external process to check whether the port is open. If the port is open, it continues to connect; otherwise it waits until the port is open. This approach does not modify the Java network library, and it scales up to larger network applications. For the shutdown semantics, we have manually verified various situations in which a thread successfully terminates after receiving the interruption signal, as described in Section IV. Process resource registration and release are also implemented as described in previous work [1].
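The wait-for-port logic can be sketched as follows. In the tool, the check is performed by an external process; for brevity, this hedged sketch probes the port directly from the waiting thread, and the timeout values are arbitrary.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Block until the given TCP port accepts connections, so that a client
// component does not try to connect before the server component is ready.
final class PortCheck {
    static void waitForPort(String host, int port) throws InterruptedException {
        while (true) {
            try (Socket probe = new Socket()) {
                probe.connect(new InetSocketAddress(host, port), 200);  // 200 ms connect timeout
                return;                                                 // port is open
            } catch (IOException notOpenYet) {
                Thread.sleep(50);                                       // retry shortly
            }
        }
    }
}
```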
B. Experiment
We apply our centralization tool to some existing Java network projects as benchmarks. The experiment centralizes two versions of each project as a group. Table I shows the results. The changes in the number of classes in each group indicate that some classes were removed or new classes were added; the number of unique classes shows the details of such changes. The shared-name columns show that a project version update does not change many class names: most class names remain the same, while some classes modify their implementations. These numbers are listed in the same-code and diff-code columns, respectively. Column #Renamed displays the number of classes renamed for each project by the renaming algorithm in Fig. 3.
C. Applications
1) Centralization with JPF: To show that our centralization tool performs a correct transformation, we first repeat previous experiments using centralization with Java PathFinder (JPF) [1]. These experiments use the Echo client/server, Daytime client/server, Alphabet client/server [2], and a chat server as test beds. We correctly find all the described bugs.
We proceed to seed some common faults into these benchmarks. One of these faults is a program crash caused by a truncated message. Consider the Echo client/server example: in the original protocol, the server first initializes itself and waits for the two clients to connect. When a client connects to the server, the server sends the same message back to the client. The client exits after receiving the echo message from the server. The server terminates after it has served two clients. In the faulty version, we change the code of one client and of the server. One client is modified to crash if the messages it sends and receives are not the same length; the other client's code remains the same. On the server side,
we inject a fault that sends a truncated message back to the client with a low probability. A modular verification using JPF, analyzing either the client side or the server side separately as implemented by net-iocache [2], cannot detect such bugs. Previous centralization tools are not applicable as they do not support applications with a class conflict. After applying our centralization to all network peers, we can successfully detect these bugs with JPF.
2) Centralization with profiling tools: Profiling is important for understanding the runtime behavior of network applications. However, existing profiling tools like JRAT [7] only support a single process. Although profiling each process of a network application separately is possible, such an analysis is difficult to automate and introduces overhead to start and destroy multiple JVMs. Profiling the centralized program avoids such overhead by running on a single VM. The performance of each component and the execution traces of the whole network application can also be retrieved by existing profiling tools. We have performed experiments using JRAT on centralized distributed applications. The results show that centralization easily automates such integrated profiling of different components.
VI. RELATED WORK
Stoller [11] initially proposed to use centralization for verifying distributed Java applications. Artho et al. [1] improve the accuracy of centralization for such verification with JPF. However, their implementation uses the outdated SERP bytecode library [9], which makes it unable to work on current Java applications.
Compared with previous work, we intend to build an automatic centralization tool for a general-purpose analysis of distributed applications. As resolving class conflicts is essential for centralizing larger distributed applications, we propose our solution and implement it in our centralization tool. Our solution of startup and shutdown semantics does not depend on specific tools, either. Although the large state space of distributed applications limits software model checkers to small cases, our general centralization approach enables many existing dynamic analysis tools to analyze larger distributed applications.
Other work on verifying distributed applications includes net-iocache [2] and modeling the Java class loader [10], both of which target JPF. Compared with the centralization approach, net-iocache runs faster but sacrifices the completeness of verifying all execution traces. As a consequence, net-iocache cannot find some bugs that centralization can.
Modeling multiple processes by using separate class loaders is proposed as a new feature in JPF v7 [10]. It models class loaders to separate process name spaces. Currently, JPF v7 is under development. We will compare it with our centralization approach after it is released.
VII. CONCLUSION AND FUTURE WORK
In this paper, we advance centralization as a general analysis framework for distributed Java applications. We formalize and solve the class conflict to support centralization on applications containing multiple versions of a given class. We also propose a cleaner and complete solution for shutdown semantics. We implement an automatic centralization tool and validate it empirically. The experiments show that our tool works correctly, and support the usefulness of tool automation. The experiment using Java PathFinder shows that some defects can be detected by analyzing the centralized program but not without centralization.
Future work includes running experiments on various dynamic analysis tools, finishing the remaining implementation of the proposed shutdown semantics, and optimizing the class renaming algorithm.
ACKNOWLEDGMENT
This work was supported by Global COE and SEUT [Secure-Life Electronics] Program from MEXT, Japan. Thanks also go to Nastaran Shafie for her comments.
REFERENCES
Axilog: Language Support for Approximate Hardware Design
Amir Yazdanbakhsh Divya Mahajan Bradley Thwaites Jongse Park Anandhavel Nagendrakumar
Sindhuja Sethuraman Kartik Ramkrishnan* Nishanthi Ravindran* Rudra Jariwala* Abbas Rahimi§
Hadi Esmaeilzadeh
Georgia Institute of Technology *University of Minnesota §UC San Diego
[email protected]
Abstract—Relaxing the traditional abstraction of “near-perfect” accuracy in hardware design can lead to significant gains in energy efficiency, area, and performance. To exploit this opportunity, there is a need for design abstractions that can systematically incorporate approximation in hardware design. We introduce Axilog, a set of language annotations that provides the necessary syntax and semantics for approximate hardware design and reuse in Verilog. Axilog enables the designer to relax the accuracy requirements in certain parts of the design, while keeping the critical parts strictly precise. Axilog is coupled with a Relaxability Inference Analysis that automatically infers the relaxable gates and connections from the designer’s annotations. The analysis provides formal safety guarantees that approximation will only affect the parts that the designer intended to approximate, referred to as relaxable elements. Finally, the paper describes an approximate synthesis flow that leverages a commercial synthesis tool. We evaluate Axilog using a set of benchmark designs from domains including arithmetic units, signal processing, robotics, machine learning, and image processing. The evaluations use TSMC 45-nm libraries at the slowest PVT corner and show that our framework achieves, on average, 45% energy savings and 1.8× area reduction with 5% output quality loss, and 54% energy savings and 1.9× area reduction with 10% quality loss.
I. INTRODUCTION
Emerging applications such as data analytics, machine learning, multimedia, search, and cyber physical systems are inherently approximate and can tolerate imprecision in many parts of their computation. The prevalence of these applications has coincided with diminishing performance and energy returns from traditional CMOS scaling [1,2]. Several pioneering works have shown significant benefits with approximation at the circuit level [2–17]. Most of these techniques focus on optimization of individual functional units and approximate synthesis algorithms, opening avenues for utilizing approximation at the circuit level. However, there is a lack of abstractions that enable designers to methodically control which parts of the circuit can be synthesized approximately while keeping critical elements, such as the control logic, precise. Thus, there is a need for approximate hardware description languages for systematic approximate hardware design.
In this work, we introduce Axilog—a set of concise, intuitive, and high-level annotations—that provides the necessary syntax and semantics for approximate hardware design and reuse in Verilog. Axilog enables designers to reason about and delineate which parts of a hardware system or circuit design are critical and cannot be approximated. A key factor in our language formalism is to abstract away the details of approximation while maintaining the designer’s oversight in deciding which circuit elements are synthesized approximately. Axilog is also devised with modular reusability as a first order consideration. In general, hardware systems implementation relies on modular design practices where the engineers build libraries of modules and reuse them to build more complex hardware systems. Axilog provides a specific set of annotations to support reusability. Section II elaborates on the Axilog annotations for approximate hardware design and reuse.
There are a number of approximate software programming languages including EnerJ [18] and Rely [19]. We do not extend EnerJ or Rely’s language constructs to Verilog because they require a large number of manual annotations. Instead, we introduce a new set of annotations and couple them with a Relaxability Inference Analysis that automatically infers which circuit elements are relaxable with respect to the designer’s annotations. The Relaxability Inference Analysis formally guarantees that approximation will only affect the circuit elements that the designer intended to approximate. Section III details this analysis. In Section IV, we describe an approximate synthesis flow that leverages a commercial synthesis tool (Synopsys Design Compiler) to apply approximation to the parts of the design that the analysis deems safe to approximate.
Section V evaluates Axilog, its analysis, and the synthesis flow using a set of benchmark designs from domains including arithmetic units, signal processing, robotics, machine learning, and image processing. The evaluations use TSMC 45-nm multi-VT libraries at the slowest PVT corner and show that by setting the quality loss to 5%, our framework achieves, on average, 45% energy savings and 1.8× area reduction. Allowing a quality loss of 10% results in 54% average energy savings and 1.9× area reduction. Further, we evaluate the robustness of our approach across a wide range of temperature variations (ΔT=125°C). Axilog yields these significant benefits while only requiring between 2 and 12 annotations even with complex designs containing up to 22,407 lines of code. These results confirm the effectiveness of Axilog in incorporating approximation in the hardware design cycle.
II. APPROXIMATE HARDWARE DESIGN WITH AXILOG
Our principal objectives for approximate hardware design in Axilog are (1) to carefully craft a small number of Verilog annotations which provide the designer with complete oversight and governance over the approximation; (2) to minimize the number of manual annotations while relying on the Relaxability Inference Analysis to automatically infer the designer’s intent for approximation; (3) to relieve the designer from the details of the approximate synthesis process by providing an intuitive separation between approximate design and synthesis;
and (4) to support the reuse of Axilog modules across different designs without the need for reimplementation. Furthermore, Axilog is a backward-compatible extension of Verilog. That is, an Axilog code with no annotations is a normal Verilog code and the design carries the traditional semantics of strict accuracy. To this end, Axilog provides two sets of language extensions, one set for the design (Section II-A) and the other for the reuse and interfacing of hardware modules (Section II-B). Table I summarizes the syntax for the design and reuse annotations. The annotations for design dictate which operations and connections are relaxable (safe to approximate) in the module. Henceforth, for brevity, we refer to operations and connections as design elements. The annotations for reuse enable designers to use the annotated approximate modules across various designs without the need for reimplementation. The back-end flow then uses these annotations to determine where in the design to use less costly hardware resources that allow relaxed accuracy (see section III). We provide detailed examples to illustrate how designers are able to appropriately relax or restrict the approximation in hardware modules. Using these examples, we elucidate the interplay between annotations and language constructs for hardware design, such as instantiation, concurrent assignment, and vector declaration. In the examples, we use background shading to highlight the relaxable elements inferred by the analysis.
A. Design Annotations
Axilog allows each design element to be precise or approximate. The designer’s annotations provide the guidelines to identify the design elements that are safe to approximate. Relaxing accuracy requirements. By default, all design elements (operations and connections) are precise. The designer can use the relax(arg) statement to implicitly approximate a subset of these elements. The variable arg is either a wire, reg, output, or inout. Design elements that exclusively affect signals designated by the relax annotation are safe to approximate. The use of relax is illustrated by the following example.
```
module full_adder(a, b, c_in, c_out, s);
  input a, b, c_in;
  output c_out;
  approximate output s;
  assign s = a ^ b ^ c_in;
  assign c_out = (a & b) | (b & c_in) | (a & c_in);
  relax(s);
endmodule
```
In this `full_adder` module, s is the sum of the three inputs a, b, and c_in. The `relax(s)` statement shows the designer’s intent to relax the accuracy requirement of the design elements that exclusively affect s, while keeping the unannotated c_out (carry out) signal precise. The relax(s) statement implies that the analysis can automatically approximate the XOR (^) operations. Adhering to the designer’s intent, the unannotated c_out signal and the logic generating it will not be approximated. Furthermore, since s will carry relaxed semantics, its corresponding output is marked with the approximate annotation. The approximate annotation is necessary for reusing modules and will be discussed in Section II-B. With these annotations and the automated analysis, the designer does not need to individually declare the inputs (a, b, c_in) or any of the XOR (^) operations as approximate. Thus, while designing approximate hardware modules, this abstraction significantly reduces the burden on the designer to understand and analyze complex data flows within the circuit.
Scope of approximation. Scope of the relax annotation crosses the boundaries of instantiated modules. The code on the left side of the following example illustrates this characteristic.
The relax(x) annotation in the `nand_gate` module implies that the AND (&) operation in the `and_gate` module is relaxable. In some cases, the designer might not want the approximation to cross the scope of the instantiated modules. For example, the designer might not want the approximation to affect a third-party IP core. Axilog provides the relax_local annotation to limit the scope of approximation and its effects to the logic within the same module in which the annotation is declared.
```
module and_gate(n, a, b);
  input a, b; output n;
  assign n = a & b;
endmodule

module nand_gate(x, a, b);
  input a, b;
  approximate output x;
  wire w0;
  and_gate a1(w0, a, b);
  assign x = ~w0;
  relax(x);
endmodule
```
```
module and_gate(n, a, b);
  input a, b; output n;
  assign n = a & b;
endmodule

module nand_gate(x, a, b);
  input a, b;
  approximate output x;
  wire w0;
  and_gate a1(w0, a, b);
  assign x = ~w0;
  relax_local(x);
endmodule
```
The code on the right side shows that the relax_local annotation does not affect the semantics of the instantiated and_gate module, a1. In this case, the AND (&) operation in the and_gate module is not relaxable. However, the NOT (~) operation, which shares the scope of the relax_local annotation, is relaxable. The scope of approximation for both relax and relax_local is the module in which they are declared: relax penetrates the boundary of module instantiations, but relax_local does not. The relax_local and relax annotations can also be applied selectively to certain bits of a vector.
Restricting approximation. In some cases, the designer might want to explicitly restrict approximation in certain parts of the design. Axilog provides the restrict(arg) annotation that ensures that any design element that affects the annotated argument (arg) is precise, unless a preceding relax or relax_local annotation has made the driving elements relaxable.
```
module and_gate(n, a, b);
  input a, b; output n;
  assign n = a & b;
endmodule

// Left example: relax(w0) with restrict(x)
module nand_gate(x, a, b);
  input a, b;
  approximate output x;
  wire w0;
  and_gate a1(w0, a, b);
  assign x = ~w0;
  relax(w0);
  restrict(x);
endmodule

// Right example: restrict(w0) with relax(x)
module nand_gate(x, a, b);
  input a, b;
  approximate output x;
  wire w0;
  and_gate a1(w0, a, b);
  assign x = ~w0;
  restrict(w0);
  relax(x);
endmodule
```
The above examples show the interplay between the relax and restrict annotations. On the left side, the designer intends to relax the accuracy of the elements that affect w0 while keeping the ones that affect x precise; hence relax(w0) and restrict(x).
With these two declarations, the NOT(\(^\sim\)) operation is not approximated but the AND(&) operation will be approximated. Conversely, in the example on the right, the designer relaxes the accuracy of the elements that affect x excluding that which affects w0. The pair of restrict(w0) and relax(x) imply that the NOT operation is approximated while the and_gate and its AND(&) operation remains precise. The restrict annotation crosses the boundary of instantiated modules. In both examples, the output x carries approximate semantics and needs to be annotated with approximate.
Restricting approximation globally. The restrict annotation does not have precedence over relax. However, there might be cases where the designer intends to override preceding relax annotations. For instance, the designer might intend to reuse a third-party approximate IP core in a precise setting. Certain approximate outputs of the IP core might be used to drive critical signals such as the ones that feed to the controller state machine, write enable of registers, address lines of a memory module, or even clock and reset. These signals are generally critical to the functionality of the circuit and the designers would want to avoid approximating them. To ensure the precision of these signals Axilog provides the restrict_global annotation that has precedence over relax and relax_local. The restrict_global(arg) implies that any design element that affects arg shall not be subject to any approximation. Note that restrict_global penetrates through the boundaries of instantiated modules. The following code snippet illustrates the semantics of the restrict_global annotation.
```verilog
module and_gate(n, a, b);
  input a, b;
  approximate output n;
  assign n = a & b;
  relax(n);
endmodule

module top(x, x0, x1);
  input x0, x1;
  output x;
  and_gate a1(x, x0, x1);
  restrict_global(x);
endmodule
```
In this code, restrict_global(x) in the top module overrides the relax(n) in the and_gate module. The restrict_global annotation does not allow any form of relaxation to affect the logic that drives x, and therefore x is not declared approximate. The rest of this section discusses language annotations, similar to the approximate annotation, that enable reusability in Axilog.
B. Reuse Annotations
This section describes the abstractions that are necessary for reusing approximate modules. Our principal idea for these language abstractions is to maximize the reusability of approximate modules across designs that may have different accuracy requirements. Axilog’s reuse annotations concisely modify the module interface: they declare which outputs carry approximate semantics and which inputs cannot be driven by relaxed wires without explicit annotations.
Outputs carrying approximate semantics. As mentioned, designers can use annotations to selectively approximate the design elements in a module. These design elements might have a direct or indirect effect on the accuracy of some of the output ports. An approximate module could be given to a different vendor as an IP core. In this case, the reusing designer needs to be aware of the accuracy semantics of the input/output ports without delving into the details of the module. To enable the reusing designer to view the port semantics, Axilog requires all output ports that might be influenced by approximation to be marked as approximate. Below, the code snippets illustrate the necessity of the approximate annotation.
```verilog
module and_gate(n, a, b);
  input a, b;
  approximate output n;
  assign n = a & b;
  relax(n);
endmodule

module nand_gate(x, a, b);
  input a, b;
  // x is driven by the approximate output of a1, so it must also be
  // declared approximate for the module to be reusable.
  approximate output x;
  wire w0;
  and_gate a1(w0, a, b);
  assign x = ~w0;
endmodule

module multiplexer(select, x0, x1, z);
  critical input select;
  input x0, x1;
  output z;
  assign z = (select == 1) ? x1 : x0;
endmodule
```
Conversely, inputs that must not be driven by approximate signals are marked with the critical annotation. In this example, the select input of the multiplexer is declared as critical to prevent approximation from affecting it.
Bridging approximate modules to critical inputs. By default, Axilog does not allow any wire that is affected by approximation to drive a critical input. However, we recognize that there may be cases where the reusing designer entrusts a critical input to an approximate driver. For such situations, Axilog provides an annotation called bridge, which expresses the designer’s explicit intent to drive a critical input with an approximate signal and certifies this connectivity. The example below shows the use of the bridge annotation.
```verilog
module top(x0, x1, z);
  input x0, x1;
  approximate output z;
  wire s;
  and a1(s, x0, x1);
  relax(s);
  bridge(s);
  multiplexer m1(s, x0, x1, z);
endmodule
```
In this code, the designer’s annotation relaxes the logic driving s, which is connected to the critical input select of the multiplexer. This connectivity therefore requires the designer’s consent. The bridge(s) annotation certifies the connectivity of the approximated signal s to the select critical input of the m1 instance of the multiplexer module.
In summary, the semantics of the relax and restrict annotations provides abstractions for designing approximate hardware modules while enabling Axilog to provide formal guarantees of safety that the approximation will only be restricted to the design elements that are specifically selected by the designer. Moreover, the approximate output, critical input, and bridge annotations enable reusability of the modules across different designs. In addition to the modularity, the design and reuse annotations altogether enable approximation polymorphism in hardware design. That is, with Axilog, the modules with approximate semantics can be used in a precise manner without reimplementation and conversely precise modules can be instantiated with approximate semantics. These abstractions provide a natural extension to the current practices of hardware design and enable the designer to apply approximation.
with full control without adding substantial overhead to the conventional hardware design and verification cycle.
III. RELAXABILITY INFERENCE ANALYSIS
After the designer provides annotations, the compiler needs to perform a static analysis to find the approximate and precise design elements in accordance with these annotations. This section presents the Relaxability Inference Analysis, a static analysis that identifies these relaxable gates and connections.
To simplify the implementation, we first translate the RTL Verilog design to primitive gates, while maintaining the module boundaries. We then apply the Relaxability Inference Analysis at the gate level. The Relaxability Inference Analysis is a backward slicing algorithm that starts from the annotated wires and iteratively traverses the circuit to identify which wires must carry precise semantics. Subtracting the set of precise wires from all the wires in the circuit yields the relaxable set of wires. The gates that immediately drive these relaxable wires are the ones that the synthesis can potentially approximate.
Algorithm 1 illustrates the procedure that identifies the precise wires.
```
// Inputs:
//   K: circuit under analysis
//   M: set of all module instances in K, ordered hierarchically
//   R: set of all globally restricted wires
// Output:
//   P: set of precise wires

P <- {}
for each module m_i in M do
    I    <- set of all input ports of m_i
    A    <- set of all relaxed wires in m_i
    LA   <- set of all locally relaxed wires in m_i
    Sink <- set of all unannotated outputs and restricted wires in m_i
    UW   <- set of wires driven by modules instantiated within m_i

    // Phase 1: identify the local precise wires of m_i
    N <- Sink                               // precise wires found in m_i
    W <- Sink                               // worklist
    while W != {} do
        w <- dequeue(W)
        if w in I or w in UW then continue  // stop at module inputs and submodule outputs
        for each input u of the gate driving w do
            if u not in (A + LA) and u not in N then
                N <- N + {u};  W <- W + {u}
        end for
    end while
    P <- P + N

    // Phase 2: mark relaxed outputs of submodules instantiated in m_i
    for each w_j in UW do
        if w_j drives an explicitly relaxed wire (not one inferred from relax_local)
           and w_j drives no wire in N then
            m_s <- submodule of m_i driving w_j
            mark w_j as relaxed in m_s      // added to the set A of m_s before m_s is analyzed
        end if
    end for
end for

// Phase 3: any wire affecting a globally restricted wire is precise
while R != {} do
    w_k <- dequeue(R)
    P <- P + {w_k}
    R <- R + inputs of the gate driving w_k
end while
```
Algorithm 1: Backward flow analysis for finding precise wires.
This procedure is a backward-flow analysis that operates in three phases: (1) The first phase starts by identifying a set of sink wires. The sink wires are either unannotated outputs or wires that are explicitly annotated with restrict. The procedure identifies the gates that are driving the sink wires and adds their input wires to the precise set if they are not explicitly annotated as relaxed. The algorithm repeats this step for the newly added wires until it reaches an input or an explicitly relaxed wire. This phase is limited to the scope of the module under analysis. (2) In the second phase, the algorithm identifies the relaxed outputs of the instantiated submodules. Due to the semantic differences between relax and relax_local, the output of a submodule is considered relaxed if the following two conditions are satisfied: (a) the output drives another explicitly relaxed wire, which is not inferred due to a relax_local annotation; and (b) the output is not driving a wire already identified as precise. The algorithm automatically annotates these qualifying outputs as relaxed. The analysis repeats these two phases for all the instantiated submodules. For the correct functionality of this analysis, all the module instantiations are distinct entities in the set \( M \) and are ordered hierarchically. (3) In the final phase, the algorithm marks any wire that affects a globally restricted wire as precise. This final phase allows restrict_global to override any other annotations in the design.
Finally, the Relaxability Inference Analysis—part of which is presented in Algorithm 1—identifies the safe-to-approximate subset of the gates and wires with regards to the designer annotations. An approximation-aware synthesis tool can then generate an optimized netlist, with the approximation applied to only the safe-to-approximate circuit elements.
IV. APPROXIMATE SYNTHESIS
In our framework, the synthesis tool first takes in the annotated Verilog source code and produces a gate-level netlist without employing any approximate optimizations. However, the synthesis tool preserves the approximate annotations. Then, the Relaxability Inference Analysis identifies the safe-to-approximate subset of the gates and wires with regard to the designer annotations. In the next step, the synthesis tool applies approximate synthesis and optimization techniques only to the safe-to-approximate circuit elements. The tool has the liberty to apply any approximate optimization technique including gate substitution, gate elimination, logic restructuring, voltage over-scaling, and timing speculation as it deems prudent. The objective is to minimize a combination of error, delay, energy, and area considering final quality requirements. Figure 1 shows one such approximate synthesis technique. Our synthesis technique uses commercial tools to selectively relax timing requirements on safe-to-approximate paths of the circuit. As shown in Figure 1a, we first use Synopsys Design Compiler to synthesize the design with no approximation. We perform a multi-objective optimization targeting the highest frequency while minimizing power and area. We will refer to the resulting netlist as the baseline netlist and its frequency as the baseline frequency. We account for variability by using Synopsys PrimeTimeVX which, given timing constraints, provides the probability of timing violations due to variations. In case of violation, the synthesis process is repeated by adjusting timing constraints until PrimeTimeVX confirms no violations.
Second, as shown in Figure 1b, we selectively relax the timing constraints and provide more slack on the safe-to-approximate paths. For the precise paths, the timing constraints are set to the most strict level (the baseline frequency). We then extract the post-synthesis gate delay information in Standard Delay Format (SDF) and perform gate-level timing simulations with a set of input datasets. We use the baseline frequency for the timing simulations even though some of the safe-to-approximate paths are synthesized with more timing slack. The timing simulations yield a set of output values that may incur quality loss since the approximated paths in the circuit may
Fig. 1: Synthesis flow for (a) baseline and (b) approximate circuits.
not generate the correct output at the baseline frequency. We then measure the quality loss, and if the quality loss exceeds the designer’s requirement, we tighten the timing constraints on the safe-to-approximate paths. We repeat this step until the designer’s quality requirements are satisfied. This methodology has the potential to reduce energy and area by utilizing slower and smaller gates on the safe-to-approximate paths, for which we use relaxed timing constraints.
V. EVALUATION
To evaluate the effectiveness of Axilog, we annotate several benchmark designs and apply our Relaxability Inference Analysis and synthesis flow.
**Benchmarks and Code Annotation.** Table II lists the design benchmarks implemented in Verilog. We use Axilog annotations to judiciously relax some of the circuit elements. The benchmarks span a wide range of domains including arithmetic units, signal processing, robotics, machine learning, and image processing. Table II also includes the input datasets, application-specific quality metrics, number of lines, and number of Axilog annotations for design and reuse.
**Axilog annotations.** We annotated the benchmarks with the Axilog extensions. The designs were either downloaded from open-source IP providers or developed without any initial annotations. After development, we analyzed the source Verilog code to identify relaxable parts. The last two columns of Table II show the number of design and reuse annotations for each benchmark. The number of annotations ranges from 2 for Brent-Kung with 352 lines to 12 for InverseK with 22,407 lines. The Axilog annotations, coupled with the Relaxability Inference Analysis, have enabled us to use only a handful of annotations to effectively approximate designs that are implemented with thousands of lines of Verilog.
The relaxable parts are more common in the datapath of the benchmark designs than in their control logic. For example, K-means involves a significant number of multiplications and additions before the calculated result can be written to a memory module. We used relax annotations to declare these arithmetic operations approximable; however, we used restrict to ensure the precision of all the control signals. For smaller benchmarks, such as Brent-Kung, Kogge-Stone, and Wallace Tree, only a subset of the least significant output bits were annotated to limit the quality loss. To be able to reuse some of the designs, we also annotated the benchmarks with reuse annotations; the number of annotations of this type is listed in the last column of Table II. For example, the add_sub signal that selects between the addition and subtraction operation of an ALU is annotated with the critical reuse annotation. Overall, one graduate student was able to annotate all the benchmarks within two days without being involved in their design. The intuitive nature of the Axilog extensions makes annotating straightforward.
**Application-specific quality metrics.** Table II shows the application-specific error metrics to evaluate the quality loss due to approximation. Using application-specific quality metrics is commensurate with prior work on approximate computing and language design [18, 19]. In all cases, we compare the output of the original baseline application to the output of the approximated design. For the benchmarks which generate numeric outputs, including brent-kung adder, FIR filter, and wallace tree multiplier, we measure the average relative error. For the neural network, kmeans clustering, and sobel edge detection applications, which produce images, we use the average root-mean-square image difference.
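As a rough sketch, and assuming the conventional definitions of these metrics (the paper's exact normalization may differ), the two error measures can be written as

\[
E_{\text{rel}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{\lvert y_i - \hat{y}_i \rvert}{\lvert y_i \rvert},
\qquad
E_{\text{rms}} \;=\; \sqrt{\frac{1}{P}\sum_{p=1}^{P}\bigl(I(p)-\hat{I}(p)\bigr)^{2}},
\]

where \( y_i \) and \( \hat{y}_i \) are the baseline and approximate numeric outputs, and \( I \) and \( \hat{I} \) are the baseline and approximate images over \( P \) pixels.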
**Tools and experimental setup.** We use Synopsys Design Compiler (G-2012.06-SP5) and Synopsys PrimeTime (F-2011.06-SP3-2) for synthesis and energy analysis, respectively. We use Cadence NC-Verilog (11.10-s062) for timing simulation with SDF back annotations extracted from various operating corners. We use the TSMC 45-nm multi-Vt standard cell libraries, and the primary results are reported for the slowest PVT corner (SS, 0.81 V, 0°C).
**Experimental results.** Figure 2 illustrates the energy savings (2a) and area reduction (2b) when the quality loss limit is set to 5% and 10% in the synthesis flow. The baseline is synthesis with no approximation. With the 5% limit, our framework achieves, on average, 45% energy reduction and 1.8× area reduction. When the quality loss limit is set to 10%, the average gains grow to 54% energy reduction and 1.9× area reduction. The Axilog annotations force the control logic in these benchmarks to be precise. Therefore, benchmarks such as InverseK, Wallace Tree, Neural Network, and Sobel, which have a larger datapath, provide a larger scope for approximation and are usually the ones that see larger benefits. The structure of the circuit also affects the potential benefits. For instance, Brent-Kung and Kogge-Stone adders benefit differently from approximation due to the structural differences in their logic trees. The FIR benchmark shows the smallest energy savings since it is a relatively small design
that does not provide many opportunities for approximation. Nevertheless, FIR still achieves 11% energy savings and 7% area reduction with 10% quality loss. This result suggests that even designs with limited opportunities for approximation can benefit significantly from the precisely targeted relaxation that Axilog provides. We also evaluate the effectiveness of our technique in the presence of temperature variations over the full industrial range of 0°C to 125°C by measuring the impact of temperature fluctuations on the energy benefits for the same relaxed designs. Table III compares the energy benefits at the lower and higher temperatures (the quality loss limit is set to 10%). Over this temperature range, the average energy benefit ranges from 54% (at 0°C) to 48% (at 125°C). These results confirm the robustness of our framework, which yields significant benefits even when temperature varies.
We visually examine the output of the Sobel application, which generates an image. Figure 3 displays the output with 0% (no approximation), 5%, and 10% quality degradation; even the 10% quality loss is nearly indiscernible to the eye.
### VII. Conclusion
Axilog provides a less arduous framework for hardware design than a mere extension of existing approximate programming models. Axilog's automated analysis enables designers to approximate hardware without delving into the intricacies of synthesis and optimization. Furthermore, all the abstractions presented in this paper are concrete extensions to the mainstream Verilog HDL, providing designers with backward compatibility. We evaluated Axilog, its automated Relaxability Inference Analysis, and the presented approximate synthesis flow, demonstrating 54% average energy savings and 1.9× area reduction with merely 2 to 12 annotations per benchmark. These results confirm that Axilog is a methodical step toward practical approximate hardware design and reuse.
• An Example of Using the Stack
• Introduction to Programming the MC9S12 in C
o An example of using the stack
o Including hcs12.inc in assembly language programs
o Using a mask in assembly language programs
o Using the DIP switches on the Dragon12
o Putting a program into the MC9S12 EEPROM
o Displaying patterns from a table on the Dragon12 LEDs
o Comparison of C and Assembly language programs
Examples of Using the Stack
Consider the following:
```
2000 org $2000
2000 cf 20 00 lds #$2000
2003 ce 01 23 ldx #$0123
2006 cc ab cd ldd #$abcd
2009 34 pshx
200a 36 psha
200b 37 pshb
200c 07 04 bsr delay
200e 33 pulb
200f 32 pula
2010 30 pulx
2011 3f swi
2012 34 delay: pshx
2013 ce 03 e8 ldx #1000
2016 04 35 fd loop: dbne x,loop
2019 30 pulx
201a 3d rts
```
The following does not work; the RTS goes to the wrong place, because delay pushes X onto the stack but never pulls it off, so the RTS pops the saved X value instead of the return address:
```
2000 org $2000
2000 cf 20 00 lds #$2000
2003 ce 01 23 ldx #$0123
2006 cc ab cd ldd #$abcd
2009 34 pshx
200a 36 psha
200b 37 pshb
200c 07 04 bsr delay
200e 33 pulb
200f 32 pula
2010 30 pulx
2011 3f swi
2012 34 delay: pshx
2013 ce 03 e8 ldx #1000
2016 04 35 fd loop: dbne x,loop
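                      ; <-- missing pulx here: the X value pushed by pshx is still
                      ;     on the stack, so the rts below pops it as the return address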
2019 3d rts
```
Using Registers in Assembly Language
• The DP256 version of the MC9S12 has lots of hardware registers
• To use a register, you can use something like the following:
PORTB equ $0001
• It is not practical to memorize the addresses of all the registers
• Better practice: Use a file which has all the register names with their addresses
include "hcs12.inc"
• Here is some of hcs12.inc
```
;*****************************************************************************
PORTA   equ  0     ; port a = address lines a8 - a15
PTA     equ  0     ; alternate name for PORTA
PORTB   equ  1     ; port b = address lines a0 - a7
PTB     equ  1     ; alternate name for PORTB
DDRA    equ  2     ; port a direction register
DDRB    equ  3     ; port b direction register
;*****************************************************************************
; Prepared by Dr. Han-Way Huang
; Date: 12/31/2004
; HC12SDP256 I/O register locations
; HCS12 peripheral bits definitions
; D-Bug12 I/O functions calling address
; D-Bug12 SRAM interrupt vector table
; Flash and EEPROM commands
;*****************************************************************************
```
Using DIP switches to get data into the MC9S12
• DIP switches make or break a connection (usually to ground)
DIP Switches on Breadboard
• To use DIP switches, connect one end of each switch to a resistor
• Connect the other end of the resistor to +5 V
• Connect the junction of the DIP switch and the resistor to an input port on the MC9S12
• The Dragon12-Plus has eight dip switches connected to Port H (PTH)
• The four least significant bits of PTH are also connected to push-button switches.
- If you want to use the push-button switches, make sure the DIP switches are in the OFF position.
- When the switch is open, the input port sees a logic 1 (+5 V)
- When the switch is closed, the input sees a logic 0 (0.22 V)
Looking at the state of a few input pins
• Want to look for a particular pattern on 4 input pins
– For example want to do something if pattern on PH3-PH0 is 0110
• Don’t know or care what are on the other 4 pins (PH7-PH4)
• Here is the wrong way to do it:
```
ldaa PTH
cmpa #$06
beq  task
```
• If PH7-PH4 are anything other than 0000, you will not execute the task.
• You need to mask out the Don’t Care bits before checking for the pattern on the bits you are interested in
– To mask out don’t care bits, AND the bits with a mask which has 0’s in the don’t care bits and 1’s in the bits you want to look at.
```
ldaa PTH
anda #$0F
cmpa #$06
beq  task
```
• Now, whatever pattern appears on PH7-4 is ignored
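• For comparison, here is a minimal C sketch of the same masking test. PTH is assumed to be declared elsewhere as the port H data register, and task() is a hypothetical handler, so this is an illustration rather than the notes' own code:
```
/* Sketch of the masking test in C.
 * Assumptions: PTH is mapped to the port H data register elsewhere,
 * and task() is a placeholder for whatever should happen on a match. */
extern volatile unsigned char PTH;   /* port H data register (assumed declared elsewhere) */
void task(void);                     /* hypothetical handler for the 0110 pattern */

void check_switches(void)
{
    /* AND with 0x0F to ignore PH7-PH4, then compare only the low four bits */
    if ((PTH & 0x0F) == 0x06) {
        task();
    }
}
```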
Using an HC12 output port to control an LED
• Connect an output port from the HC12 to an LED.
When a current flows through an LED, it emits light.
Making a pattern on a seven-segment LED
• Want to generate a particular pattern on a seven-segment LED:
- For example, to display a 0, turn on segments a, b, c, d, e and f, or bits 0, 1, 2, 3, 4 and 5 of PORTB. The binary pattern is 0011 1111, or $3f.
- To display 0 2 4 6 8, the hex numbers are $3f, $5b, $66, $7d, $7f.
• Determine a number (hex or binary) which will generate each element of the pattern
• Put the numbers in a table
• Go through the table one by one to display the pattern
• When you get to the last element, repeat the loop
as12, an absolute assembler for Motorola MCU's, version 1.2h
; Program to display a pattern on a seven-segment LED display
```
include "hcs12.inc"
2000 prog: equ $2000
1000 data: equ $1000
2000 stack: equ $2000
0005 table_len: equ (table_end-table)
2000 org prog
2000 cf 20 00 lds #stack ; initialize stack pointer
2003 86 ff ldaa #$ff ; Make PORTB output
2005 5a 03 staa DDRB ; 0xFF -> DDRB
2007 ce 10 00 l1: ldx #table ; Start pointer at table
200a a6 00 l2: ldaa 0,x ; Get value
200c 5a 01 staa PORTB ; Update LEDs
200e 07 08 bsr delay ; Wait a bit
2010 08 inx ; point to next
2011 8e 10 05 cpx #table_end ; More to do?
2014 25 f4 blo l2 ; Keep going through table
2016 20 ef bra l1 ; At end; reset pointer
2018 36 delay: psha
2019 34 pshx
201a 86 64 ldaa #100
201c ce 1f 40 loop2: ldx #8000
201f 04 35 fd loop1: dbne x,loop1
2022 04 30 f7 dbne a,loop2
2025 30 pulx
2026 32 pula
2027 3d rts
1000 org data
1000 3f table: dc.b $3f
1001 5b dc.b $5b
1002 66 dc.b $66
1003 7d dc.b $7d
1004 7f dc.b $7F
1005 table_end:
```
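For comparison with the C discussion later in these notes, here is a rough C sketch of the same table-driven display loop. It is not from the original program: PORTB and DDRB are assumed to be provided as register declarations or macros (as in hcs12.inc), and the delay loop counts are illustrative rather than calibrated.
```
/* Hedged C sketch of the table-driven LED pattern program.
 * Assumptions: PORTB and DDRB are mapped to the HCS12 registers elsewhere;
 * the busy-wait counts mirror the assembly version but are not calibrated. */
extern volatile unsigned char PORTB;   /* port B data register (assumption) */
extern volatile unsigned char DDRB;    /* port B direction register (assumption) */

static const unsigned char table[] = { 0x3f, 0x5b, 0x66, 0x7d, 0x7f };

static void delay(void)
{
    volatile unsigned int i, j;
    for (i = 0; i < 100; i++)          /* outer loop, as in the assembly delay */
        for (j = 0; j < 8000; j++)     /* inner busy-wait loop */
            ;
}

void main(void)
{
    unsigned char k;

    DDRB = 0xff;                       /* make port B an output */
    while (1) {
        for (k = 0; k < sizeof(table); k++) {
            PORTB = table[k];          /* drive the next pattern */
            delay();                   /* wait a bit */
        }
    }
}
```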
Putting a program into EEPROM on the Dragon12-Plus
• EEPROM from 0x400 to 0xFFF
• Program will stay in EEPROM memory even after power cycle
– Data will not stay in RAM memory (!)
• If you put the above program into EEPROM, then cycle power, you will display a sequence of patterns on the seven-segment LED, but the pattern will be whatever junk happens to be in RAM.
• To make sure you retain your patterns, put the table in the text part of your program, not the data part.
• If you use a variable which needs to be stored in data, be sure you initialize that variable in your program and not by using dc.b.
• The Dragon12 board uses an 8 MHz clock. The MC9S12 has an internal phase-locked loop which can change the clock speed. DBug12 increases the clock speed from 8 MHz to 48 MHz.
• When you run a program from EEPROM, DBug12 does not run, so your program will run six times slower than it would using DBug12. The lab has instructions on how to increase the MC9S12 clock from 8 MHz to 48 MHz so your program will run at the same speed as under DBug12.
MC9S12 Address Space
- **0x0000** to **0x03FF**: Registers (Hardware) - 1 K Byte (Covers 1 K Byte of EEPROM)
- **0x0400** to **0x0FFF**: User EEPROM - 3 K Bytes
- **0x1000** to **0x3BFF**: User RAM - 11 K Bytes
- **0x3C00** to **0x3FFF**: D-Bug 12 RAM - 1 K Bytes
- **0x4000** to **0x7FFF**: Fixed Flash EEPROM - 16k Bytes
- **0x8000** to **0xBFFF**: Banked Flash EEPROM - 16k Bytes
- **0xC000** to **0xFFFF**: Fixed Flash EEPROM (D-Bug 12) - 16k Bytes
• Here is the above program with table put into EEPROM
• Also, we have included a variable var which we initialize to $aa in the program
- We don’t use var in the program, but included it to show you how to use a RAM-based variable
```
include "hcs12.inc"
prog: equ $0400
data: equ $1000
stack: equ $2000
table_len: equ (table_end-table)
org prog
lds #stack ; initialize stack pointer
movb #$aa,var ; initialize var
ldaa #$ff ; Make PORTB output
staa DDRB ; 0xFF -> DDRB
l1:
ldx #table ; Start pointer at table
l2:
ldaa 0,x ; Get value
staa PORTB ; Update LEDs
bsr delay ; Wait a bit
inx ; point to next
cpx #table_end ; More to do?
blo l2 ; Yes, keep going through table
bra l1 ; At end; reset pointer
delay:
psha
pshx
ldaa #100
loop2:
ldx #8000
loop1:
dbne x,loop1
dbne a,loop2
pulx
pula
rts
table: dc.b $3f
dc.b $5b
dc.b $66
dc.b $7d
dc.b $7F
table_end:
org data
var: ds.b 1 ; Reserve one byte for var
```
# Programming the MC9S12 in C
- A comparison of some assembly language and C constructs
<table>
<thead>
<tr>
<th>Assembly</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>; Use a name instead of a num<br>COUNT: EQU 5</td>
<td>/* Use a name instead of a num */<br>#define COUNT 5</td>
</tr>
<tr>
<td>; To start a program<br>org $1000<br>lds #$3C00</td>
<td>/* To start a program */<br>main()<br>{<br>}</td>
</tr>
</tbody>
</table>
- Note that in C, the starting location of the program is defined when you compile the program, not in the program itself.
- Note that C always uses the stack, so C automatically loads the stack pointer for you.
<table>
<thead>
<tr>
<th>Assembly</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>; allocate two bytes for</td>
<td>/* Allocate two bytes for</td>
</tr>
<tr>
<td>; a signed number</td>
<td>* a signed number */</td>
</tr>
<tr>
<td>org $2000</td>
<td></td>
</tr>
<tr>
<td>i: ds.w 1</td>
<td>int i;</td>
</tr>
<tr>
<td>j: dc.w $1A00</td>
<td>int j = 0x1a00;</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Assembly</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>; allocate two bytes for<br>; an unsigned number<br>i: ds.w 1<br>j: dc.w $1A00</td>
<td>/* Allocate two bytes for<br> * an unsigned number */<br>unsigned int i;<br>unsigned int j = 0x1a00;</td>
</tr>
<tr>
<td>; allocate one byte for<br>; a signed number<br>i: ds.b 1<br>j: dc.b $1F</td>
<td>/* Allocate one byte for<br> * a signed number */<br>signed char i;<br>signed char j = 0x1f;</td>
</tr>
<tr>
<td>; Get a value from an address:<br>; put contents of address<br>; $E000 into variable i<br>i: ds.b 1<br>ldaa $E000<br>staa i</td>
<td>/* Get a value from an address:<br> * put contents of address<br> * 0xE000 into variable i */<br>unsigned char i;<br>i = * (unsigned char *) 0xE000;</td>
</tr>
<tr>
<td></td>
<td>/* Use a variable as a pointer (address) */<br>unsigned char *ptr, i;<br>ptr = (unsigned char *) 0xE000;<br>i = *ptr;</td>
</tr>
</tbody>
</table>
• In C, the construct *(num) says to treat num as an address, and to work with the contents of that address.
• Because C does not know how many bytes from that address you want to work with, you need to tell C how many bytes you want to work with. You also have to tell C whether you want to treat the data as signed or unsigned.
• i = * (unsigned char *) 0xE000; tells C to take one byte from address 0xE000, treat it as unsigned, and store that value in variable i.
• j = * (int *) 0xE000; tells C to take two bytes from address 0xE000, treat them as signed, and store that value in variable j.
• * (char *) 0xE000 = 0xaa; tells C to write the number 0xaa to a single byte at address 0xE000.
• * (int *) 0xE000 = 0xaa; tells C to write the number 0x00aa to two bytes starting at address 0xE000.
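Putting these pieces together, a common idiom (not shown in the original notes, but consistent with the PORTB address $0001 from the earlier equate) is to wrap the cast in a macro so a hardware register can be read and written like a variable. The volatile qualifier is an addition of this sketch; it keeps the compiler from optimizing the register accesses away.
```
/* Sketch: accessing HCS12 registers through casts wrapped in macros.
 * The addresses come from the earlier equates (PORTB at $0001, DDRB at $0003);
 * the volatile qualifier is an assumption added for correctness on real hardware. */
#define PORTB (* (volatile unsigned char *) 0x0001)
#define DDRB  (* (volatile unsigned char *) 0x0003)

void set_leds(unsigned char pattern)
{
    DDRB  = 0xff;       /* make port B an output */
    PORTB = pattern;    /* write the pattern to the port B pins */
}
```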
<table>
<thead>
<tr>
<th>Assembly</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>;---------------------------------------------</td>
<td>/*------------------------------------*/</td>
</tr>
<tr>
<td>; To call a subroutine</td>
<td>/* To call a function */</td>
</tr>
<tr>
<td>ldaa i</td>
<td>sqrt(i);</td>
</tr>
<tr>
<td>jsr sqrt</td>
<td></td>
</tr>
<tr>
<td>;---------------------------------------------</td>
<td>/*------------------------------------*/</td>
</tr>
<tr>
<td>; To return from a subroutine</td>
<td>/* To return from a function */</td>
</tr>
<tr>
<td>ldaa j</td>
<td>return j;</td>
</tr>
<tr>
<td>rts</td>
<td></td>
</tr>
<tr>
<td>;---------------------------------------------</td>
<td>/*------------------------------------*/</td>
</tr>
<tr>
<td>; Flow control</td>
<td>/* Flow control */</td>
</tr>
<tr>
<td>blo</td>
<td>if (i < j), with i and j unsigned</td>
</tr>
<tr>
<td>blt</td>
<td>if (i < j), with i and j signed</td>
</tr>
<tr>
<td>bhs</td>
<td>if (i >= j), with i and j unsigned</td>
</tr>
<tr>
<td>bge</td>
<td>if (i >= j), with i and j signed</td>
</tr>
<tr>
<td>;---------------------------------------------</td>
<td>/*------------------------------------*/</td>
</tr>
</tbody>
</table>
Here is a simple program written in C and assembly. It simply divides 16 by 2. It does the division in a function.
<table>
<thead>
<tr>
<th>Assembly</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>org $1000</td>
<td>unsigned char i;</td>
</tr>
<tr>
<td>i: ds.b 1</td>
<td></td>
</tr>
<tr>
<td>org $2000</td>
<td>unsigned char div(unsigned char j);</td>
</tr>
<tr>
<td>lds #$3C00</td>
<td>main()</td>
</tr>
<tr>
<td>ldaa #16</td>
<td>{</td>
</tr>
<tr>
<td>jsr div</td>
<td>i = div(16);</td>
</tr>
<tr>
<td>staa i</td>
<td>}</td>
</tr>
<tr>
<td>swi</td>
<td></td>
</tr>
<tr>
<td>div: asra</td>
<td>unsigned char div(unsigned char j)</td>
</tr>
<tr>
<td>rts</td>
<td>{</td>
</tr>
<tr>
<td></td>
<td>return j >> 1;</td>
</tr>
<tr>
<td></td>
<td>}</td>
</tr>
</tbody>
</table>
Building DevOps on Amazon Web Services (AWS)
Abstract
At its core, DevOps makes delivery of applications more efficient. Amazon Web Services (AWS) has the platform and services to recognize a code change and automate delivery of that change from development, through the support environments, to production. However, delivery of code is just one aspect of DevOps.
IBM extends the DevOps definition, making it an enterprise capability that enables organizations to seize market opportunities and reduce time to customer feedback. IBM’s main objectives are speeding continuous innovation of ideas, enabling continuous delivery of those innovations, and providing meaningful feedback for continuous learning, thereby putting the emphasis on deciding what code to change.
IBM extends DevOps to include all stakeholders in an organization who develop, operate or benefit from businesses systems. DevOps enables design thinking, which focuses on user outcomes, restless reinvention, and empowering teams to act. In addition, DevOps enables lean and agile methodologies, which guide teams to deliver in smaller increments and get early feedback. These approaches improve the content and quality of the changes in the application delivery lifecycle.
IBM provides an engineering approach to implementing DevOps on AWS on an existing portfolio of applications. Through a discovery workshop, we will analyze your application delivery lifecycle, identify areas for improvement, and then execute proof points on preselected applications. Based on those proof points, we will help you learn and move forward to onboard new applications, while also monitoring and measuring impact.
The IBM point of view
This paper will provide IBM’s point of view on DevOps, and how deploying on cloud can help you make the most of it. It will discuss practical approaches while focusing on AWS cloud offerings. It is the latest in our series of papers highlighting the partnership between IBM and AWS to help our joint customers achieve cloud success.
IBM defines DevOps as an enterprise capability that enables organizations to seize market opportunities and reduce time to customer feedback, and has three main business objectives:
1. Speeding continuous innovation of ideas by enabling collaborative development and testing across the value chain
2. Enabling continuous delivery of these innovations by automating software delivery processes and eliminating waste, while also helping to meet regulatory concerns
3. Providing a feedback loop for continuous learning from customers by monitoring and optimizing software-driven innovation
DevOps enables process and technology
Process
DevOps works with agile, lean, and design thinking to drive the loop of continuous delivery, feedback, and innovation. As we will see when discussing technology, AWS’ rapid deployment and data collection feed and improve this cycle.
IBM's DevOps approach applies these thinking principles to all stakeholders in an organization that develops, operates or benefits from the business' software systems, including customers, suppliers, and partners. By extending lean principles across the entire software supply chain, DevOps capabilities can improve productivity through accelerated customer feedback cycles, unified measurements and collaboration across an enterprise, and reduced overhead, duplication and rework.
“Lean and agile thinking guides teams to deliver in smaller increments and get early feedback. As a result, teams reduce cycle time by focusing only on those activities that maximize value based on feedback. Wasted effort is identified and eliminated, enabling teams to spend time on value-add activities, such as innovation and quality improvements.”
— Agile for Dummies, 2nd IBM Limited Edition, ibm.biz
Design thinking principles include:
- Focus on user outcomes, and drive business by helping customers achieve their goals
- Restless reinvention: stay essential by treating everything as a prototype
- Move faster by empowering diverse teams to be proactive
Design thinking provides a complementary set of principles and practices that fits very well with a DevOps approach. In the traditional model, developers are often the furthest removed from the customers. Design thinking reverses this by allowing developers to respond directly to customer feedback. A DevOps team that applies design thinking will focus on achieving their customers’ goals, delivering a quickly expanding minimum viable product based on customer feedback, and empowering team members to fail until they succeed.
<table>
<thead>
<tr>
<th>Process/Technology</th>
<th>Speeding continuous innovation of ideas</th>
<th>Enabling continuous delivery of these innovations</th>
<th>Providing a feedback loop for continuous learning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Agile</strong></td>
<td></td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>• Fast feedback cycles through early customer involvement</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Lean</strong></td>
<td></td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>• Value stream mapping</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Eliminate waste</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Design thinking</strong></td>
<td></td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>• Focus on delivering a delightful user experience</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Cloud operation</strong></td>
<td></td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>• Quick and flexible management of development, test and production environments</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Resilient and scalable</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Automation</strong></td>
<td></td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>• Removing the silos between development and IT operations</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Treat infrastructure as code</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Continuous delivery of changes</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Application Analytics</strong></td>
<td></td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>• Real-time insight on problems in production</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Insight on application usage</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Table 1. The role of process and technology in DevOps
Building DevOps on Amazon Web Services (AWS)
AWS provides a white paper describing the five pillars of a well-architected framework—security, reliability, performance efficiency, cost optimization, and operational excellence—and a set of best practices that align to those pillars. When designing your virtual data center, building and deploying applications in that data center, and managing those applications, you need to be cognizant of AWS’ best practices. To read the white paper, visit https://aws.amazon.com/whitepapers.
Properly leveraging AWS and third-party services can automate cloud operations, as well as application development lifecycle management. Support environments can be built as needed, and just as easily released. The number of code lines is no longer limited by the number of support environments.
Managing your virtual data center 24/7/365 and maintaining an experienced staff trained in the latest tools around operations requires an investment for work that does not necessarily provide a competitive difference. You can look to managed service providers to perform this work while your staff focuses on the portion of operations that adds value to your business.
**Automation**
**Continuous integration** is a DevOps practice where developers continuously commit their code changes into a source repository. Then, at regular intervals, the system will conduct an automated build, deploy, and test. **Continuous delivery** expands on continuous integration by automatically deploying code changes through the support environments, pausing for approval before going to production. **Continuous deployment** does not wait for approval; rather, it goes to production automatically after successfully passing through automated testing in the support environments.
You can use automation technology to build a target support environment that includes application, database and test servers, install and configure middleware and applications, and then execute automated testing. When the testing is completed, the environment can be released. AWS enables this level of automation through its tool sets and pay-as-you-go pricing model.
AWS also offers a set of code services that provide tools to developers to implement automation. Figure 3 shows these services and how they work together.
Application analytics
Understanding how an application is being used is valuable to both the business and technology teams. Amazon CloudWatch is a monitoring service for cloud resources and applications that run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and automatically react to changes. The service can also be used to gain system-wide visibility into resource utilization, application performance, and operational health.
Today’s analytics capabilities can go far beyond traditional monitoring. For instance, data can be captured about how customers interact with applications. The section on continuous learning later in the paper will touch on some of the tools available to help you gain insight into how your customers are interacting with your applications.
AWS CodeCommit is a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and servers running on-premises. AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests and deploys your code every time there is a code change, based on the release process models you define. AWS CodeStar enables you to quickly develop, build and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. AWS X-Ray is a distributed tracing service that helps developers analyze and debug distributed applications, and understand how their application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
Continuous innovation
Businesses are under tremendous pressure to create new value for their customers through innovation. However, they are finding that traditional approaches to software development and delivery are not sufficient to deliver the business innovation their customers expect. Manual development processes are error-prone, wasteful, and known to cause significant delays. Through proper application of new technology and the principles of continuous innovation, businesses can eliminate these manual tasks, and start delivering value like never before.
Continuous innovation means continuously developing new ideas into innovative software, which in turn, can continuously improve the value delivered to customers. IBM believes that DevOps is one of the primary means for achieving this sustained innovation.
In its conventional sense, DevOps refers to a closer collaboration between development and operations teams, and the integration of associated processes and tooling. In IBM’s point of view, DevOps is much more than that. We believe that DevOps should encompass collaboration among all stakeholders—not just between development and operations, but also among lines of business, suppliers involved in software delivery, and customers themselves. In this expanded definition, DevOps includes business governance practices around security and compliance, and all aspects of the delivery process, such as multi-sourcing.
Continuous delivery
The main goal of DevOps is to make delivery more efficient. Support environment availability and configuration is a roadblock that often interferes with achieving this goal. It is important to ensure that the support environment matches the production environment, as a mismatch can introduce significant quality issues. Additionally, changes to complex systems—even when componentized—can have unexpected results. Requirements, written or verbal, can be misinterpreted. Automating functional and non-functional testing, along with early feedback by stakeholders, is critical to maintaining quality. Deploying DevOps on AWS can help address these problems.
Let us start with a working definition of Infrastructure as Code (IaC): the process of managing and provisioning computing infrastructure and its configuration through machine-processable definition files, rather than the use of interactive configuration tools.
AWS’ machine-processable definition files are AWS CloudFormation templates. The templates access the same API as the AWS Console and the AWS Command Line Interface (CLI). The templates are JSON or YAML formatted text files that should be placed under normal source control. They are also parameterized, allowing the environments to differ in a controlled way. As an example, a dev environment could use a smaller Amazon EC2 instance (virtual server) than a performance or production environment.
Now, let us walk through the diagram in Figure 5 as an example of a code change being deployed to production. The environment being built on demand could be far more complex than in this example where there is a single application connecting to a data store. A developer is going to make a change to the application code.
1. The developer commits a change to the source control repository, AWS CodeCommit.
2. AWS CodePipeline detects the change, and rebuilds the components affected by the change.
3. AWS CodePipeline creates the dev environment using a parameterized AWS CloudFormation template. The template would also be stored in AWS CodeCommit.
4. AWS CodeDeploy is instructed by AWS CodePipeline to install and configure the applications in the dev environment.
5. AWS CodePipeline triggers automation testing of the dev environment.
6. AWS CodePipeline repeats the process for the QA environment and then pauses for user acceptance testing.
7. After approval, AWS CodePipeline uses the parameterized CloudFormation templates to build the new production compute environment, and AWS CodeDeploy to deploy the applications.
8. AWS CodePipeline triggers a brief automation test, and if the environment passes, switches the DNS server to the new compute servers. This is a release technique called blue-green deployment.
The AWS services described in this section are designed to work with standard industry software. For example, AWS CodeCommit is a Git repository. The service at the core of AWS’ Infrastructure-as-Code strategy is CloudFormation, which can use configuration management tools such as Chef, Puppet, or Ansible.
See the glossary at the end of this paper for a more detailed description of these services.
Figure 5. Deploying a code change in AWS
**Continuous learning**
As mentioned previously, understanding how an application is being used is valuable to both business and technology teams. In addition to Amazon CloudWatch, there are a variety of analytics solutions available to help you better understand how customers use your applications.
IBM Digital Analytics, formerly Coremetrics Web Analytics, is a platform for near real-time digital analytics, data monitoring, and comparative benchmarking. The solution allows you to track and analyze visitor behavior over time, across multiple touchpoints and channels, and deliver more personalized, relevant and effective information. It also allows you to optimize your web, mobile and social channels by monitoring critical data and key performance indicators in near real-time. With IBM Digital Analytics, you can uncover growth opportunities and areas for improvement.
IBM Tealeaf® is a family of products to improve visitor interactions with customer experience management solutions. For example, IBM Tealeaf CX provides visibility into web and mobile browsers, capturing data down to the individual session level, and analyzes the data to uncover trends and valuable insights. The solution can help you discover unexpected customer pathways through your applications, and areas where customers tend to struggle the most.
**IBM’s approach to DevOps in AWS**
IBM adapted its methodology to provide an engineered approach to implementing DevOps on AWS.
**Discovery workshop**
During a discovery workshop, IBM will help your team perform an assessment of your application delivery lifecycle management (ADLM) through value stream mapping. This is a lean strategy for analyzing the current “as is” state and designing the future “to be” state of ADLM.
The mapping looks at the full lifecycle documenting each step, milestone, and gate. Data about the effort in man hours and duration, as well as value, is considered. In the end, the discovery workshop looks to address bottlenecks in your pipeline. Some common examples include:
- Replacing ticket-based environment provisioning with cloud-hosted self-service
- Replacing weekly hands-on deployments to an integrated environment with more automated daily deployments
- “Shift left” integration testing to match more frequent deployments, while focusing on increasing automation testing
In the discovery workshop, you will review your existing application portfolio for automation readiness, and then select applications that can provide quick wins.
**Proof point**
Following the discovery workshop, IBM will help you gather baseline as-is metrics for environment management and application code delivery. Then, we will execute proof points on select applications by testing proposed remediation of bottlenecks such as automation of environment creation and code delivery. Finally, we will help you capture DevOps-enabled metrics, and then analyze.
**Optimize and expand**
Just as we apply DevOps principles to application development, we will be applying them here as well. Our goal at this stage is to help you learn from the proof points, onboard the next set of applications, and continue to monitor and measure impact.
**For more information**
To learn more about IBM Cloud Migration Services, visit us at ibm.com/cloud-computing/services/cloud-migration, or contact your IBM representative.
---
**Glossary**
**AWS CloudFormation**
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
This service is at the core of AWS' Infrastructure-as-Code strategy. When deploying AmazonEC2 instances, you can build a custom Amazon Machine Image (AMI), run user data scripts to build out the server, or use configuration management tools such as Chef, Puppet, or Ansible. These are third-party tools that integrate with AWS CloudFormation to help automate provisioning, managing, patching, and configuration management of the cloud infrastructure.
[https://aws.amazon.com/cloudformation/](https://aws.amazon.com/cloudformation/)
**Amazon CloudWatch**
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
[https://aws.amazon.com/cloudwatch/](https://aws.amazon.com/cloudwatch/)
**AWS CodeCommit**
AWS CodeCommit is a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories.
[https://aws.amazon.com/codecommit/](https://aws.amazon.com/codecommit/)
**AWS CodeDeploy**
AWS CodeDeploy is a service that automates application deployments to any instance, including Amazon EC2 instances and instances running on premises. AWS CodeDeploy allows you to launch and track the status of your application deployments through the AWS Console or CLI. AWS CodeDeploy is platform and language agnostic and works with any application. AWS CodeDeploy can also integrate with your existing software release process or continuous delivery toolchain (such as Jenkins).
https://aws.amazon.com/codedeploy/
**AWS CodePipeline**
AWS CodePipeline creates continuous delivery pipelines that track code changes from sources such as AWS CodeCommit, Amazon Simple Storage Service (S3), or GitHub. You can design your development workflow for checking in code, building the code, deploying your application into staging, testing it, and releasing it to production. You can use AWS CodePipeline as an end-to-end solution. With AWS CodePipeline, you can rapidly deliver features and updates with high quality through the automation of your build, test and release process. You can use AWS CodePipeline to automate the release of your Chef cookbooks and application code to AWS OpsWorks.
http://docs.aws.amazon.com/opsworks/latest/userguide/other-services-cp.html
https://aws.amazon.com/codepipeline/
**AWS Config**
AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. AWS Config Rules enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config.
https://aws.amazon.com/config/
**AWS Elastic Beanstalk**
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
You can simply upload your code and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application, and can access the underlying resources at any time.
https://aws.amazon.com/elasticbeanstalk/
**AWS OpsWorks**
AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application's architecture and the specification of each component, including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases, or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load, and dynamic configuration to orchestrate changes as your environment scales.
https://aws.amazon.com/opsworks/
**Blue-green deployment**
Blue-green deployment is a DevOps deployment practice that uses domain name services (DNS) to make application deployments. The strategy involves starting with an existing (blue) environment while testing a new (green) one. When the new environment has passed all the necessary tests and is ready to go live, you simply redirect traffic from the old environment to the new one via DNS.
About the authors
Daniel (Dan) Carr
Dan is the Global AWS Practice Leader at IBM Global Business Services. He is focused on delivering cloud transformations for IBM’s largest enterprise customers, where he brings extensive experience and expertise in enterprise architecture, DevOps, and cloud migrations.
KD Singh
KD is a Global Lead Partner Solution Architect with Amazon Web Services. He brings his expertise in cloud migrations, enterprise architecture, DevOps, and big data analytics to help AWS strategic partners build cloud offerings and implement digital transformations using the cloud.
Middleware Support for Ubiquitous Software Components
Didier Hoareau, Yves Mahéo
HAL Id: hal-00426229
https://hal.science/hal-00426229
Submitted on 23 Oct 2009
Middleware support for the deployment of ubiquitous software components
Didier Hoareau and Yves Mahéo
Valoria, University of South Brittany
Campus de Tohannic, 56017 Vannes, France
{Didier.Hoareau, Yves.Mahéo}@univ-ubs.fr
Abstract A number of emerging distributed platforms, such as dynamic and pervasive networks, may include fixed and robust workstations but are often built from mobile and resource-constrained devices. These networks are characterized by the volatility of their hosts and connections, which may lead to network fragmentation. Although increasingly common, they remain a challenging target for distributed applications. In this paper we focus on component-based distributed applications by addressing the distribution and the deployment of software components on dynamic pervasive networks. We present a distribution scheme and some associated middleware mechanisms that allow a component to provide its services in a ubiquitous way. First, an architecture description language extension is proposed in order to specify a deployment driven by constraints on the resources needed by components. Then, a propagative and autonomic deployment process is explained, which is based on a consensus algorithm adapted for dynamic networks. Lastly, implementation details and experiment results are given.
1 Introduction
1.1 Dynamic pervasive platforms
Over the last few years, new distributed platforms have emerged, often qualified as pervasive, that are no longer restricted to an interconnection of workstations forming a stable network. These platforms may still include powerful and robust machines, but they are mostly composed of resource-constrained and mobile devices (laptops, personal digital assistants or PDAs, smart-phones, sensors, etc.). Due to the mobility and the volatility of the devices involved, dynamism is one of their major characteristics. The dynamic network hence formed can be described as a partitioned network, viewed as a collection of independent islands. An island is equivalent to a connected graph of hosts that can communicate together, while no communication is possible between two islands. In addition, the configuration of the islands may change dynamically.
In this paper, we are interested in medium-size dynamic pervasive platforms. Figure 1 shows a simple example of such a dynamic network. It is composed of a number of hosts a user has access to and on which a distributed application is meant to be accessible. This set of hosts includes fixed and mobile machines. Connectivity is not ensured between all the hosts. Indeed, at home, the user's connection to the Internet is sporadic, and some of the devices are mobile (as such, they may become out of reach) and/or volatile (a PDA may, for example, be switched off frequently).
1.2 Ubiquitous applications
Although this kind of distributed platform is increasingly common, it remains a challenging target for building, deploying and maintaining distributed applications. The pervasiveness of the equipment should be reflected in the distributed application, leading to some form of ubiquitous application. Many applications would benefit from ubiquity in this context: enhanced classical applications such as PIM (Personal Information Management) or collaborative applications, but also envisioned applications in e-home or e-business. A ubiquitous application is supposed to render its services everywhere, or at least wherever it makes sense, accounting for the constraints of the hosting devices. For example, a PIM application is much more usable if
it offers its services on all the machines owned by a user, even if the entire application is not installed on each machine. It is not desirable, however, that the application be designed and administered as a collection of target-specific codes. Ubiquity must be made as transparent as possible. Of course, it may occur that some of the services are temporarily not available on a specific host (e.g., access to an up-to-date shared agenda from a PDA that is isolated from any network). In addition, some functionality may not be accessible everywhere due to a lack of resources (e.g., an extended graphical view on a device with a small display). We believe that a minimal set of mechanisms should be provided to implement this adaptation in order to reduce the complexity of the design and the administration of ubiquitous applications.
1.3 Ubiquitous components
Software components have proved to be useful for developing complex distributed applications, and many component models and their associated technologies are now available. In the component-based approach, the application is designed as an assembly of reusable components that can be bound in a versatile manner, possibly dynamically. Some of the proposed models are known as hierarchical models. They offer the possibility of creating high level components by composing components of lower abstraction level, which represents a software construction principle that is natural and expressive. In such models, a component—that is then called a composite component—can itself be an assembly of components, recursive inclusion ending with primitive components that encapsulate computing code.
Using a hierarchical component-based approach for building a ubiquitous application that targets a dynamic network seems an attractive solution. Yet, several problems remain that are not treated by available component models and component execution supports. In particular, the two following aspects have to be dealt with: (1) how to deploy a hierarchical component in a dynamic network while ensuring that this deployment respects the architecture of the application and adapts itself to the resource constraints imposed by the target platform? (2) how to allow a distributed execution of the components, i.e., to allow interactions between components in a not-always-connected environment?
1.4 Outline of our approach
This paper describes a distribution scheme for hierarchical components and its associated deployment process, targeting dynamic pervasive networks. Because of the very constrained environment in which the application is to be deployed, we can hardly envisage permanent access to the services offered by the application or an optimal utilization of the resources. The emphasis is put on finding a distribution scheme and deployment mechanisms that achieve a minimal availability while taking the environment into account.
The distribution scheme we propose is related to the hierarchical structure of the application. This scheme is based on the replication of composite components. Indeed, we allow a composite to be accessible on a set of hosts, although each primitive component is localized on a single host. Besides, we also allow a component to operate in a degraded mode in order to account for network disconnections without making the entire application unusable. The notion of active interface is added to the component model. Our runtime support detects network disconnections and deactivates some components’ interfaces accordingly. Introspection on the state (active or inactive) of an interface is possible so as to allow the development of adaptive components.
The deployment of a component covers several parts of its life-cycle. In this paper we focus on the last phases of the deployment, covering the instantiation of the component (which creates an executable instance from a component code), its configuration (which establishes the bindings to its interfaces) and its activation (which allows the other components to invoke its interfaces). The presented techniques should be complemented with component delivery mechanisms such as those described in [1].
The deployment of the hierarchy of components is specified in a constraint-based declarative way. The architecture descriptors of the components are augmented with deployment descriptors in which constraints on the resources required by components and on their possible location can be specified.
When the deployment is triggered, all the constraints listed in the deployment descriptor may not be satisfied immediately. The dynamism of the network makes the situation even more difficult, as it may occur that the set of hosts that would globally satisfy the deployment constraints is never connected together at the same time, precluding any deployment. Instantiation of some components and their activation is however
possible, as we allow the components to operate in a degraded mode through the dynamic management of interface activation. The deployment process we implement is thus a propagative process: the instantiation and the activation of a component are performed as soon as some resources that meet its needs are discovered. Moreover, as the resources needed by an already deployed component may become insufficient, the placement choice for a component can be called into question dynamically. The deployment process can thus be considered as autonomic. We propose an algorithm that supports this propagative and autonomic deployment. The scalability of the process is ensured by the distributed and hierarchical organisation of the control. Moreover, we implement a distributed consensus that guarantees that the location constraints are satisfied even in the context of a partitioned network.
The paper is organised as follows. In section 2, the model of hierarchical component we work on is presented and we explain how a hierarchy of components is distributed over a network. The concept of activation at the interface level is briefly exposed. In section 3 we give some details on the form of the deployment descriptor that complements the architecture description, we present the overall propagative and autonomic deployment process, and we detail the distributed instantiation algorithm that forms the basis of the distributed deployment. Section 4 briefly describes the status of the development of our prototype. After discussing related work in section 5 we conclude the paper in section 6.
2 Distributed Hierarchical Components
We describe in this section what we understand by distributed hierarchical components. The basic features of our component model are explained and we detail how the components are distributed over a network of hosts. Further details can be found in [2].
2.1 Hierarchical Component Model
In this paper, we consider a widely applicable hierarchical component model in which a composite component represents a more or less complex structure of interconnected components that can be used as a simple component with well-defined required and provided interfaces. Recursion stops with primitive components that correspond to computing units. Components are interconnected through bindings, each of which represents a link between a required interface and a provided interface. For practical reasons, we have chosen to base our development on the Fractal component model [3] and more precisely on its reference Java implementation, Julia. However, the concepts developed in this paper could easily be applied to other hierarchical component models such as Koala [4], Darwin [5] or Sofa [6].
The notion of composite component is often used at design time and is found in so-called architecture description languages (ADL) [7]. In the applicable framework we have chosen, it is however interesting to also be able to manipulate a composite at execution time in order to ease dynamic adaptation. Therefore the composite is reified at runtime namely by a membrane object that stores the interfaces of the component and its configuration (i.e. the list of its subcomponents and the bindings between these subcomponents).
2.2 Distribution Model
As mentioned in the introduction, we wish to deploy a hierarchy of components on a distributed platform that is characterized notably by its heterogeneity and the volatility of its hosts. The application components are distributed over a set of hosts. The way this placement is performed is detailed in section 3.2. We focus here on the description of the mechanisms allowing a distributed execution of hierarchical components.
In our approach, the architecture of a component is coupled to its placement and this relationship is dealt with differently for composite components than for primitive components. As far as distribution is concerned, a primitive component executes on one host whereas a composite can be physically replicated on a set of different hosts. The main goal of composite replication is that the component's interfaces become directly accessible on several hosts. A composite component can then be seen as providing a ubiquitous service.
A single host is associated with a primitive component whereas a set of hosts is associated with a composite component. This set must be a subset of the set of hosts associated with the including component. By default, the placement set of a composite component is inherited from the including component.
At execution time, each instance of a composite component maintains locally some information about the configuration of its subcomponents. Hence, a distributed composite component $c$ distributed over a set of hosts $H$ respects the following properties:
- The provided and required interfaces of $c$ are accessible on all the hosts $h_i$ of $H$.
- Let $c$ be a composite component that contains a primitive subcomponent $p$. There exists a single host $h_i$ on which $p$ executes. For every host $h_j \in H$ ($j \neq i$), there exists $c_j$, an instance of $c$ on $h_j$. Each $c_j$ holds a remote reference to $p$ (in a proxy).
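To make these properties concrete, the following minimal Java sketch models the placement bookkeeping only; all names are illustrative and are not part of the Fractal/Julia API. It shows how a replica of a composite reaches a primitive either locally or through a remote proxy, as stated by the two properties above.

```java
import java.util.*;

// Minimal sketch of the distribution model above (illustrative names only).
final class Placement {
    // A primitive component executes on exactly one host.
    final Map<String, String> primitiveHost = new HashMap<>();
    // A composite component (its membrane) is replicated on a set of hosts.
    final Map<String, Set<String>> compositeHosts = new HashMap<>();

    // A replica of a composite on 'host' reaches a primitive either locally
    // or through a remote proxy.
    String referenceTo(String primitive, String host) {
        String target = primitiveHost.get(primitive);
        return host.equals(target)
                ? "local instance of " + primitive
                : "remote proxy to " + primitive + " on " + target;
    }
}

class DistributionDemo {
    public static void main(String[] args) {
        Placement pl = new Placement();
        pl.compositeHosts.put("DocumentSearch", new HashSet<>(Arrays.asList("h1", "h2", "h3")));
        pl.primitiveHost.put("DocumentFinder", "h1");
        pl.primitiveHost.put("DocumentBuffer", "h2");
        for (String h : pl.compositeHosts.get("DocumentSearch")) {
            System.out.println(h + ": " + pl.referenceTo("DocumentFinder", h));
        }
    }
}
```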
2.3 Example
We give in this section an example of an application made of hierarchical components and we detail how it can be distributed on a given set of hosts.
Figure 2 depicts the architecture of a photo application that allows the user to search for a number of photos in a repository and to build a diaporama (a slideshow) with the selected photos. The top-level composite component (PhotoApp) includes a generic component devoted to document searching (DocumentSearch). This component is also a composite component (taken off the shelf); it is composed of a DocumentFinder and a DocumentBuffer. The primitive DocumentFinder component provides an interface for issuing more or less complex requests based on the names of the documents, on their subjects or on some other meta-information, and for selecting the corresponding documents from a given set of documents (a repository). The selected documents are passed to a DocumentBuffer. Apart from an interface for adding new documents, the primitive DocumentBuffer component provides an interface for sorting and extracting documents. This provided interface and the one of DocumentFinder are accessible as provided interfaces of the DocumentSearch component. Finally, the DocumentSearch component is bound to a PhotoRepository component, which constitutes the specialized document repository, and to a DiapoMaker component, which allows the selected photos to be assembled in a parameterizable diaporama.
Consider that the photo application is meant to be usable from any of the five machines owned by the user (hosts $h_1$ to $h_5$), in a dynamic network similar to the one depicted in figure 1. Hence, the target set of hosts associated with the PhotoApp component is \{$h_1$, $h_2$, $h_3$, $h_4$, $h_5$\}. A subset of these hosts is dedicated to the distributed execution of the composite component DocumentSearch, say \{$h_1$, $h_2$, $h_3$\}, $h_4$ and $h_5$ being excluded for licence reasons for example. Moreover, some constraints on the required resources result in the following placement of the primitive components (see section 3.2 for details): DocumentFinder on $h_1$, DocumentBuffer on $h_2$, PhotoRepository on $h_4$ and DiapoMaker on $h_5$.
At runtime the membranes of the composite components are maintained on each of their target hosts. A membrane contains the interfaces of the component as well as the description of its architecture (subcomponents and bindings). The instances of components (primitive or composite) that are not present are represented by proxies. Note that for a primitive component, the proxy is linked to the distant (single) instance of this primitive whereas for a composite component, the proxy is linked to one distant instance of the (partially replicated) membrane.
Figure 3 summarizes the placement of the components and shows the runtime entities (architectural information and instances) maintained on every host for our PhotoApp example.
2.4 Support for disconnections
The replication of a composite component eases access to the services it implements, as it permits the use of its provided interfaces on each host. However, because of network disconnections, access to a remote component can be interrupted from a given site. Consequently, a method invocation may then raise some kind of network exception. This problem is not specific to our approach but appears as soon as remote references are used, since they may point to components that become inaccessible at any time. In a context of hierarchical components, the technique that consists in deactivating a component as soon as one of its required interfaces is unbound is very penalizing, as a single disconnection would, by ricochet, end up deactivating the top-level component, that is, the entire application. In the dynamic environments we target, where disconnections are frequent, the application would then rarely be usable.
We address this problem in the following two ways:
- We introduce the notions of active and inactive interfaces. We maintain the state (active or not) of an interface according to the accessibility of the component instance it is bound to. Moreover, we add a control interface to components to allow introspection on the state of their provided and required interfaces.
- We allow the execution of a component even if some of its interfaces are not active.
In the PhotoApp example, if a disconnection occurs between $h_1$ and $h_4$, the PhotoRepository component is no longer accessible from $h_1$. The disconnection is detected by a dedicated monitor and, consequently, the required interface of the DocumentSearch component is deactivated. This triggers the deactivation of the corresponding required interface of the DocumentFinder and then of its provided interface. However, the second interface of DocumentSearch (the one bound to DiapoMaker) can remain active as the DocumentSearch component is still accessible. Globally, the application is still usable, although in a degraded mode, as diaporamas can still be built from the document buffer.
**Fig. 2:** Architecture of the photo application (in UML 2.0)

**Fig. 3:** Placement of components and entities maintained on hosts $h_1$ to $h_5$

Notice that this approach has an obvious impact on the programming style required when developing components, as the state of an interface should be tested before invoking methods on this interface. Indeed, the uncertainty of the accesses to needed (or required) services, which is inherent to the targeted dynamic platforms, enforces adaptable code. The provision of tools to introspect on the availability of the interfaces is a minimal answer that should be complemented by other facilities for describing or applying, for example, adaptation strategies. This involves research at the language level and the middleware level that is out of the scope of the presented work.
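As an illustration of this programming style, the short Java sketch below guards an invocation with a test on the interface state; `InterfaceStateController` and the interface name are hypothetical stand-ins, not the middleware's actual API.

```java
// Hypothetical controller interface; not the middleware's actual API.
interface InterfaceStateController {
    boolean isActive(String interfaceName);
}

class DiapoClient {
    private final InterfaceStateController state;
    private final Runnable buildDiaporama;   // stands for the real required interface

    DiapoClient(InterfaceStateController state, Runnable buildDiaporama) {
        this.state = state;
        this.buildDiaporama = buildDiaporama;
    }

    void makeDiaporama() {
        // Introspect on the interface state before invoking it, and degrade
        // gracefully instead of failing with a network exception.
        if (state.isActive("diapo-maker")) {
            buildDiaporama.run();
        } else {
            System.out.println("DiapoMaker unreachable: running in degraded mode");
        }
    }

    public static void main(String[] args) {
        DiapoClient c = new DiapoClient(name -> false,
                () -> System.out.println("building the diaporama"));
        c.makeDiaporama();   // prints the degraded-mode message
    }
}
```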
3 Deployment
3.1 Deployment specification
When considering the deployment of distributed components, the key issue is to build a mapping between the component instances and the hosts of the target platform. This task implies having some knowledge not only about the identity of the hosts involved in the deployment phase, but also about the characteristics of each of them. Moreover, for a hierarchical component-based application, every component instance at each level of the hierarchy has to be handled.
At design time, it is unlikely that the designer knows where to deploy each component with respect to resource availability. This motivates the need to defer this task to runtime. We propose to add a deployment aspect to an existing architecture description language (such as xAcme\textsuperscript{1} or [8]). This allows the description of the resource properties that must be satisfied by a machine for hosting a specific component.
We propose an extension to ADLs that makes it possible to describe the target platform in a declarative way. The language we propose is purely declarative and descriptive, and has an objective similar to that of the language described in [9]. It is not mandatory to give an explicit name or address of a target machine: the placement of components is mainly driven by constraints on the resources the target host should satisfy. The choice of the machine that will host a component is made automatically at runtime (during the deployment).
The description of the resources that the target platform must satisfy is defined in a deployment descriptor in which references to component instances (defined in the architecture descriptor) can be made. For each component, a deployment context is defined. Such a context lists all the constraints that a hosting machine has to satisfy. If these constraints are associated with a primitive component, one host will be authorized to instantiate this component whereas several hosts may be selected for hosting the membrane of a composite component, in accordance with our distribution model.
Two types of constraints can be defined in a deployment context: resource constraints and location constraints. Resource constraints allow hardware and software needs to be represented. Each of these constraints defines a domain value for a resource type that the target host(s) should satisfy. Location constraints are useful to drive the placement choice of a component if it occurs that more than one host is candidate.
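The sketch below gives a minimal, hypothetical Java model of such a deployment context, simply to make the two kinds of constraints concrete; it is not the descriptor format used by the middleware.

```java
import java.util.*;

// Hypothetical in-memory model of a deployment context: candidate targets
// (location constraints) and minimum resource requirements (resource constraints).
class DeploymentContext {
    final Set<String> targets = new HashSet<>();                    // allowed hosts (or variables)
    final Map<String, Integer> minimumResources = new HashMap<>();  // e.g. "memory" -> 256

    boolean acceptable(String host, Map<String, Integer> hostResources) {
        if (!targets.isEmpty() && !targets.contains(host)) return false;
        for (Map.Entry<String, Integer> need : minimumResources.entrySet()) {
            if (hostResources.getOrDefault(need.getKey(), 0) < need.getValue()) return false;
        }
        return true;
    }
}
```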
An example of the use of resource and location constraints is given in Figure 4, which shows the deployment descriptor, in an XML notation, of the photo application introduced in the previous section. Descriptor (a) contains the constraints associated with the DocumentSearch composite component and descriptor (b) contains those of the PhotoApp component. Resource constraints are defined within the resource-constraint element. For every component, adding an XML tag corresponding to a resource type (e.g., cpu, memory) specifies a constraint on this resource that the target host has to satisfy.
Location constraints are declared within the location-constraint element. The target element defines the set of hosts among which our runtime support will have to choose. Hosts can be represented in two ways: (1) by their hostname, if their identities are known before the deployment, or (2) by a variable. A variable name can be used at the composite level to control the placement of the components. This feature is achieved by the use of the operator element, which allows relations between variables to be expressed. For example, in descriptor (a), the DocumentFinder component is said to be deployed on host $x$ and DocumentBuffer on host $y$. Constraining DocumentFinder and DocumentBuffer to be on two distinct hosts is achieved by using the alldiff operator, which declares $x$ to be different from $y$. For a primitive component, at most one variable can be declared (because a primitive component will be placed on a unique host). Several variables can be used for a composite component, which is physically distributed over several hosts.
When composing the application, it is possible to use only variables. Then, the definition of the target platform is made at the first level of the hierarchy (for the component PhotoApp in the example) by adding the list of the machines that will be involved in the deployment (lines 71–75 in Figure 4). During the deployment, as detailed in the next section, this set of machines, together with the location constraints, will be inherited by the subcomponents.
\textsuperscript{1}xAcme: Acme Extensions to xArch, http://www-2.cs.cmu.edu/~acme/pub/xAcme/
3.2 Deployment process
3.2.1 Overview
When the architecture descriptor and the deployment descriptor are defined, the deployment phase we consider in this article consists in choosing one (or several) target host(s) for every component of the architecture. This selection has to be done in accordance with the deployment context associated with the components: the target hosts must satisfy the resource constraints and must not contradict the location constraints. Depending on the resources that are available on the machines of the network, more than one machine can be a candidate for hosting a component: for a primitive component, only one host has to be selected, whereas for a composite component, according to our distribution scheme, several hosts can be chosen. It is therefore required to control the placement of components. Indeed, we have to guarantee that two islands of machines do not make inconsistent decisions (e.g., instantiating the same primitive component twice).
Because of the dynamism of the network on which we deploy our applications, it is not possible to base a deployment on a full connection of the different hosts. We are interested in a deployment that allows an application to be activated progressively, that is, part of its provided services can be made available even if some machines, required by the not-yet-installed components, are not available. As soon as these machines become connected, the deployment goes along. Moreover, the progression of the deployment is driven not only by the accessibility of a newly connected machine but also by resource changes on any host. This deployment is therefore qualified as propagative.
However, in the kind of dynamic network we target, when a component is installed and instantiated, the resources it requires may also disappear or become unavailable. A redeployment is then mandatory. The autonomic deployment consists in reconsidering the placement choices that have been made in the propagative phase in order to take into account the unavailability of resources.
The main difficulty of such a deployment in a pervasive network is to guarantee the unicity of the instantiations defined in the architecture descriptor. On the one hand, a host that represents a composite component cannot be selected before the deployment, as it could be in a fully connected network, since this machine may not be connected. On the other hand, if we let each of the machines that host the same replicated composite component make a decision, we cannot guarantee that two different islands will not perform contradictory instantiations.

**Fig. 4:** Deployment descriptors (in XML notation) of the photo application: (a) the DocumentSearch composite component, (b) the PhotoApp component
In the following, we present the autonomic deployment in two steps. First, we detail the propagative deployment; then, we present the mechanisms that make this deployment autonomic.
3.2.2 Propagative deployment
When the deployment is launched from an initial machine, the deployment descriptor and the architecture descriptor are diffused to all the machines that are listed at the top level of the application (with the XML target element). Then, each machine that receives these descriptors launches a recursive process (i.e., for each composite component) in order to select the components that can be deployed (instantiated) locally. The main steps of this process for a host $h_i$ and for a composite component $C$ are the following (a simplified sketch of this per-host loop is given after the list):

1. $h_i$ checks if it belongs to the set of target hosts associated with $C$ (see the XML target element). If $h_i$ is not concerned by the deployment (instantiation), the process returns for this component; else,
2. host $h_i$ launches probes corresponding to the resource constraints of every subcomponent of $C$ (e.g., a probe for memory observation). For each subcomponent for which the probes have returned a value compatible with the resource constraints, $h_i$ declares itself as a candidate for hosting this component.
3. $h_i$ also receives the other candidatures. As soon as $h_i$ has computed a solution based on these candidates, it tries to have it adopted via a consensus algorithm.
4. Once the consensus has completed, i.e., once a majority of machines has decided whether or not to confirm the placement solution of $h_i$, this piece of information (which contains the values of the free variables) is sent to the other machines (and therefore to the other applicants), which stop the process for each component they are not authorized to instantiate;
5. otherwise, for each subcomponent that can be instantiated on $h_i$, the process starts again at step 1.
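The following Java sketch condenses steps 1 to 5 into a single recursive routine. `Probe`, `Consensus` and the descriptor maps are hypothetical placeholders standing in for the middleware's real probing and consensus mechanisms.

```java
import java.util.*;

// Simplified sketch of the per-host propagative deployment loop (steps 1-5).
class PropagativeDeployer {
    interface Probe { boolean satisfies(String resourceConstraint); }
    interface Consensus { boolean adopt(String component, String candidateHost); }

    private final String localHost;
    private final Probe probe;
    private final Consensus consensus;

    PropagativeDeployer(String localHost, Probe probe, Consensus consensus) {
        this.localHost = localHost;
        this.probe = probe;
        this.consensus = consensus;
    }

    // Recursively tries to instantiate the subcomponents of a composite.
    void deploy(String composite,
                Map<String, Set<String>> targetHosts,        // component -> allowed hosts
                Map<String, List<String>> resourceNeeds,     // component -> resource constraints
                Map<String, List<String>> subcomponents) {   // composite -> children
        // Step 1: only hosts listed in the target element take part.
        if (!targetHosts.getOrDefault(composite, Set.of()).contains(localHost)) return;

        for (String child : subcomponents.getOrDefault(composite, List.of())) {
            // Step 2: probe local resources and apply as a candidate if they fit.
            boolean candidate = resourceNeeds.getOrDefault(child, List.of())
                                             .stream().allMatch(probe::satisfies);
            // Steps 3-4: a consensus round decides which candidate wins.
            if (candidate && consensus.adopt(child, localHost)) {
                System.out.println(localHost + " instantiates " + child);
                // Step 5: recurse into the newly instantiated (composite) child.
                deploy(child, targetHosts, resourceNeeds, subcomponents);
            }
        }
    }
}
```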
Since resources may fluctuate (e.g., become available and unavailable), the discovery mechanisms (step 2) are run periodically. Moreover, it may happen that no solution exists (step 3), that is, that no combination of candidates satisfies the location constraints. The periodic observation of resources allows a machine to apply for the instantiation of a specific (not yet installed) component as soon as its resource constraints are verified, potentially allowing the emergence of a new solution for the location constraints.

The propagative deployment requires a distributed algorithm in order to make a collective decision (step 3). This is achieved thanks to a consensus algorithm on the identity of the machines that apply for the instantiation of a component. This algorithm is detailed in the next section.

The placement information is diffused to the other machines (step 4) by updating the deployment descriptor with the new values, i.e., the names of the machines that are selected for hosting each component. Indeed, before the deployment, the location of a component can be defined, without any knowledge of the identity of a specific host, through the use of variables. For example, if hosts ambika and dakini are chosen respectively for the DocumentFinder and DocumentBuffer components, the following lines are modified in the deployment descriptor:
```xml
<!-- replace line 12 by: -->
<target varname="x" value="ambika"/>
<!-- replace line 24 by: -->
<target varname="y" value="dakini"/>
```
3.2.3 From a propagative deployment to an autonomic deployment
**Principle** The propagative deployment allows a component-based application to be deployed as soon as its required resources become available. But, in general, and especially in the kind of network we target, resources can also become unavailable (e.g., the amount of free memory demanded may decrease and become insufficient) and faults may happen. In these cases, one or several components have to be redeployed. This redeployment can be divided into three steps:

1. Each of the components that depend on the unavailable resource is stopped, yielding the deactivation of its provided interfaces. All the (remote or local) required interfaces bound to these become inactive. Thus, all the interfaces leading to this component will be deactivated, one after the other. The application then runs in a degraded mode.
2. The state of the component is saved in a serializable form (we assume that the developer has anticipated this situation).
3. A message holding the identity of the component to redeploy is diffused. This message also con-
In order to prevent this situation, we allow a newly connected machine to participate in the consensus. This is achieved by periodically broadcasting a message asking if a consensus is still in progress. In this case, the newly connected machine collects the data that have already been exchanged between the other machines and proposes a value that can make the consensus evolve.
4 Implementation status and results
4.1 Component distribution
We have implemented a middleware support for hierarchical distributed components by extending Julia [3], a Java implementation of the Fractal component model. Active interfaces have been realized through the addition of a new controller (cubik-controller) to the primitive and composite components. This controller is in charge of keeping the state of the required and provided interfaces up to date. The cubik-controller prevents method invocations on inactive interfaces by reifying method invocations (using the Julia Meta-CodeGenerator). We propose an API that makes it possible to use specific strategies when an interface is inactive: for example, one can wait for the reactivation of the interface.
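The sketch below illustrates the general idea of a reified invocation guarded by the interface state, with a simple wait-for-reactivation strategy; it uses made-up class names and is not the actual cubik-controller API.

```java
// Illustrative sketch of a guarded (reified) invocation; not the real API.
class GuardedInterface {
    private boolean active = true;

    synchronized void deactivate() { active = false; }
    synchronized void activate()   { active = true; notifyAll(); }

    // Run the call only when the interface is active, waiting up to
    // timeoutMs for a possible reactivation; otherwise drop the call.
    synchronized void invoke(Runnable call, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!active && System.currentTimeMillis() < deadline) {
            wait(Math.max(1, deadline - System.currentTimeMillis()));
        }
        if (active) {
            call.run();
        } else {
            System.out.println("interface still inactive: invocation dropped");
        }
    }
}
```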
The support for managing active and inactive interfaces relies on the mixin mechanisms offered by Julia that allow code insertion in the membrane of the component. It is thus possible to take into account this kind of interface in any application implemented with Julia without any code modification. The components are then endowed with an API for discovering the state of the interfaces (active or not) and the dependencies between interfaces.
4.2 Context-awareness
The deployment that has been presented in this paper relies on the discovery of the resources required by the components. Thanks to Draje (Distributed Resource-Aware Java Environment) [11], an extensible Java-based middleware developed in our team, hardware resources (e.g., processor, memory, network interface) or software resources (e.g., process, socket, thread, directory) can be modeled and observed in a homogeneous way. For every resource constraint of the deployment descriptor, a corresponding resource in Draje is created and a periodic observation is launched.
Moreover, Draje has been extended by adding two new types of resources: the RemoteBinding and NetworkLink resources. A NetworkLink resource models the physical link between two hosts and maintains some information about the state of the network connection. A RemoteBinding resource subscribes to a NetworkLink
in order to construct the state of a binding between two remote components. Thus, thanks to a simple notification system, when a disconnection (resp. reconnection) occurs at the network level between two machines, the state of the bindings is updated and the corresponding interfaces of components are deactivated (resp. activated).
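A minimal sketch of this notification chain is shown below; the class names mirror the resource names mentioned above but are illustrative only, not the Draje API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the notification chain: a network-level link notifies the bindings
// that depend on it, which in turn (de)activate the corresponding interfaces.
class NetworkLink {
    private final List<RemoteBinding> subscribers = new ArrayList<>();
    void subscribe(RemoteBinding b) { subscribers.add(b); }
    void setConnected(boolean connected) {
        for (RemoteBinding b : subscribers) b.onLinkState(connected);
    }
}

class RemoteBinding {
    private final String interfaceName;
    RemoteBinding(String interfaceName) { this.interfaceName = interfaceName; }
    void onLinkState(boolean connected) {
        System.out.println(interfaceName + (connected ? " activated" : " deactivated"));
    }
}

class MonitorDemo {
    public static void main(String[] args) {
        NetworkLink h1h4 = new NetworkLink();                       // link between h1 and h4
        h1h4.subscribe(new RemoteBinding("DocumentSearch.repository"));
        h1h4.setConnected(false);   // disconnection detected -> interface deactivated
        h1h4.setConnected(true);    // reconnection -> interface reactivated
    }
}
```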
4.3 Deployment resolution
The deployment process presented is based on a constraint language to describe the placement of the components according to some conditions on resources. This language is purely declarative. It has been implemented with FractalADL and is supported at run-time by a constraint engine developed with Cream\(^2\). Cream is a Java library for writing and solving constraint satisfaction problems or optimization problems. Thanks to this library, information about candidates and about the state of the local resources can be “told” to a store. This store is then used in order to get a location placement solution or to detect a constraint inconsistency (e.g., the amount of free memory required is no longer available).
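To illustrate what the constraint engine has to solve, here is a deliberately naive, brute-force placement search in plain Java over the candidatures, enforcing an alldiff location constraint. The real implementation delegates this resolution to the Cream library, so this code is only a simplified stand-in.

```java
import java.util.*;

// Brute-force stand-in for the placement resolution step: pick one host per
// primitive among its candidates, with no two primitives on the same host.
class PlacementSolver {
    // candidates maps each primitive to the hosts whose probes matched.
    static Map<String, String> solve(Map<String, List<String>> candidates) {
        List<String> comps = new ArrayList<>(candidates.keySet());
        return search(comps, 0, candidates, new LinkedHashMap<>());
    }

    private static Map<String, String> search(List<String> comps, int i,
                                              Map<String, List<String>> candidates,
                                              Map<String, String> chosen) {
        if (i == comps.size()) return new LinkedHashMap<>(chosen);
        String comp = comps.get(i);
        for (String host : candidates.get(comp)) {
            if (chosen.containsValue(host)) continue;        // alldiff on hosts
            chosen.put(comp, host);
            Map<String, String> sol = search(comps, i + 1, candidates, chosen);
            if (sol != null) return sol;
            chosen.remove(comp);
        }
        return null;                                         // no consistent placement
    }

    public static void main(String[] args) {
        Map<String, List<String>> candidates = new LinkedHashMap<>();
        candidates.put("DocumentFinder", List.of("h1", "h2"));
        candidates.put("DocumentBuffer", List.of("h2"));
        System.out.println(solve(candidates));   // e.g. {DocumentFinder=h1, DocumentBuffer=h2}
    }
}
```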
4.4 Performance evaluation
The performance of the deployment process depends on multiple parameters imposed by the execution environment (disconnections, fluctuation of the resources, volatility of the hosts, etc.).
In a preliminary experiment, we have tried to isolate the impact of the implementation of our consensus from connectivity conditions. This experiment has hence been conducted on a (fully connected) 100 Mb/s Ethernet network of workstations (2 GHz Pentium 4). It dealt with the deployment of a component whose deployment descriptor is similar to that of the DocumentSearch component described in section 3. Figure 5 shows the time taken by our algorithm to decide on a placement solution as a function of the number of machines involved in the deployment. First (curve 1), we limited to one the number of machines that apply for hosting a component. Then we considered concurrent applicants, with 5 and 8 simultaneous candidatures (curves 2 and 3).

This experiment allowed us to verify that the time to obtain a placement solution remains acceptable and that the multiplicity of simultaneous consensus executions does not incur a prohibitive overhead.
We are currently investigating the connection of our middleware support to a mobility simulator so as to emulate more realistic executions.
\(^{2}\)http://kurt.scitec.kobe-u.ac.jp/~shuji/cream/
5 Related Work
The main aspects developed in this paper are related to a distribution scheme for hierarchical components on dynamic networks and to an automatic management of their deployment which is driven by constraints on resources that the machines of the network have to satisfy.
Many works have taken context-aware deployment into account, that is, the placement of components onto hosts according to some resource requirements. A formal statement of the deployment problem is given in [12], along with a set of algorithms that improve the availability of mobile systems. In [13] the authors propose a deployment configuration language (DCL) in which properties of the target hosts can be expressed. The deployment considered in this work extends the CORBA Component Model, which is a flat component model.
In [9], the authors present the Deladas language that also allows constraints to be defined on hosts and components. A constraint solver is used to generate a valid configuration of the placements of components and reconfiguration of the placement is possible when a constraint becomes inconsistent. But this centralized resolution is not suited to the kind of dynamic network we target. Moreover, the current version of Deladas does not consider resource requirements.
The abovementioned works aim at finding an optimum for the component placement problem. This is not one of our objectives: due to the dynamism of the environment, it is hardly feasible to reach a quiescent state that would allow our consensus algorithm to decide on an optimal placement. Moreover, the solutions proposed are centralized.

In [14] a decentralized redeployment is presented. The configuration to be deployed is available on every host involved in the deployment. A local decision can then be made according to the local subsystem configuration state. However the choice of the components’ location is made before the deployment process.
The work presented in [15] deals with the deployment of hierarchical component-based applications. The authors describe an asynchronous deployment and use the hierarchical structure of the application in order to distribute deployment tasks. In the solution developed by the authors, a deployment controller is statically chosen and defined in the deployment descriptor. In our approach we could not decide at design-time which machine will host such a controller. Besides, the approach proposed by the authors focuses on functional constraints and thus resource requirements have not been taken into account.
Among the works on autonomic computing, [16] and [17] are based on autonomic entities—the components—to define autonomic systems. Changes in the environment are performed locally by every component that is responsible for its own reconfiguration, update, migration etc. However, the deployment of autonomic systems and the management of architectural consistency are not explicit.
6 Conclusion
This paper has presented a middleware support for deploying and executing an application built with ubiquitous hierarchical components on a heterogeneous and dynamic network. The main contribution of this work is that it attempts to take into account a challenging distributed target platform characterized by the heterogeneity and the volatility of the hosts, a volatility that may result in the fragmentation of the network.
A distribution method has been proposed for hierarchical components. Composite components are ubiquitous in the sense that they are made available on a set of hosts whereas each primitive component is localized on a single host. Besides, via the notion of active interface, we allow a component to operate in a degraded mode in order to account for network disconnections without making the entire application unusable.
Our proposal for supporting the deployment covers the last phases of the deployment process, namely the instantiation of the components and their activation, which is handled through the individual activation of their interfaces. We have presented a purely descriptive language for specifying deployment descriptors that allows for a context-aware deployment. This language is meant to extend existing ADLs. A deployment descriptor describes the resource needs of a component and some placement constraints.
An algorithm allowing an autonomic deployment of a component-based application has been proposed. The instantiation and the activation of a component are performed as soon as some resources that meet its needs are discovered. This early activation, which defines the propagative deployment phase, is possible because some of the component's interfaces can remain inactive (the component then executes in a degraded mode). When a constraint attached to a component becomes inconsistent, its redeployment is performed automatically by going back to the propagative deployment phase. The autonomic deployment is based on a consensus algorithm in order to guarantee the consistency (in terms of components' instances) of the deployed architecture, even in the context of a partitioned network.
References
---
Package ‘procoil’
June 25, 2020
Type Package
Title Prediction of Oligomerization of Coiled Coil Proteins
Version 2.16.0
Date 2016-08-31
Depends R (>= 3.3.0), kebabs
Imports methods, stats, graphics, S4Vectors, Biostrings, utils
Suggests knitr
Author Ulrich Bodenhofer
Maintainer Ulrich Bodenhofer <[email protected]>
Description The package allows for predicting whether a coiled coil sequence
(amino acid sequence plus heptad register) is more likely to form
a dimer or more likely to form a trimer. Additionally to the
prediction itself, a prediction profile is computed which allows
for determining the strengths to which the individual residues
are indicative for either class. Prediction profiles can also
be visualized as curves or heatmaps.
License GPL (>= 2)
Collate AllClasses.R access-methods.R show-methods.R plot-methods.R
URL http://www.bioinf.jku.at/software/procoil/
https://github.com/UBod/procoil
VignetteBuilder knitr
LazyLoad yes
LazyData yes
biocViews Proteomics, Classification, SupportVectorMachine
git_url https://git.bioconductor.org/packages/procoil
git_branch RELEASE_3_11
git_last_commit e5d3da6
git_last_commit_date 2020-04-27
Date/Publication 2020-06-24
R topics documented:
- procoil-package
- CCModel-class
- CCModel-FileOps
- CCProfile-class
- plot-methods
- predict-methods
- Index
Description
The package allows for predicting whether a coiled coil sequence (amino acid sequence plus heptad register) is more likely to form a dimer or more likely to form a trimer. Additionally to the prediction itself, a prediction profile is computed which allows for determining the strengths to which the individual residues are indicative for either class. Prediction profiles can also be visualized as curves or heatmaps.
Details
The package defines two S4 classes, CCModel and CCProfile. The former’s purpose is to represent a coiled coil prediction model. The default model PrOCoilModel is pre-loaded when the package is loaded. An alternative model PrOCoilModelBA is also available. Other models can be loaded with the function readCCModel. The predict function is used to predict the oligomerization of one or more coiled coil sequences (which consist of amino acid sequences and heptad registers aligned to them). The result is stored in a CCProfile object. The resulting prediction profile can be visualized with plot.
Author(s)
Ulrich Bodenhofer <[email protected]>
References
http://www.bioinf.jku.at/software/procoil/
Examples
```r
## display summary of default model
PrOCoilModel

## predict oligomerization of GCN4 wildtype
GCN4wt <- predict(PrOCoilModel,
                  "MKQLEDKVEELLSKNYHLENEVARLKKL",
                  "abcdefgabcdefgabcdefgabcdefg")

## display result
GCN4wt

## plot profile
plot(GCN4wt)

## predict oligomerization of unknown sequence (Marcoil example)
MarcoilEx <- predict(PrOCoilModel,
                     "MGECDQLLVPMIT5RVL5LSTLIMDSROVYLENLRQFAENLRQNIENVHSFLNLRADLENLRQKFGKWSAMPGRHG",
                     "---------------------------------------------------------------abcdefgabcdefgabcdefgabcdefgabcdefg-------------------------")

## display result
MarcoilEx

## plot profile
plot(MarcoilEx)
```
---
**CCModel-class**
Class "CCModel"
**Description**
S4 class representing a coiled coil prediction model
**Objects from the Class**
In principle, objects of this class can be created by calls of the form `new("CCModel")`, although it is probably never necessary to create such an object from scratch - and not advised either. The default model is stored in the object PrOCoilModel. An alternative model, PrOCoilModelBA, that is optimized for balanced accuracy is available too (see below). Custom models can be loaded from files using the function `readCCModel`.
**Discriminant function of model**
Given a new coiled coil sequence $x$ and a model, the discriminant function of the model is given as
$$f(x) = b + \sum_{p \in P} N(p, x) \cdot w(p),$$
where $b$ is a constant offset, $N(p, x)$ denotes the number of occurrences of pattern $p$ in sequence $x$, and $w(p)$ is the weight assigned to pattern $p$. $P$ is the set of all patterns contained in the model. In the models used in the procoil package, the weights are computed from a support vector machine. Models can include kernel normalization or not. The formula above refers to the variant without kernel normalization. If kernel normalization is employed, the weights are computed in a different way and the discriminant function changes to
$$f(x) = b + \sum_{p \in P} \frac{N(p, x) \cdot w(p)}{R(x)},$$
where $R(x)$ is a normalization value depending on the sample $x$. It is defined as follows:
$$R(x) = \sqrt{\sum_{p \in P} N(p, x)^2}$$
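For illustration, with made-up counts, suppose a sequence $x$ contains only two of the model's patterns, $p_1$ and $p_2$, with $N(p_1, x) = 2$ and $N(p_2, x) = 1$; then

$$R(x) = \sqrt{2^2 + 1^2} = \sqrt{5} \approx 2.236, \qquad f(x) = b + \frac{2\,w(p_1) + w(p_2)}{\sqrt{5}}.$$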
The procoil package does not consider arbitrary patterns, but only very specific ones: pairs of amino acids at fixed register positions with no more than a maximum number $m$ of residues in between. Internally, these patterns are represented as strings with an amino acid letter in the first position, then a certain number of wildcards (between 0 and $m$ as noted above), then the second amino acid letter, followed by an aligned register string with the same number of wildcards and the letters 'a'-'g' denoting the heptad register positions of the first and last amino acid, e.g. "N..La..d". This pattern matches a coiled coil sequence if the sequence has an 'N' (asparagine) at an 'a' position and an 'L' (leucine) at the next 'd' position. For instance, the GCN4 wildtype has one occurrence of this pattern:
```
MKQLEDKVEELLSKNYHLENEVARLKKL
abcdefgabcdefgabcdefgabcdefg
              N..L
              a..d
```
**Slots**
- **b:** Object of class numeric the value $b$ as described above
- **m:** Object of class integer the value $m$ as described above
- **scaling:** Object of class logical indicating whether the model should employ kernel normalization
- **weights:** Object of class matrix storing all pattern weights; the matrix in this slot is actually consisting of only one row that contains the weights. The patterns are stored in column names of the matrix and encoded in the format described above
**Methods**
- **predict** signature(object = "CCModel"): see predict
- **show** signature(object = "CCModel"): displays the most important information stored in the CCModel object, such as kernel parameters and a summary of the weights.
- **weights** signature(object="CCModel"): returns the weights stored in object as a named numeric vector.
**Default model** PrOCoilModel
The procoil package provides a default coiled coil prediction model, PrOCoilModel. The model was created with the kebabs package [Palme et al., 2015] using the coiled coil kernel with $m = 5$, $C = 2$, and kernel normalization on the BLAST-augmented data set. It is optimized for standard (unbalanced) accuracy, i.e. it tries to minimize the probability of misclassifications. Since dimers are more frequent in the data set, it slightly favors dimers for unknown sequences.
Note that this is not the original model as described in [Mahrenholz et al., 2011]. The models have been re-trained for version 2.0.0 of the package using a newer snapshot of PDB and newer methods. The original models are still available for download and can still be used if the user wishes to. For detailed instructions, see the package vignette.
**Alternative model** PrOCoilModelBA
As mentioned above, the default model PrOCoilModel slightly favors dimers. This may be undesirable for some applications. For such cases, an alternative model PrOCoilModelBA is available that is optimized for balanced accuracy, i.e. it tries not to favor the larger class (dimers), but may therefore prefer trimers in borderline cases. The overall misclassification probability is slightly higher for this model than for the default model PrOCoilModel.
The model PrOCoilModelBA was created with PSVM [Hochreiter and Obermayer, 2006] using the coiled coil kernel with $m = 8$, $C = 8$, $\varepsilon = 0.8$, class balancing, and kernel normalization on the PDB data set (i.e. without BLAST augmentation). The same applies as for PrOCoilModel: this model has been re-trained for package version 2.0.0.
For detailed instructions how to use the original models, see the package vignette.
Author(s)
Ulrich Bodenhofer <[email protected]>
References
http://www.bioinf.jku.at/software/procoil/
See Also
predict-methods
Examples
showClass("CCModel")
## show summary of default model (optimized for accuracy)
PrOCoilModel
## show weight of pattern "N..La..d"
weights(PrOCoilModel)["N..La..d"]
## show the 10 patterns that are most indicative for trimers
## (as the weights are sorted in descending order in PrOCoilModel)
weights(PrOCoilModel)[1:10]
## predict oligomerization of GCN4 wildtype
GCN4wt <- predict(PrOCoilModel,
"MKQLEDKVEELLSKNYHLENEVARLKLVL",
"abcdefgabcdefgabcdefgabcdefgabcdefg")
## show summary of alternative model (optimized for balanced accuracy)
PrOCoilModelBA
## show weight of pattern "N..La..d"
weights(PrOCoilModelBA)["N..La..d"]
## show the 10 patterns that are most indicative for trimers
## (as the weights are sorted in descending order in PrOCoilModelBA)
weights(PrOCoilModelBA)[1:10]
CCModel-FileOps
Reading and writing of coiled coil prediction model from/to files
Description
Functions for reading a coiled coil prediction models from a file into a `CCModel` object and writing a `CCModel` object to a file.
Usage
```
readCCModel(file)
writeCCModel(object, file)
```
Arguments
- `file`: the name of the file from which `readCCModel` should read the model / the name of the file to which `writeCCModel` should write the model
- `object`: the `CCModel` object that `writeCCModel` should write to a file
Details
The `procoil` package comes with two ready-made models for oligomerization prediction, `PrOCoilModel` and `PrOCoilModelBA`. In case the user wants to define custom models or wishes to use previous versions of the prediction models, the functions `readCCModel` and `writeCCModel` can be used to read/write models from/to plain text files that can be viewed and also modified.
`writeCCModel` writes models in the following format:
```
_b,-1.07262284445085
_m,5
.scaling,1
L...Vd...a,1.63626232200227
R....Eg....e,1.5382098040217
R.Ec.e,1.29025032360792
E..Ve..a,1.22837780239385
...
```
Correspondingly, `readCCModel` expects the file to conform to the above format. See `CCModel` for an overview of model parameters and an explanation of patterns and weights.
Value
Upon successful completion, `readCCModel` returns a `CCModel` object. `writeCCModel` returns an invisible `NULL`.
Note
The PrOCoil model is available on [http://www.bioinf.jku.at/software/procoil/PrOCoilModel_v2.CCM](http://www.bioinf.jku.at/software/procoil/PrOCoilModel_v2.CCM) in exactly the format the function `readCCModel` requires. Analogously for the alternative model optimized for balanced accuracy (see `CCModel`): [http://www.bioinf.jku.at/](http://www.bioinf.jku.at/)
The original models described in [Mahrenholz et al., 2011] are available on [http://www.bioinf.jku.at/software/procoil/PrOCoilModel_v1.CCModel](http://www.bioinf.jku.at/software/procoil/PrOCoilModel_v1.CCModel) and [http://www.bioinf.jku.at/software/procoil/PrOCoilModelBA_v1.CCModel](http://www.bioinf.jku.at/software/procoil/PrOCoilModelBA_v1.CCModel), respectively. So, by loading one of these files, the original models can still be used.
Author(s)
Ulrich Bodenhofer <[email protected]>
References
[http://www.bioinf.jku.at/software/procoil/](http://www.bioinf.jku.at/software/procoil/)
See Also
procoil, CCModel-class
Examples
```r
## load small example model file for testing purposes
## NOTE: this is an incomplete model that will probably not provide
## meaningful predictions
file <- system.file("examples", "testModel.CCModel", package="procoil")
testModel <- readCCModel(file)
testModel
## Not run:
## read original model from file
URL <- "http://www.bioinf.jku.at/software/procoil/PrOCoilModel_v1.CCModel"
PrOCoilModelV1 <- readCCModel(URL)
## display summary of example model
PrOCoilModelV1
## display 10 highest pattern weights
weights(PrOCoilModelV1)[1:10]
## End(Not run)
```
---
**CCProfile-class**
Class "CCProfile"
Description
S4 class for representing coiled coil prediction results
Objects from the Class
In principle, objects of this class can be created by calls of the form new("CCProfile"), although it is not advised to do so. Most importantly, the predict function returns its results in objects of this type.
Slots
This class extends the class PredictionProfile from the kebabs package directly and therefore inherits all its slots and methods. Additionally, the following slots are defined for CCProfile objects:
- **disc**: Object of class numeric containing the discriminant function value(s) (see CCModel for details)
- **pred**: Object of class factor containing the final classification(s). Upon a call to predict, it is either “trimer” or “dimer”.
Prediction profiles
As described in CCModel, the discriminant function of the coiled coil classifier is essentially a weighted sum of the numbers of occurrences of certain patterns in the sequence under consideration, i.e. every pattern occurring in the sequence contributes a certain weight to the discriminant function. Since every such occurrence is uniquely linked to two specific residues in the sequence, every amino acid in the sequence contributes a unique weight to the discriminant function value, namely, half the sum of the weights of the matching patterns in which this amino acid is involved. If we denote the contribution of each position \(i\) with \(s_i(x)\), it follows immediately that
\[
f(x) = b + \sum_{i=1}^{L} s_i(x),
\]
where \(L\) is the length of the sequence \(x\). The values \(s_i(x)\) can then be understood as the contributions that the \(i\)-th residue makes to the overall classification of the sequence \(x\), which we call prediction profile. These profiles can either be visualized as they are without taking the offset \(b\) into account or by distributing \(b\) equally over all residues. These are the so-called baselines that are included in CCProfile objects. They are computed as \(-b/L\).
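For illustration, with hypothetical numbers: for a sequence of length \(L = 31\) and an offset of \(b \approx -1.07\) (roughly the offset stored in the example model file shown under CCModel-FileOps), the baseline is
\[
-\frac{b}{L} \approx \frac{1.07}{31} \approx 0.035,
\]
i.e. each residue would have to contribute about 0.035 on average for the discriminant function to reach zero.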
Methods
- **plot** signature(x="CCProfile",y="missing"): see plot
- **heatmap** signature(x="CCProfile",y="missing"): if the CCProfile object \(x\) contains the profiles of at least three sequences, the profiles are visualized as a heatmap. This method is inherited from the kebabs package; for details, see heatmap.
- **show** signature(object="CCProfile"): displays the most important information stored in the CCProfile object \(object\), such as, the sequences, kernel parameters, baselines, profiles, and classification results.
Accessor-like methods
The CCProfile class inherits all accessors from the PredictionProfile class, such as, sequences, baselines, profiles, and the indexing operator \(x[i]\). Additionally, the procoil package defines the following two methods:
- **profile** signature(fitted="CCProfile"): for compatibility with previous versions, a method profile is available, too. It extracts the profile(s) in the same way as profiles.
- **fitted** signature(object="CCProfile"): extracts the final classifications. This function returns a factor with levels “dimer” and “trimer”. If decision.values=TRUE is specified, a numeric vector is attached to the result as an attribute “decision.values” which also contains the discriminant function values.
Author(s)
Ulrich Bodenhofer <[email protected]>
References
http://www.bioinf.jku.at/software/procoil/
See Also
CCModel, plot, PredictionProfileAccessors.
Examples
showClass("CCProfile")
## predict oligomerization of GCN4 wildtype
GCN4wt <- predict(PrOCoilModel,
"MKQLEDKVEELLSKNYHLENEVARLKLVV",
"abcdefgabcdefgabcdefgabcdefga")
## display summary of result
GCN4wt
## show raw prediction profile
profile(GCN4wt)
## plot profile
plot(GCN4wt)
## define four GCN4 mutations
GCN4mSeq <- c("GCN4wt" ="MKQLEDKVEELLSKNYHLENEVARLKLVV",
"GCN4_N16Y_L19T"="MKQLEDKVEELSSKNYHLENEVARLKLVV",
"GCN4_E22R_K27E"="MKQLEDKVEELSSKNYHLENEVARLKLVV",
"GCN4_V23K_K27E"="MKQLEDKVEELSSLXHLENEVARLKLVV")
GCN4mReg <- rep("abcdefgabcdefgabcdefgabcdefga", 4)
## predict oligomerization
GCN4mut <- predict(PrOCoilModel, GCN4mSeq, GCN4mReg)
## display summary of result
GCN4mut
## display predictions
fitted(GCN4mut)
## overlay plot of two profiles
plot(GCN4mut[c(1, 2)])
## show heatmap
heatmap(GCN4mut)
---
### plot-methods
#### Plotting prediction profiles
**Description**
Functions for plotting prediction profiles
**Usage**
```r
## S4 method for signature 'CCProfile,missing'
plot(x, col=c("red", "blue"),
standardize=TRUE, shades=NULL, legend="default",
legendPos="topright", xlab="", ylab="weight",
lwd.profile=1, lwd.axis=1, las=1,
heptads=TRUE, annotate=TRUE, ...)
```
**Arguments**
- `x` Object of class `CCProfile` to be plotted with `plot`
- `col` Character string containing the name(s) of the color(s) in which the profile(s) should be plotted.
- `standardize` If FALSE, the profile values $s_i$ are displayed as they are with the value $y = -b/L$ superimposed as a light gray line. If TRUE (default), the profile(s) is/are shifted by the baseline values $-b/L$ and the light gray line is displayed at $y = 0$.
- `shades` Vector of at least two color specifications (default: NULL). If not NULL, the background areas above and below the baseline $y = -b/L$ are shaded in the colors `shades[1]` and `shades[2]`, respectively.
- `legend` A character string containing the legend/description of the profile. If “default”, the names of the sequences/profiles are used. If no names are available, the profiles are simply enumerated (as long as two profiles are plotted together; if only a single unnamed profile is to be plotted, no legend is shown). If `legend` is an empty string, no legend is displayed at all.
- `legendPos` position specification for legend (if `legend` is specified). Can either be a vector with coordinates or a single keyword like “topright” (see `legend`).
- `xlab` label of horizontal axis, empty by default.
- `ylab` label of vertical axis, defaults to “weight”.
- `lwd.profile` profile line width as described for parameter `lwd` in `par`
- `lwd.axis` axis line width as described for parameter `lwd` in `par`
- `las` see `par`
- `heptads` if TRUE (default), the heptad structure is indicated by vertical light gray lines separating the different heptads. Heptad irregularities are indicated with red lines.
- `annotate` if TRUE (default), the heptad annotation information is shown in the center of the plot.
- `...` all other arguments are passed to the plot method from the kebabs package
Details
The plot function displays a prediction profile as a step function over the sequence, with the steps connected by vertical lines. The sequence and the heptad register are visualized below and above the profile, respectively. The baseline value $-b/L$, displayed as a light gray line, has the following meaning: it is obvious that we can rewrite
$$f(x) = b + \sum_{i=1}^{L} s_i(x)$$
as
$$f(x) = \sum_{i=1}^{L} (s_i(x) - (-\frac{b}{L}))$$
so the discriminant function value $f(x)$ can be understood as the sum of values $s_i(x) - (-\frac{b}{L})$, i.e. the area between the constant value $-b/L$ and the prediction profile. If the area above the light gray line is greater than the area below the light gray line, the sequence is predicted as trimer, otherwise as dimer.
If plot is called for a CCProfile object that contains profiles of two sequences, the two profiles are plotted together to facilitate a comparison of profiles (e.g. wild type sequences versus mutants). Although the plot function tolerates profiles/sequences with different lengths and/or unaligned heptad registers, it is obvious that the superimposition of profiles of two unaligned, unrelated sequences makes little sense.
The plot function gives an error if it is called for a CCProfile object that contains profiles of three or more sequences.
The given function is only a wrapper around the plot function provided by the kebabs package. The only difference is that heptad separators (argument heptads) and the heptad annotation (argument annotate) are displayed by default. Moreover, presently, no legend is displayed by default if a single profile is plotted for an unnamed sequence.
Value
This function does not return any value.
Author(s)
Ulrich Bodenhofer <[email protected]>
References
http://www.bioinf.jku.at/software/procoil/
See Also
procoil, CCModel, CCProfile
Examples
```r
## predict oligomerization of GCN4 wildtype
GCN4wt <- predict(PrOCoilModel,
"MKQLEDKVEELSKNYHLENEVARLKKLV",
"abcdefgabcdefgabcdefgabcdefga")
## plot profile
plot(GCN4wt)
## define two GCN4 mutations
GCN4mSeq <- c("GCN4wt" = "MKQLEDKVEELSKNYHLENEVARLKKLV",
"GCN4_N16I_L19N" = "MKQLEDKVEELSKHNYHNEVARLKKLV")
GCN4mReg <- rep("abcdefgabcdefgabcdefgabcdefga", 2)
## predict oligomerization
GCN4mut <- predict(PrOCoilModel, GCN4mSeq, GCN4mReg)
## overlay plot of the two profiles
plot(GCN4mut)
```
predict-methods
Predict oligomerization of one or more coiled coil segments
Description
Function for predicting the oligomerization of one or multiple coiled coil segments
Usage
```r
## S4 method for signature 'CCModel'
predict(object, seq, reg)
```
Arguments
object
The model to be considered; can either be one of the models included in the package (PrOCoilModel and PrOCoilModelBA) or any other model loaded or created by the user. For a detailed explanation of the two default models, see CCModel.
seq One or several amino acid sequences; valid characters are all uppercase letters except 'B', 'J', 'O', 'U', 'X', and 'Z'; invalid characters are tolerated, but ignored by the prediction. This argument can be a character vector, an AAString object, an AAStringSet object, or an AAVector object
reg a character vector containing the heptad register(s); valid characters are the lowercase letters 'a'-'g' and dashes '-'. Can also be omitted, see details below.
Details
The function `predict` is the most important one in the `procoil` package. It is used to apply a coiled coil prediction model to coiled coil sequences/segments. It uses the discriminant function described in `CCModel`. By default the final classification is computed on the basis of the discriminant function value \( f(x) \). If \( f(x) >= 0 \), the sequence \( x \) is predicted as trimer, otherwise as dimer.
If the `reg` argument is missing, `predict` looks whether the object passed as argument `seq` includes heptad register information, either as an attribute `reg` (if `seq` is a character vector), as metadata field `reg` (if `seq` is an AAString or AAStringSet object), or via annotation metadata (if `seq` is an AAStringSet or AAVector object; see `annotationMetadata`). In any case, the `reg` argument has priority over all other ways of specifying the heptad annotation. In other words, if `reg` is specified and `seq` contains heptad annotations in one of the ways described above, the `reg` argument has priority and the heptad annotation in `seq` is ignored.
The `reg` argument must have exactly as many elements as `seq` has sequences, and the registers must be aligned to the sequences, i.e. the first register must be exactly as long as the first sequence, and so on.
If heptad registers contain dashes, the `predict` function extracts all contiguous coiled coil segments and computes predictions for all of them. The returned CCProfile object then contains profiles/predictions of all coiled coil segments that were extracted from `seq` (see example below).
Value
returns a CCProfile object
Author(s)
Ulrich Bodenhofer <[email protected]>
References
http://www.bioinf.jku.at/software/procoil/
See Also
procoil, CCModel, CCProfile
Examples
```r
## predict oligomerization of GCN4 wildtype
GCN4wt <- predict(PrOCoilModel,
"MKQLEDKVEELLSKNYHLENEVARLKLV",
"abcdefgabcdefgabcdefgabcdefgabcdefgabcdef")
```
## show result
GCN4wt
## example with four GCN4 mutations
GCN4mSeq <- c("GCN4wt" = "MKQLEDKVEELSKNYHLENEVARLKLVL",
"GCN4_N16Y_L19T" = "MKQLEDKVEELSKYYHTEENVARLKLVL",
"GCN4_E22R_K27E" = "MKQLEDKVEELSKNYHLENERLKLVL",
"GCN4_V23K_K27E" = "MKQLEDKVEELSKNYHLENEKARLKLVL")
## to illustrate the alternative interface, we convert this
## character vector to an 'AAStringSet' object and add
## heptad registers as annotation metadata
GCN4mAA <- AAStringSet(GCN4mSeq)
annotationMetadata(GCN4mAA, annCharset="abcdefg") <-
rep("abcdefgabcdefgabcdefgabcdefg", 4)
## predict oligomerization (note: no 'reg' argument!)
GCN4mut <- predict(PrOCoilModel, GCN4mAA)
## display summary of result
GCN4mut
## predict oligomerization of unknown sequence (Marcoil example)
MarcoilEx <- predict(PrOCoilModel,
"MGECDQLLFVMITNSRLVLSTLIMDSRYYLENLRQFAELENLRQNIENVHSFLENLRADLENLRRQFPGWYSEMPGRHG",
"----------------------------------abcdefgabcdefgabcdefgabcdefgabcdefg-----")
## show results
MarcoilEx
Index
* classes
CCModel-class, 3
CCProfile-class, 7
* classif
plot-methods, 10
predict-methods, 12
* data
CCModel-FileOps, 6
* manip
CCModel-FileOps, 6
* methods
plot-methods, 10
predict-methods, 12
* models
plot-methods, 10
predict-methods, 12
* package
procoil-package, 2
[,CCProfile,index,ANY,ANY-method (CCProfile-class), 7
AAString, 13
AAStringSet, 13
AAVector, 13
annotationMetadata, 13
baselines, 8
baselines,CCProfile-method (CCProfile-class), 7
baselines.CCProfile (CCProfile-class), 7
CCModel, 2, 6, 8, 9, 12, 13
CCModel (CCModel-class), 3
CCModel-class, 3
CCModel-FileOps, 6
CCProfile, 2, 10–13
CCProfile (CCProfile-class), 7
CCProfile-class, 7
fitted,CCProfile-method (CCProfile-class), 7
fitted.CCProfile (CCProfile-class), 7
heatmap, CCProfile, missing-method (CCProfile-class), 7
heatmap.CCProfile (CCProfile-class), 7
legend, 10
par, 10
plot, 2, 8, 9, 11
plot (plot-methods), 10
plot,CCProfile,missing-method (plot-methods), 10
plot-methods, 10
plot.CCProfile (plot-methods), 10
predict, 2, 4, 8
predict (predict-methods), 12
predict,CCModel-method (predict-methods), 12
predict-methods, 12
predict.CCModel (predict-methods), 12
PredictionProfile, 8
PredictionProfileAccessors, 9
procoil, 7, 12, 13
procoil (procoil-package), 2
procoil-package, 2
PrOCoilModel, 2, 6, 12
PrOCoilModel (CCModel-class), 3
PrOCoilModelBA, 2, 6, 12
PrOCoilModelBA (CCModel-class), 3
profile,CCProfile-method (CCProfile-class), 7
profile.CCProfile (CCProfile-class), 7
profiles, 8, 9
profiles,CCProfile-method (CCProfile-class), 7
profiles.CCProfile (CCProfile-class), 7
readCCModel, 2, 3
readCCModel (CCModel-FileOps), 6
sequences, 8
sequences,CCProfile-method (CCProfile-class), 7
sequences.CCProfile (CCProfile-class), 7
show,CCModel-method (CCModel-class), 3
show,CCProfile-method (CCProfile-class), 7
show.CCModel (CCModel-class), 3
show.CCProfile (CCProfile-class), 7
weights, CCModel-method (CCModel-class), 3
weights.CCModel (CCModel-class), 3
writeCCModel (CCModel-FileOps), 6
---
CHAPTER 2
Shortest Path From A Specified Vertex To All Vertices
2.1 INTRODUCTION
A shortest path from one specified vertex to another has a unique solution; that is, only one specific path of the graph is covered. Alternatively, the shortest paths for a given graph can be calculated from a specified vertex to all other vertices.
Sometimes it is interesting to find the shortest path between all \( n(n-1) \) ordered pairs of vertices in a digraph, or \( n(n-1)/2 \) unordered pairs of vertices in an undirected graph. If we use Dijkstra's algorithm for this purpose, the computation time would be proportional to \( n^4 \). There are several algorithms available that can do better. Among these, two are considered best, both being equally efficient: one is due to Dantzig [3] and the other is due to Floyd [6], based on a procedure by Warshall [13]. Both algorithms require computation time proportional to \( n^3 \).
The problem of finding the shortest paths from a vertex to all pairs of vertices is an extension of the problem considered in Chapter 1. In continuation of the previous chapter, we describe a computer program in the C language for finding the shortest path from a specified vertex to all pairs of vertices. First, we find the shortest path for a weighted directed graph, where the least-weighted directed edges are considered from the source vertex to the terminal vertex. In continuation, we then have to find all the paths that are present in the given weighted directed graph.
Finally, we calculate the costs of all these paths; the path having the least weight is considered to be the feasible shortest path. However, if a tie occurs between two paths, the tied cost is considered to be the feasible shortest-path cost, and a path having a smaller but untied cost is not considered. For example, for costs [5, 7, 7, 8, 7], the shortest-path cost is 7. Likewise, if a tie of costs occurs, as in [7, 7, 11, 11, 14], we take the least tied cost as the feasible shortest path, so the shortest-path cost is 7.
Example :1
Consider the following Directed Weighted Graph
Fig (3.1): Weighted Graph
For this Graph, the input is in the form of Adjacency Directed Weighted Graph.
<table>
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>0</td>
<td>∞</td>
<td>∞</td>
<td>8</td>
<td>2</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>B</td>
<td>7</td>
<td>0</td>
<td>1</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>C</td>
<td>3</td>
<td>∞</td>
<td>0</td>
<td>∞</td>
<td>4</td>
<td>3</td>
<td>∞</td>
</tr>
<tr>
<td>D</td>
<td>2</td>
<td>∞</td>
<td>∞</td>
<td>0</td>
<td>1</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>E</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>0</td>
<td>∞</td>
<td>2</td>
</tr>
<tr>
<td>F</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>10</td>
<td>4</td>
<td>0</td>
<td>7</td>
</tr>
<tr>
<td>G</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>2</td>
<td>∞</td>
<td>∞</td>
<td>0</td>
</tr>
</tbody>
</table>
Here we consider shortest-path from vertex to all pairs.
Shortest-path from B to G
1)
Cost of Shortest - path = 11
Root is B → C → F → G
2) Cost of Shortest path = 18
Root is B → A → D → E → G
3) Cost of Shortest path = 11
Root is B → A → E → G
4) Cost of Shortest path = 7
Root is B → C → E → G
Now we consider the least cost as the cost of the shortest path from the vertex to all vertices.
So the cost is 11 (because the tie 11, 11 occurs, the tied values are considered to be the shortest path).
The shortest path from vertex B to vertex G has cost 11.
The costs of the shortest paths from the vertex to all vertices are $\{11, 18, 11, 7, 10, 21\}$
Cost of Shortest-path = 10
Root is $B \rightarrow C \rightarrow F \rightarrow E \rightarrow G$
Cost of Shortest-path = 21
Root is $B \rightarrow A \rightarrow D \rightarrow A \rightarrow E \rightarrow G$
Example: 2
Consider the following Directed Weighted Graph
For this graph, the input is in the form of Adjacency Directed weighted Graph.
\[
\begin{array}{c|cccc}
 & v_1 & v_2 & v_3 & v_4 \\
\hline
v_1 & 0 & 2 & \infty & \infty \\
v_2 & \infty & 0 & 3 & 2 \\
v_3 & \infty & 4 & 0 & 5 \\
v_4 & 1 & 0 & 6 & 0 \\
\end{array}
\]
Here, given the adjacency matrix of the directed graph, we consider the shortest path from vertex \( v_1 \) to vertex \( v_3 \).
1)
Cost of Shortest-path = 5
Root is \( v_1 \to v_2 \to v_3 \)
So the shortest paths from the vertex to all pairs of vertices have costs \{5, 10\}.
Now we consider the least cost as the cost of the shortest path from the vertex to all vertices.
So the cost is 5. Root is \(v_1 \rightarrow v_2 \rightarrow v_3\)
2.2 WARSHALL - FLOYD ALGORITHM:
Starting with the n by n matrix $D = [d_{ij}]$ of direct distances, n different matrices $D_1, D_2, D_3, \ldots, D_n$ are constructed sequentially. Matrix $D_k$, $1 \leq k \leq n$ may be thought of as the matrix whose $(i, j)$th entry gives the length of the shortest directed path among all directed paths from $i$ to $j$, with vertices $1, 2, \ldots, k$ allowed as the intermediate vertices. Matrix $D_k = [d_{ij}^{(k)}]$ is constructed from $D_{k-1}$ according to the following rule.
$$d_{ij}^{(k)} = \min \{ d_{ij}^{(k-1)},\; d_{ik}^{(k-1)} + d_{kj}^{(k-1)} \}, \quad \text{for } k = 1, 2, \ldots, n, \text{ with } d_{ij}^{(0)} = d_{ij}.$$
That is, in Iteration 1, vertex 1 is inserted in the path from vertex $i$ to $j$ if $d_{ij} > d_{i1} + d_{1j}$.
In Iteration 2, vertex 2 is inserted, and so on.
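As a concrete illustration, consider the $4 \times 4$ matrix of Example 2 above. In Iteration 2 (i.e. $k = 2$), vertex 2 is inserted between vertices 1 and 3:
$$d_{13}^{(2)} = \min \{ d_{13}^{(1)},\; d_{12}^{(1)} + d_{23}^{(1)} \} = \min \{ \infty,\; 2 + 3 \} = 5,$$
which is exactly the cost of the shortest path $v_1 \rightarrow v_2 \rightarrow v_3$ found in Example 2.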
Suppose, for example, that the shortest directed path from vertex 7 to 3 is 7 4 1 9 5 3. The following replacements occur.
Iteration 1: $d_{49}^{(0)}$ is replaced by $(d_{41}^{(0)} + d_{19}^{(0)})$
Iteration 4: $d_{79}^{(3)}$ is replaced by $(d_{74}^{(3)} + d_{49}^{(3)})$
Iteration 5: $d_{93}^{(4)}$ is replaced by $(d_{95}^{(4)} + d_{53}^{(4)})$
Iteration 9: $d_{73}^{(8)}$ is replaced by $(d_{79}^{(8)} + d_{93}^{(8)})$
Once the shortest distance is obtained in $d_{73}$, the value of this entry will not be altered in subsequent iterations.
The algorithm described so far does not actually list the paths; it only gives the shortest distances. Obtaining the paths is slightly more involved than in Dijkstra's algorithm discussed previously, because here $n(n-1)$ paths are required, not just one.
An efficient method of obtaining the intermediate vertices in each of the shortest paths is to construct a matrix \( Z = [Z_{ij}] \), referred to as the optimal policy matrix, such that entry \( Z_{ij} \) is the first vertex after \( i \) along the shortest path from \( i \) to \( j \). The optimal policy matrix \( Z \) is initialized as
\[
Z_{ij} = \begin{cases} j, & \text{if } d_{ij} \neq \infty, \\ 0, & \text{if } d_{ij} = \infty. \end{cases}
\]
In the \( k \)th iteration, if vertex \( K \) is inserted between \( i \) and \( j \), element \( Z_{ij} \) is replaced by the current value of \( Z_{ik} \), for all \( i \) and \( j \). Thus updating of the \( Z \) matrix is done during each iteration \( k \), where \( k = 1,2,\ldots,n \). At the end, the shortest path \( (i,v_1,v_2,\ldots,v_q,j) \) from \( i \) to \( j \) is derived as a sequence of vertex numbers from matrix \( Z \) as follows.
\[
v_1 = Z_{ij}, v_2 = Z_{v_1 j}, v_3 = Z_{v_2 j}, \ldots, j = Z_{v_q j}.
\]
For computational purpose, we need memory space for only one \( n \) by \( n \) matrix. Other constructed matrices can be overwritten on this matrix.
To estimate the execution time, note that we have to construct \( n \) matrices \( D_1, D_2, \ldots, D_n \) in sequential order. For each matrix \( D_k \), the number of elements to be computed is \((n-1)(n-2)\) because we already know that \( i \neq j, i \neq k, j \neq k \), although for simplicity in the Flow chart we have not taken advantage of this slight saving. Thus the execution time is proportional to \( n(n-1)(n-2) \approx n^3 \).
Whenever \( d_{ik}^{(k-1)} = \infty \), it is possible to circumvent \((n-1)\) additions and comparisons in exchange for an additional test.
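To make the procedure above concrete, the following is a minimal C sketch of the Warshall-Floyd iteration together with the optimal policy matrix $Z$. It is only an illustration of the method described in this section, not the program of Section 2.4; the array size NMAX, the sentinel value INF used for $\infty$, and the 1-based vertex numbering are assumptions made here.

```c
#include <stdio.h>

#define NMAX 80
#define INF  1428                 /* sentinel used for "infinity" (assumption) */

int d[NMAX][NMAX];                /* distance matrix, overwritten in place */
int z[NMAX][NMAX];                /* policy matrix: z[i][j] = first vertex after i on the path to j */

void warshall_floyd(int n)
{
    int i, j, k;

    /* initial policy: Z[i][j] = j if d[i][j] is finite, 0 otherwise */
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            z[i][j] = (d[i][j] != INF) ? j : 0;

    /* iteration k inserts vertex k between i and j whenever it shortens the path */
    for (k = 1; k <= n; k++)
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++)
                if (d[i][k] != INF && d[k][j] != INF &&
                    d[i][k] + d[k][j] < d[i][j])
                {
                    d[i][j] = d[i][k] + d[k][j];
                    z[i][j] = z[i][k];
                }
}

/* print the shortest route from i to j as the sequence i, v1, v2, ..., j */
void print_route(int i, int j)
{
    int v = i;

    if (i != j && z[i][j] == 0)
    {
        printf("no path from %d to %d\n", i, j);
        return;
    }
    printf("%d", v);
    while (v != j)
    {
        v = z[v][j];
        printf(" -> %d", v);
    }
    printf("   (distance %d)\n", d[i][j]);
}
```

As noted above, only one $n$ by $n$ distance matrix needs to be kept; the matrix $Z$ is the additional bookkeeping required to list the paths themselves.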
Fig (2.2): Shortest path between every vertex-pair
2.3 ALGORITHM: SHORTEST PATH BETWEEN EVERY VERTEX-PAIR
/* This algorithm takes n, the number of nodes in the graph, and D, the
adjacency matrix of the graph, as input. k, i, s, j are integers and dij is a record. */
1. Read n, the number of nodes in the graph and D as the adjacency matrix.
2. K is First node or starting node is assigned 1.
3. \( i=1, dik < \infty \)
4. \( i = n, \text{state}[i].\text{predecessor} = n+1, =i+1 \)
5. Repeat through Step (6) for n = 1 to i
6. If \(( dik < \infty \) then \( s = dik + dkj \)
\( \text{State}[i].\text{predecessor} = \text{state}[i].\text{length} \)
7. If(\( dkj < \infty \)) then \( s < dij \)
8. Repeat step(9) for \( j = 1 \) to \( n \)
9. if(\( \text{state}[i].\text{length} < \text{min} \)) then \( k \)
then \( \text{state}(k)\).\text{length} = \text{state}(k)\).\text{length} + a[k][i] \)
10. \( k = 0 \) path \( (i) = k; j = j+1 \)
11. Repeat through step (13) for \( i = 1 \) to \( n \)
12. if ( \( i =0, i < \text{max}, i++ \)) then go to step(13)
13. if (\( \text{step}[i].\text{label} == 0 \)) \&\& (\( \text{state}[i].\text{length} < \text{min} \))
then \( \text{state}[i].\text{length} = \text{state}(k)\).\text{length} + a[k][i] \)
14. \( a[k,i] = \text{min} \)
15. \( \text{min} = \infty \)
16. for \( i = 1 \) to \( n \) do step 8
17. \( dij = s \)
\( a[k,i] = k; i = i + 1 \)
18. PRINT shortest path for \( j = 1 \) to \( n \) and also the tied distance path of \([i],\text{min}\)
2.4 PROGRAM
```c
#include <stdio.h>

typedef struct
{
    int predecessor;
    int length;
    int label;
} pi;

pi state[100];
int a[80][80], max1, max = 1428;    /* 1428 serves as "infinity" */

void readln(int n);

int main()
{
    int n, i, k, s, t, path[80], min;

    printf("ENTER THE NO OF NODES FOR SHORTEST PATH : \n");
    scanf("%d", &n);
    printf("READ THE MATRIX : \n");
    readln(n);
    printf("ENTER THE STARTING AND ENDING PATH'S : \n");
    scanf("%d %d", &s, &t);          /* vertices are read as integer indices 0..n-1 */

    /* initialise: all nodes unlabelled, "infinite" tentative length */
    for (i = 0; i < n; i++)
    {
        state[i].predecessor = n + 1;
        state[i].label = 0;
        state[i].length = max;
    }
    state[t].length = 0;
    state[t].label = 1;
    k = t;

    /* label-setting loop, working backwards from t towards s */
    while (k != s)
    {
        for (i = 0; i < n; i++)
        {
            if ((a[k][i] != 0) && (state[i].label == 0))
            {
                if (state[k].length + a[k][i] < state[i].length)
                {
                    state[i].predecessor = k;
                    state[i].length = state[k].length + a[k][i];
                }
            }
        }
        /* pick the unlabelled node with the smallest tentative length */
        k = 0;
        min = max;
        for (i = 0; i < n; i++)
        {
            if ((state[i].label == 0) && (state[i].length < min))
            {
                min = state[i].length;
                k = i;
            }
        }
        state[k].label = 1;
    }

    /* follow the predecessor chain from s to t */
    k = s;
    i = 0;
    do
    {
        path[i] = k;
        k = state[k].predecessor;
        i++;
    } while (k != n + 1);
    max1 = i;

    printf("SHORTEST PATH OF THE GIVEN ADJACENT MATRIX \n");
    for (i = 0; i < max1; i++)
    {
        printf(" %d", path[i]);
    }
    printf("\n");
    return 0;
}

void readln(int n)
{
    int i, j;

    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n; j++)
        {
            scanf("%d", &a[i][j]);
        }
    }
}
```
2.5 Demonstration with an example
Consider the following Directed Weighted Graph
For this Graph, the input is in the form of Weighted Adjacency Matrix
<table>
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
<th>H</th>
<th>I</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>0</td>
<td>5</td>
<td>8</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>3</td>
<td>∞</td>
</tr>
<tr>
<td>B</td>
<td>∞</td>
<td>0</td>
<td>10</td>
<td>7</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>C</td>
<td>∞</td>
<td>∞</td>
<td>0</td>
<td>2</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>D</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>0</td>
<td>∞</td>
<td>5</td>
<td>∞</td>
<td>∞</td>
<td>4</td>
</tr>
<tr>
<td>E</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>2</td>
<td>0</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>F</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>6</td>
<td>0</td>
<td>∞</td>
<td>5</td>
<td>∞</td>
</tr>
<tr>
<td>G</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>5</td>
<td>1</td>
<td>4</td>
<td>0</td>
<td>∞</td>
<td>∞</td>
</tr>
<tr>
<td>H</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>2</td>
<td>0</td>
<td>3</td>
</tr>
<tr>
<td>I</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>∞</td>
<td>4</td>
<td>∞</td>
<td>0</td>
</tr>
</tbody>
</table>
Here the graph is given in the form of a weighted adjacency matrix. Now we consider the shortest path from vertex A to vertex E, i.e. (A → E).
1)
\[ \text{Shortest-path cost} = 6 \]
\[ \text{Root is } A \rightarrow H \rightarrow G \rightarrow E \]
2)
\[ \text{Shortest-path cost} = 21 \]
\[ \text{Root is } A \rightarrow C \rightarrow D \rightarrow F \rightarrow E \]
3)
\[ \text{Shortest-path cost} = 28 \]
\[ \text{Root is } A \rightarrow B \rightarrow C \rightarrow D \rightarrow F \rightarrow E \]
Shortest-path cost = 20
Root is A, B, H, I, F, E
Shortest-path cost = 16
Root is A, B, I, F, E
Shortest-path cost = 26
Root is A, B, D, I, F, E
Shortest-path cost = 15
Root is A → H → G → F → E
Shortest-path cost = 21
Root is A → H → G → D → F → E
Shortest-Path from Vertex to all Vertices so cost's are \{6, 21, 28, 20, 16, 26, 15, 21\}
Now we consider the least cost as the cost of the shortest path from vertex A to vertex E,
so the cost is 6.
2.6 Result
ENTER THE NO OF NODES FOR SHORTEST PATH: 7
ENTER THE STARTING AND ENDING PATH'S : B G
GIVEN ADJACENCY MATRIX :
<table>
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>0</td>
<td>K</td>
<td>K</td>
<td>8</td>
<td>2</td>
<td>K</td>
<td>K</td>
</tr>
<tr>
<td>B</td>
<td>7</td>
<td>0</td>
<td>1</td>
<td>K</td>
<td>K</td>
<td>K</td>
<td>K</td>
</tr>
<tr>
<td>C</td>
<td>3</td>
<td>K</td>
<td>0</td>
<td>K</td>
<td>K</td>
<td>3</td>
<td>K</td>
</tr>
<tr>
<td>D</td>
<td>2</td>
<td>K</td>
<td>K</td>
<td>0</td>
<td>1</td>
<td>K</td>
<td>K</td>
</tr>
<tr>
<td>E</td>
<td>K</td>
<td>K</td>
<td>K</td>
<td>K</td>
<td>0</td>
<td>K</td>
<td>2</td>
</tr>
<tr>
<td>F</td>
<td>K</td>
<td>K</td>
<td>K</td>
<td>10</td>
<td>4</td>
<td>0</td>
<td>7</td>
</tr>
<tr>
<td>G</td>
<td>K</td>
<td>K</td>
<td>2</td>
<td>K</td>
<td>K</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
SHORTEST PATH OF THE GIVEN ADJACENT MATRIX : B C F G
DISTANCE = 11
---
|
GPU Programming with PGI CUDA Fortran
Michael Wolfe
[email protected]
http://www.pgroup.com
March 2010
PGI Workstation / Server / CDK
Linux, Windows, MacOS, 32-bit, 64-bit, AMD64, Intel 64
UNIX-heritage Command-level Compilers + Graphical Tools
<table>
<thead>
<tr>
<th>Compiler</th>
<th>Language</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>PGF95™</td>
<td>Fortran 95 w/some F2003</td>
<td>pgf95</td>
</tr>
<tr>
<td>PGCC®</td>
<td>ANSI C99, K&R C and GNU gcc Extensions</td>
<td>pgcc</td>
</tr>
<tr>
<td>PGC++®</td>
<td>ANSI/ISO C++</td>
<td>pgCC</td>
</tr>
<tr>
<td>PGDBG®</td>
<td>MPI/OpenMP debugger</td>
<td>pgdbg</td>
</tr>
<tr>
<td>PGPROF®</td>
<td>MPI/OpenMP profiler</td>
<td>pgprof</td>
</tr>
</tbody>
</table>
Self-contained OpenMP/MPI Development Solution
CUDA Fortran
- Simple introductory program
- Programming model
- Low-level Programming with CUDA Fortran
- Building CUDA Fortran programs
- Performance Tuning
Fortran VADD on Host
subroutine host_vadd(A,B,C,N)
real(4) :: A(N), B(N), C(N)
integer :: N
integer :: i
do i = 1,N
C(i) = A(i) + B(i)
enddo
end subroutine
CUDA Fortran VADD Device Code
module kmod
use cudafor
contains
attributes(global) subroutine vaddkernel(A,B,C,N)
real(4), device :: A(N), B(N), C(N)
integer, value :: N
integer :: i
i = (blockidx%x-1)*32 + threadIdx%x
if( i <= N ) C(i) = A(i) + B(i)
end subroutine
end module
CUDA Fortran VADD Host Code
```fortran
subroutine vadd( A, B, C )
use kmod
real(4), dimension(:) :: A, B, C
real(4), device, allocatable:: Ad(:), Bd(:), Cd(:)
integer :: N
N = size( A, 1 )
allocate( Ad(N), Bd(N), Cd(N) )
Ad = A(1:N)
Bd = B(1:N)
call vaddkernel<<<(N+31)/32,32>>>( Ad, Bd, Cd, N )
C(1:N) = Cd
deallocate( Ad, Bd, Cd )
end subroutine
```
CUDA Fortran Programming
- **Host code**
- Optional: select a GPU
- Allocate device memory
- Copy data to device memory
- Launch kernel(s)
- Copy data from device memory
- Deallocate device memory
- **Device code**
- Scalar thread code, limited operations
- Implicitly parallel
Elements of CUDA Fortran - Host
subroutine vadd( A, B, C )
use kmod
real(4), dimension(:) :: A, B, C
real(4), device, allocatable, dimension(:):: Ad, Bd, Cd
integer :: N
N = size( A, 1 )
allocate( Ad(N), Bd(N), Cd(N) )
Ad = A(1:N)
Bd = B(1:N)
call vaddkernel<<<(N+31)/32,32>>>( Ad, Bd, Cd, N )
C(1:N) = Cd
deallocate( Ad, Bd, Cd )
end subroutine
Allocate device memory
Copy data to device
Launch a kernel
Copy data back from device
Deallocate device memory
CUDA Programming: the GPU
- A scalar program, runs on one thread
- All threads run the same code
- Executed in a grid of thread blocks
- grid may be 1D or 2D (max 65535x65535)
- thread block may be 1D, 2D, or 3D (max size 512)
- blockidx gives block index in grid (%x,%y)
- threadidx gives thread index within block (%x,%y,%z)
- Kernel runs implicitly in parallel
- thread blocks scheduled by hardware on any multiprocessor
- runs to completion before next kernel
Elements of CUDA Fortran - Kernel
```fortran
module kmod
use cudafor
contains
attributes(global) subroutine vaddkernel(A,B,C,N)
real(4), device :: A(N), B(N), C(N)
integer, value :: N
integer :: i
i = (blockidx%x-1)*32 + threadIdx%x
if( i <= N ) C(i) = A(i) + B(i)
end subroutine
end module
```
global means kernel
device attribute implied
value vs. Fortran default
blockidx from 1..(N+31)/32
threadidx from 1..32
array bounds test
CUDA Fortran Language
- **Host code**
- Declaring and allocating device memory
- Moving data to and from device memory
- Pinned memory
- Launching kernels
- **Kernel code**
- Attributes clause
- Kernel subroutines, device subprograms
- Shared memory
- What is and what is not allowed in a kernel
- CUDA Runtime API
Declaring Device Data
- Variables / arrays with device attribute are allocated in device memory
- real, device, allocatable :: a(:)
- real, allocatable :: a(:)
attributes(device) :: a
- In a host subroutine or function
- device allocatables and automatics may be declared
- device variables and arrays may be passed to other host subroutines or functions (explicit interface)
- device variables and arrays may be passed to kernel subroutines
---
Declaring Device Data
- Variables / arrays with device attribute are allocated in device memory
- module mm
real, device, allocatable :: a(:)
real, device :: x, y(10)
real, constant :: c1, c2(10)
integer, device :: n
contains
attributes(global) subroutine s( b )
...
- Module data must be fixed size, or allocatable
Declaring Device Data
- Data declared in a Fortran module
- Device variables, arrays, allocatables allowed
- Device variables, arrays are accessible to device subprograms within that module
- Also accessible to host subprograms in that module or which use that module
- Constant attribute (not to be confused with parameter) puts variable or array in constant memory
Allocating Device Data
- Fortran allocate / deallocate statement
- real, device, allocatable :: a(:,:), b
- allocate( a(1:n,1:m), b )
- deallocate( a, b )
- Arrays or variables with device attribute are allocated in device memory
- Allocate is done by the host subprogram
- Memory is not virtual, you can run out
- Device memory is shared among users / processes, you can have deadlock
- STAT=ivar clause to catch and test for errors
Copying Data to / from Device
- **Assignment statements**
```fortran
real, device, allocatable :: a(:,:), b
allocate( a(1:n,1:m), b )
a(1:n,1:m) = x(1:n,1:m) ! copies to device
b = 99.0
....
x(1:n,1:m) = a(1:n,1:m) ! copies from device
y = b
deallocate( a, b )
```
- Data copy may be noncontiguous, but will then be slower (multiple DMAs)
- Data copy to / from pinned memory will be faster
Using the API
```fortran
use cudafor
real, allocatable, device :: a(:)
real :: b(10), b2(2), c(10)
....
istat = cudaMalloc( a, 10 )
istat = cudaMemcpy( a, b, 10 )
istat = cudaMemcpy( a(2), b2, 2 )
istat = cudaMemcpy( c, a, 10 )
istat = cudaFree( a )
```
Pinned Memory
- **Pinned attribute for host data**
- `real, pinned, allocatable :: x(:,:)`
- `real, device, allocatable :: a(:,:)`
- `allocate( a(1:n,1:m), x(1:n,1:m) )`
- `a(1:n,1:m) = x(1:n,1:m)` ! copies to device
- `x(1:n,1:m) = a(1:n,1:m)` ! copies from device
- `deallocate( a, x )`
- **Downsides**
- Limited amount of pinned memory on the host
- May not succeed in getting pinned memory
Launching Kernels
- **Subroutine call with chevron syntax for launch configuration**
- `call vaddkernel <<< (N+31)/32, 32 >>> ( A, B, C, N )`
- `type(dim3) :: g, b`
- `g = dim3((N+31)/32, 1, 1)`
- `b = dim3( 32, 1, 1 )`
- `call vaddkernel <<< g, b >>> ( A, B, C, N )`
- **Interface must be explicit**
- In the same module as the host subprogram
- In a module that the host subprogram uses
- Declared in an interface block
Launching Kernels
- Subroutine call with chevron syntax for launch configuration
- call vaddkernel <<< (N+31)/32, 32 >>> (A, B, C, N)
- type(dim3) :: g, b
g = dim3((N+31)/32, 1, 1)
b = dim3(32, 1, 1)
call vaddkernel <<< g, b >>> (A, B, C, N)
- launch configuration
- <<< grid, block >>>
- grid, block may be scalar integer expression, or type(dim3) variable
- The launch is asynchronous
- host program continues, may issue other launches
Writing a CUDA Fortran Kernel (1)
- global attribute on the subroutine statement
- attributes(global) subroutine kernel (A, B, C, N)
- May declare scalars, fixed size arrays in local memory
- May declare shared memory arrays
- real, shared :: sm(16,16)
- Limited amount of shared memory available
- shared among all threads in the same thread block
- Data types allowed
- integer(1,2,4,8), logical(1,2,4,8), real(4,8), complex(4,8), character(len=1)
- Derived types
Writing a CUDA Fortran Kernel (2)
- **Predefined variables**
- blockidx, threadidx, griddim, blockdim, warpsize
- **Executable statements in a kernel**
- assignment
- do, if, goto, case
- call (to device subprogram, must be inlined)
- intrinsic function call, device subprogram call (inlined)
- where, forall
Modules and Scoping
- **attributes(global) subroutine kernel in a module**
- can directly access device data in the same module
- can call device subroutines / functions in the same module
- **attributes(device) subroutine / function in a module**
- can directly access device data in the same module
- can call device subroutines / functions in the same module
- implicitly private
- **attributes(global) subroutine kernel outside of a module**
- cannot directly access any global device data (just arguments)
- **host subprograms**
- can call any kernel in any module or outside module
- can access module data in any module
- can call CUDA C kernels as well (explicit interface)
Building a CUDA Fortran Program
- `pgfortran -Mcuda a.f90`
- `pgfortran -Mcuda=[emu|cc10|cc11|cc12|cc13|cc20]`
- `pgfortran a.cuf`
- .cuf suffix implies CUDA Fortran (free form)
- .CUF suffix runs preprocessor
- `-Mfixed` for F77-style fixed format
- Must use `-Mcuda` when linking from object files
- Must have appropriate gcc for preprocessor (Linux, Mac OSX)
- CL, NVCC tools bundled with compiler
CUDA C vs CUDA Fortran
**CUDA C**
- supports texture memory
- supports Runtime API
- supports Driver API
- cudaMalloc, cudaMemcpy
- directMemcp
- OpenGL interoperability
- Direct3D interoperability
- textures
- arrays zero-based
- threadidx/blockidx 0-based
- unbound pointers
- pinned allocate routines
**CUDA Fortran**
- no texture memory
- supports Runtime API
- no support for Driver API
- allocate, deallocate
- assignments
- no OpenGL interoperability
- no Direct3D interoperability
- no textures
- arrays one-based
- threadidx/blockidx 1-based
- allocatable are device/host
- pinned attribute
Interoperability with CUDA C
- CUDA Fortran uses the Runtime API
- use cudafor gets interfaces to the runtime API routines
- CUDA C can use Runtime API (cuda...) or Driver API (cu...)
- CUDA Fortran calling CUDA C kernels
- explicit interface (interface block), add BIND(C)
- interface
attributes(global) subroutine saxpy(a,x,y,n) bind(c)
real, device :: x(*), y(*)
real, value :: a
integer, value :: n
end subroutine
end interface
call saxpy<<<grid,block>>>( aa, xx, yy, nn )
Interoperability with CUDA C
- CUDA C calling CUDA Fortran kernels
- Runtime API
- make sure the name is right
- module_subroutine_ or subroutine_
- check value vs. reference arguments
- extern __global__ void saxpy_( float a,
float* x, float* y, int n );
...
saxpy_( a, x, y, n );
- attributes(global) subroutine saxpy(a,x,y,n)
real, value :: a
real :: x(*), y(*)
integer, value :: n
Interoperability with CUDA C
- CUDA Fortran kernels can be linked with nvcc
- The kernels look to nvcc just like CUDA C kernels
- CUDA C kernels can be linked with pgfortran
- Remember –Mcuda flag when linking object files
- This CUDA Fortran release uses CUDA 2.3
- CUDA 3.0 will be an option when it becomes available
CUDA Fortran Matrix Multiplication Code Walkthrough
- do i = 1, N
- do j = 1, M
- C(i,j) = 0.0
- do k = 1, L
- C(i,j) = C(i,j) + A(i,k)*B(k,j)
- Kernel computes a 16x16 submatrix
- Initially, assume matrix sizes are divisible by 16
- thread block is (16,16), grid is (N/16,M/16)
- Each thread accumulates one element of the 16x16 block of C
- K loop is strip mined in strips of size 16
- Threads cooperatively load a 16x16 block of A and B
module mmulmod
contains
attributes(global) subroutine mmul( A,B,C,N,M,L)
real,device :: A(N,L),B(L,M),C(N,M)
integer,value :: N,M,L
integer :: i,j,kb,k,tx,ty
real,shared :: Ab(16,16), Bb(16,16)
real :: Cij
tx = threadidx%x ; ty = threadidx%y
i = (blockidx%x-1) * 16 + tx
j = (blockidx%y-1) * 16 + ty
Cij = 0.0
do k = 1, L
Cij = Cij + A(i,k) * B(k,j)
enddo
C(i,j) = Cij
end subroutine
end module
subroutine mmul( A, B, C )
use cudafor
use mmulmod
real, dimension(:,:) :: A, B, C
real, device, allocatable, dimension(:,:) :: Ad,Bd,Cd
type(dim3) :: dimGrid, dimBlock
integer :: N, M, L
N = size(C,1) ; M = size(C,2) ; L = size(A,2)
allocate( Ad(N,L), Bd(L,M), Cd(N,M) )
Ad = A(1:N,1:L)
Bd = B(1:L,1:M)
dimGrid = dim3( N/16, M/16 )
dimBlock = dim3( 16, 16, 1 )
call mmul<<<dimGrid,dimBlock>>>( Ad,Bd,Cd,N,M,L )
C(1:N,1:M) = Cd
deallocate( Ad, Bd, Cd )
end subroutine
Performance Tuning
- Performance Measurement
- Choose an appropriately parallel algorithm
- Optimize data movement between host and GPU
- frequency, volume, regularity
- Optimize device memory accesses
- strides, alignment
- use shared memory, avoid bank conflicts
- use constant memory
- Optimize kernel code
- redundant code elimination
- loop unrolling
- Optimize compute intensity
- unroll the parallel loop
Host-GPU Data Movement
- Avoid altogether
- Move outside of loops
- Better to move a whole array than subarray
- Update halo regions rather than whole array
- use GPU to move halo region to contiguous area?
- Use streams, overlap data / compute
- requires pinned memory
Occupancy
- How many simultaneously active warps / maximum (maximum is 24 or 32)
- Limits
- threads per multiprocessor
- thread blocks per multiprocessor
- register usage
- shared memory usage
- Low occupancy leads to low performance
- High occupancy does not guarantee high performance
Execution Configuration
- Execution configuration affects occupancy
- Want many threads per thread block
- multiple of 32
- 64, 128, 256
- Want many many thread blocks
Divergence
- Scalar threads executing in SIMD mode
- if( threadIdx%x <= 10 )then
foo = foo * 2
else
foo = 0
endif
- Each path taken
- do i = 1, threadIdx%x
a(threadIdx%x,i) = 0
enddo
- Only matters within a warp
Divergence
- Pad arrays to multiples of block size
- i = (blockIdx%x-1)*64 + threadIdx%x
- if( i <= N ) A(i) = ...
Global Memory
- **Stride-1, aligned accesses**
- address is aligned to mod(threadidx%x, 16)
- threadidx%x and threadidx%x+1 access consecutive addresses
- alignment critical for Compute Capability 1.0, 1.1
- **Using shared memory as data cache**
- Redundant data access within a thread
- Redundant data access across threads
- Stride-1 data access within a thread
Redundant access within a GPU Thread
```fortran
! threadidx%x from 1:64
! this thread block does 256 'i' iterations
ilo = (blockidx%x-1)*256
ihi = blockidx%x*256 - 1
...
do j = jlo, jhi
  do i = ilo+threadidx%x, ihi, 64
    A(i,j) = A(i,j) * B(i)
  enddo
enddo
```
Redundant access within a GPU Thread
```fortran
real,shared :: BB(256)
...
do ii = 0, 255, 64
BB(threadidx%x+ii) = B(ilo+threadidx%x+ii)
enddo
call syncthreads()
do j = jlo, jhi
do i = ilo+threadidx%x, ihi, 64
A(i,j) = A(i,j) * BB(i-ilo)
enddo
enddo
```
Redundant access across GPU Threads
```fortran
! threadidx%x from 1:64
i = (blockidx%x-1)*64 + threadidx%x
...
do j = jlo, jhi
A(i,j) = A(i,j) * B(j)
enddo
```
Redundant access across GPU Threads
```fortran
real, shared :: BB(64)
i = (blockidx%x-1)*64 + threadidx%x
...
do jb = jlo, jhi, 64
BB(threadidx%x) = B(jb+threadidx%x)
call syncthreads()
do j = jb, min(jhi,jb+63)
A(i,j) = A(i,j) * BB(j-jb+1)
enddo
enddo
```
Stride-1 Access within a GPU thread
```fortran
! threadIdx%x from 1:32
i = (blockidx%x-1)*32 + threadIdx%x
...
ix = indx(i)
do j = jlo, jhi
A(i,j) = A(i,j) * B(ix+j)
enddo
```
Stride-1 Access within a GPU thread
```
real, shared :: BB(33,32)
integer, shared :: IXX(32)
i = (blockidx%x-1)*32 + threadidx%x
...
ix = indx(i)
IXX(threadidx%x) = ix
call syncthreads()
do jb = jlo, jhi, 32
do j = 1, 32
BB(threadidx%x,j) = B(IXX(j)+threadidx%x)
enddo
call syncthreads()
do j = jb, min(jhi,jb+31)
A(i,j) = A(i,j) * BB(j,threadidx%x)
enddo
```
Shared Memory
- 16 memory banks
- Use threadidx%x in leading (stride-1) dimension
- Avoid stride of multiple of 16
- Shared memory also used to pass kernel arguments, affects occupancy
Unroll the Parallel Loop
- If thread ‘j’ and ‘j+1’ share data, where
- j is a parallel index
- j is not the stride-1 index
- Unroll two or more iterations of ‘j’ into the kernel
```fortran
! matrix multiply kernel, original version (the body of the kb loop is shown further below)
module mmulmod
contains
attributes(global) subroutine mmul(A, B, C, N, M, L)
   real, device :: A(N,L), B(L,M), C(N,M)
   integer, value :: N, M, L
   integer :: i, j, kb, k, tx, ty
   real, shared :: Ab(16,16), Bb(16,16)
   real :: Cij
   tx = threadIdx%x ; ty = threadIdx%y
   i = (blockIdx%x-1) * 16 + tx
   j = (blockIdx%y-1) * 16 + ty
   Cij = 0.0
   ! continued
```
```fortran
! unrolled kernel: two 'j' iterations per thread (continued below)
module mmulmod
contains
attributes(global) subroutine mmul(A, B, C, N, M, L)
   real, device :: A(N,L), B(L,M), C(N,M)
   integer, value :: N, M, L
   integer :: i, j1, j2, kb, k, tx, ty
   real, shared :: Ab(16,16), Bb(16,16)
   real :: Cij1, Cij2
   tx = threadidx%x ; ty = threadidx%y
   i = (blockidx%x-1) * 16 + tx
   j1 = (blockidx%y-1) * 32 + ty
   j2 = j1 + 16
   Cij1 = 0.0
   Cij2 = 0.0
   ! continued
```
```fortran
! original kernel, continued
   do kb = 1, L, 16
      Ab(tx,ty) = A(i,kb+ty-1)
      Bb(tx,ty) = B(kb+tx-1,j)
      call syncthreads()
      do k = 1, 16
         Cij = Cij + Ab(tx,k) * Bb(k,ty)
      enddo
      call syncthreads()
   enddo
   C(i,j) = Cij
end subroutine
end module
```
```fortran
! unrolled kernel, continued: Ab is loaded once per tile and reused for both j columns
   do kb = 1, L, 16
      Ab(tx,ty) = A(i,kb+ty-1)
      Bb(tx,ty) = B(kb+tx-1,j1)
      call syncthreads()
      do k = 1, 16
         Cij1 = Cij1 + Ab(tx,k) * Bb(k,ty)
      enddo
      call syncthreads()
      Bb(tx,ty) = B(kb+tx-1,j2)
      call syncthreads()
      do k = 1, 16
         Cij2 = Cij2 + Ab(tx,k) * Bb(k,ty)
      enddo
      call syncthreads()
   enddo
   C(i,j1) = Cij1
   C(i,j2) = Cij2
end subroutine
end module
```
Constant Memory
- Small, read-only, written by the host
- assignment or API
- Hardware cached
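A small sketch of the assignment route in CUDA Fortran (the module, kernel, and coefficient names are illustrative): the host writes a constant-memory module array by simple assignment, and kernels in the same module read it directly.

```fortran
module coef_mod
  use cudafor
  real, constant :: c(2)              ! small, read-only on the device, written by the host
contains
  attributes(global) subroutine apply(a, n)
    real, device :: a(*)
    integer, value :: n
    integer :: i
    i = (blockIdx%x-1)*blockDim%x + threadIdx%x
    if (i <= n) a(i) = c(1)*a(i) + c(2)   ! every thread reads the cached constants
  end subroutine
end module

program use_constant
  use cudafor
  use coef_mod
  implicit none
  integer, parameter :: n = 1024
  real, device :: a_d(n)
  a_d = 1.0
  c = (/ 2.0, 0.5 /)                  ! host assignment to the constant-memory array
  call apply<<<n/256, 256>>>(a_d, n)
end program
```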
Low-level Optimizations
- instruction count optimizations
- loop unrolling (watch memory access patterns)
- loop fusion
- minimize global memory accesses
- use scalar temps
- scalarizing arrays
- downsides:
- increased register usage
- spills to “local memory”
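A small sketch of the scalar-temp idea (illustrative names): accumulate into a register instead of re-reading and re-writing a global array element inside the loop, at the cost of one extra register per thread.

```fortran
module dotrow_mod
  use cudafor
contains
  ! c(i) = sum over k of a(i,k)*b(k); tmp keeps the running sum in a register
  attributes(global) subroutine dotrow(a, b, c, n)
    real, device :: a(n,n), b(n), c(n)
    integer, value :: n
    integer :: i, k
    real :: tmp
    i = (blockIdx%x-1)*blockDim%x + threadIdx%x
    if (i <= n) then
       tmp = 0.0                      ! scalar temp instead of updating c(i) in global memory
       do k = 1, n
          tmp = tmp + a(i,k) * b(k)
       enddo
       c(i) = tmp                     ! a single global store
    endif
  end subroutine
end module
```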
Coming CUDA Fortran Features
- Module allocatable device arrays
- directly accessible by kernel routines in that module
- Device array pointers
- pointer assignment on the host
- Assumed-shape argument arrays
Fermi vs Tesla

| Fermi | Tesla (GT200) |
|---|---|
| ECC memory | no ECC |
| double-precision rate = 1/2 single-precision | double-precision rate = 1/8 single-precision |
| two-level hardware data cache | no user-visible hardware data cache |
| constant memory cache | constant memory cache |
| 16/48 KB configurable shared memory | 16 KB shared memory |
| up to 16 concurrent kernels | 1 kernel at a time |
| 32 TP × 16 SM | 8 TP × 30 SM |
| unified address space | separate shared/local/global pointers |
| dynamic allocation (?) | allocation from host only |
| enhanced support for C++ | |
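To check which of these capabilities a particular device actually has, the CUDA runtime's device-query API is available from CUDA Fortran; a minimal sketch (the printed fields follow the cudaDeviceProp type as I understand it):

```fortran
program devprops
  use cudafor
  implicit none
  type(cudaDeviceProp) :: prop
  integer :: istat
  istat = cudaGetDeviceProperties(prop, 0)        ! query device 0
  print *, 'compute capability: ', prop%major, '.', prop%minor
  print *, 'shared memory per block (bytes): ', prop%sharedMemPerBlock
  print *, 'ECC enabled: ', prop%ECCEnabled
end program
```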
Copyright Notice
© Contents copyright 2009-2010, The Portland Group, Inc. This material may not be reproduced in any manner without the expressed written permission of The Portland Group.
Compositional Verification of Business Processes by Model–Checking
Luis E. Mendoza¹, Manuel I. Capel² and María Pérez¹
¹ Processes and Systems Department, Simón Bolívar University
P.O. box 89000, Baruta, Caracas 1080-A, Venezuela
² Software Engineering Department, University of Granada
ETSI Informatics and Telecommunication, 18071 Granada, Spain
Abstract. The work presented in this article aims to contribute to the verification of Enterprise Information Systems (EIS). We describe a Formal Compositional Verification Approach (FCVA) —based on Model–Checking (MC) techniques— applied to the verification of Business Process (BP) models represented by Business Process Modelling Notation (BPMN) diagrams. FCVA is compositional and thus allows the verification of a complex BP model to be carried out from the verification of its parts. FCVA, together with a proposed temporal semantics for BPMN, allows the expression of time–dependent constructs of the BP Task Models (BPTM) supported by an EIS. The interpretation of the BPMN graphical modelling entities into a formal specification language (CSP+T) allows us to use state–of–the–art MC tools to verify the behavioural part of BP models. A real–life example in the field of the Customer Relationship Management (CRM) business is presented to demonstrate the application of FCVA in a practical way.
1 Introduction
Enterprise Information Systems (EIS) manage enterprise business, apply strategic and economic decisions, and maintain communication with business partners. In this sense, an EIS implements cross–functional Business Processes (BPs), i.e., the set of ways in which management chooses to coordinate the work to achieve its (business) objectives and user goals, which transcend the boundaries between sales, marketing, manufacturing, and research and development. Therefore, an organization must previously have obtained, as a result of Business Process Modelling (BPM), the complete definition of the set of BPs that support the EIS. Due to the specific characteristics of BPs (people integration, business rules, business goals, events, information, and resources) [1], the validation of the BP Task Model (BPTM) is an extremely expensive and risky activity if it is delayed until the EIS deployment phase.
While the main goal of Business Process Modelling Notation (BPMN) [1] is to provide a readily understandable notation for all its users, the lack of a precise semantics for its modelling entities impedes rigorous analysis and reasoning about the models obtained [2]. To cope with this situation, we propose an instantiation of our compositional verification framework, called Formal Compositional Verification Approach (FCVA) [3], which uses MC techniques and makes it possible to verify a BPTM supported by an EIS using the formal semantics of a Communicating Sequential Processes (CSP)–based process calculus. We complement our FCVA [3] with a timed semantics of BPMN defined in terms of the Communicating Sequential Processes + Time (CSP+T) [4] formal specification language, which extends BPMN modelling entities with timing constraints in order to allow the expression of BPTM time–dependent constructs. By a sound interpretation of FCVA elements into Kripke Structures (KS) [5], it then becomes feasible to verify the behaviour of the global BP (i.e., the BPTM) from its local BP participants.
Different works address the verification and validation of BPs modelled with BPMN. An extended survey of recently proposed techniques for verifying BPMN models is presented in [6], together with a comparison of their motivations, methods, and logics. Differently from other research, our work aims to give a systemic, integrated vision of the specification, design and verification of BPTMs derived from BPs, by incorporating the use of MC tools for the specification and verification of BPTMs into the EIS development cycle.
The remainder of this paper is organised as follows. In section 2 short introductions to time semantics for BPMN modelling entities and to the Clocked Computation Tree Logic (CCTL) specification language are provided. In section 3 FCVA for BPMN verification is presented, followed by a formal description and validation of the compositional verification proposal. Section 4 describes the application to a BPM example related to the CRM business. Finally, in Section 5, conclusions are given and future work is described.
2 BPTM’s Behaviours in a Common Semantic Domain
Most temporal logics and other system description formalisms, used for reactive systems (as BPTM) specification, can be interpreted as KS. According to [5] the systems best suited to verification by MC are those that are easily modelled by (finite) automata, such as KS ones [5]. Accordingly, [7] states that translating formulae in temporal logics to automata is a standard approach for implementing MC. Therefore, in this paper we use Timed Büchi Automaton (TBA) because these are the simplest automata over infinite words [5] able to represent time regular processes [8].
2.1 BPTM Model
To obtain a complete description of the BPTM’s behaviour interpreted into CSP+T process terms, we apply the transformation rules that we briefly introduce below, which assume the semantics of the BPMN analysis entities given in [2] as the starting point for their definition. As a result of a mapping from BPMN [1] to CSP+T processes, each BPMN modelling entity (flow objects, connecting objects, and swimlanes) yields a syntactical sequential process term and specifies how to represent the entire participant’s behaviour, according to discrete timed events and sequences of events. Due to space limitations, Table 1 only shows a graphical example of some transformation rules used for obtaining CSP+T process terms from BPMN modelling entities. The complete rules
set is presented in [9]. We denote as \( \epsilon_x \) the invocation events of the BPMN modelling entities, \( S_x.ran.min \) and \( S_x.ran.max \) as the minimum and maximum time span of \( S_x \) activities, respectively, and \( stime \) and \( stime.ran \) as the time delay defined by \( timer \ start \) and \( timer \ intermediate \) events, respectively, according to BPMN [1]. Briefly explained, the transformation is performed by mapping: (1) every BPMN modelling entity to a prefixed CSP+T process term; (2) every discrete duration time to a CSP+T event–enabling interval; and (3) the external choice to alternative selections performed by the environment of each process is applied to ensure that all processes terminate at the end of the business process execution.
### Table 1. Some mapping rules from BPMN modelling entities to CSP+T terms.
<table>
<thead>
<tr>
<th>BPMN element</th>
<th>Description</th>
<th>CSP+T process</th>
</tr>
</thead>
<tbody>
<tr>
<td>( \star )</td>
<td>The start event corresponds to the CSP+T ( \star ) instantiation event and the ( v_e ) marker variable is used to save the occurrence time of event ( \star ).</td>
<td>( P(\text{start}) = (\star \oplus v_e \rightarrow \text{SKIP} ; P(\text{start})) ) ( \square (v_{end} \rightarrow \text{SKIP}) )</td>
</tr>
<tr>
<td>( e_1 \rightarrow e_2 \rightarrow S1 \rightarrow S2 )</td>
<td>The ( S2 ) activity begins when the ( e_1 ) event occurs and the invocation of ( S2 ) activity (i.e., the occurrence of ( e_2 ) event) must occur within the ( S1.ran.min, S1.ran.max ) time interval. The activity ( S1 ) come before activity ( S2 ).</td>
<td>( P(S1) = (v_{e_2} \oplus v_{\text{stime}} \rightarrow \text{SKIP} ; P(S1)) ) ( \square (v_{end} \rightarrow \text{SKIP}) )</td>
</tr>
<tr>
<td>( v_{\text{stime}} \rightarrow S1 \rightarrow S2 )</td>
<td>The ( \text{timer start} ) event establishes that the ( S1 ) activity must begin (i.e., the occurrence of ( e_{\text{start}} ) event), ( stime ) ran time units after the occurrence of ( \star ) instantiation event.</td>
<td>( P(\text{stime}) = (\star \oplus v_{\text{stime}} \rightarrow \text{SKIP} ; ) ) ( l(\text{stime.ran}, v_{\text{stime}}) \rightarrow \text{SKIP} ; v_{e_2} \rightarrow \text{SKIP} ; )(P(S1)) ) ( \square (v_{end} \rightarrow \text{SKIP}) )</td>
</tr>
<tr>
<td>( v_{\text{stime}} \rightarrow S1 \rightarrow S2 )</td>
<td>According to the ( \text{timer intermediate} ) event, the ( S2 ) activity must begin (i.e., the occurrence of ( e_{\text{start}} ) event), ( stime ) ran time units after the occurrence of ( v_{\text{stime}} ) event.</td>
<td>( P(\text{stime}) = (v_{e_2} \oplus v_{\text{stime}} \rightarrow \text{SKIP} ; ) ) ( l(\text{stime.ran}, v_{\text{stime}}) \rightarrow \text{SKIP} ; v_{e_2} \rightarrow \text{SKIP} ; )P(S1)) ) ( \square (v_{end} \rightarrow \text{SKIP}) )</td>
</tr>
<tr>
<td>( e_1 \rightarrow e_2 \rightarrow S1 \rightarrow S2 )</td>
<td>The ( S1 ) activity execution can be interrupted (i.e., the occurrence of ( e_{\text{abort}} ) event) at any time since its inception (i.e., the occurrence of ( e_{\text{start}} ) event) and until its total duration ends (i.e., within ( S1.ran.max ) time interval).</td>
<td>( P(S1) = (v_{e_2} \oplus v_{\text{stime}} \rightarrow \text{SKIP} ; ) ) ( l(S1.ran.max, v_{\text{stime}}) \rightarrow \text{SKIP} ; v_{\text{end}} \rightarrow \text{SKIP} ; )P(S1)) ) ( \square (v_{end} \rightarrow \text{SKIP}) )</td>
</tr>
</tbody>
</table>
#### 2.2 BPTM Properties
To specify the properties that the BPTM must exhibit, we use CCTL [10], an interval temporal logic that allows us to reason at the level of time intervals instead of instants; see [10] for more details. The algorithm described in [8] is used to construct a discrete TBA semantically equivalent to a CCTL formula \( \phi \). Afterwards, using the procedure described in [11], the TBAs of the BPTM properties described previously are transformed into CSP+T process terms. Thus, the expected behaviour of a BPTM is interpreted into a CSP+T process term \( P \), and the assertion \( P \preceq \phi \) denotes that \( P \) meets the specification \( \phi \), where \( \preceq \) represents that \( P \) simulates
\( \phi \) (the simulation assertion), meaning that any behaviour of \( \phi \) can be matched by a corresponding behaviour of \( P \) (but not necessarily vice versa). Consequently, by applying the rules in Table 1 and the simulation operator, we can reason and express the BPTM properties in the same specification language as the BPTM model.
3 Compositional Verification Approach
Our approach is based on the fact that the system \( C \) has been structured into several verified components working in parallel, \( C = \prod_{i=1}^{n} C_i \), where each component \( C_i \) satisfies the property \( \phi_i \), which represents the specification of the expected behaviour for the component. Our main goal here is to make possible the verification of the entire system’s behaviour from its verified components. In this sense,
**Definition 1 (Property compositionality).** A property \( \phi \) is compositional iff for any TBAs \( A_1, A_1', A_2 \) with \( \mathcal{L}(A_2) \cap \mathcal{L}(\phi) = \emptyset \) the following hold:
\[
(A_1 \models \phi) \Rightarrow (A_1 \parallel A_2 \models \phi) \quad \text{(1)}
\]
\[
((A_1 \subseteq A_1') \land (A_1' \models \phi)) \Rightarrow (A_1 \models \phi) \quad \text{(2)}
\]
Local properties are preserved by parallel composition when the labelling is disjoint:
**Lemma 1.** For two TBAs \( A_1, A_2 \) and properties \( \phi_1, \phi_2 \) with \( \Sigma_1 \cap \Omega_2 = \emptyset, \Sigma_2 \cap \Omega_1 = \emptyset \), \( \mathcal{L}(A_1) \cap \mathcal{L}(A_2) = \emptyset \) holds:
\[
((A_1 \models \phi_1) \land (A_2 \models \phi_2)) \Rightarrow (A_1 \parallel A_2 \models \phi_1 \land \phi_2) \quad \text{(3)}
\]
On the other hand, it is also a requirement that composition preserves refinement in the case of parallel composition:
**Lemma 2.** For two composable TBAs \( A_1, A_2 \), and any automaton \( A_2' \), the following holds:
\[
A_2 \subseteq A_2' \Rightarrow (A_1 \parallel A_2 \subseteq A_1 \parallel A_2'). \quad \text{(4)}
\]
Each component must also satisfy the “invariant” (\( \psi_i \)) expression which represents the behaviour of the other system components with respect to \( C_i \). The special symbol \( \neg \delta \) is used to denote that deadlock (i.e., a state without any outgoing transition) cannot be reached. The property \( \phi \) and the invariant \( \psi \) satisfied by the system \( C \) are obtained from the local properties \( \phi_i \) (i.e., \( \bigwedge_{i=1}^{n} \phi_i \Rightarrow \phi \)) and the local invariants \( \psi_i \) (i.e., \( \bigwedge_{i=1}^{n} \psi_i \Rightarrow \psi \)), respectively. As a result, we can obtain the complete verification of the system by using Theorem 1:
**Theorem 1 (System Compositional Verification).** Let the system \( C \) be structured into several components working in parallel, \( C = \prod_{i=1}^{n} C_i \). For a set of TBA\((C_i)\) describing the behaviour of components \( C_i \), properties \( \phi_i \), invariants \( \psi_i \), and deadlock \( \delta \), with \( \bigcap_{i=1}^{n} \Sigma_i = \emptyset \), \( \bigcap_{i=1}^{n} \Omega_i = \emptyset \), and \( \bigcap_{i=1}^{n} \mathcal{L}(TBA(C_i)) = \emptyset \), the following condition holds:
\[
TBA(C) \models (\phi \land \psi \land \neg \delta) \Leftrightarrow \bigwedge_{i=1}^{n} TBA(C_i) \models (\phi_i \land \psi_i) \land \neg \delta, \quad \text{(5)}
\]
where TBA\((C)\) = \( \prod_{i=1}^{n} TBA(C_i) \).
The practical application of assertion (5) includes (manually) performing an inductive satisfaction checking process on the range of the components number \((i : 1..n)\) of the system. The FDR2 [12] model checker can automate this proof.
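For instance, spelling out assertion (5) for the two-participant BPTM that is verified in Section 4 (taking \( C_1 = Cus \) and \( C_2 = Com \)) gives:

\[
TBA(Cus) \parallel TBA(Com) \models (\phi \land \psi \land \neg \delta)
\;\Leftrightarrow\;
\bigl(TBA(Cus) \models (\phi_{Cus} \land \psi_{Cus})\bigr) \land \bigl(TBA(Com) \models (\phi_{Com} \land \psi_{Com})\bigr) \land \neg \delta
\]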
Based on previous concepts and ideas, we propose a possible instantiation of our conceptual scheme called FCVA [3], as shown in Fig. 1, to specify and verify BPTM derived from BPs supported by EIS. The rationale of FCVA instantiation is that the behavioural correctness of local BPs can be individually verified, in isolation, based on the well-defined communication behaviour specified by their message flows, and verification of the global BP behaviour performed using the results of the verification of local BPs. Our instantiation uses the CSP+T process calculus, which has a simple but powerful form of composition given by concurrent composition and hiding operators.
**BPTM Modelling.** Firstly, the complete description of the BPTM’s behaviour is interpreted into a set of CSP+T process terms \(T(C_i)\) by using the proposed time semantics for BPMN modelling entities introduced in section 2.1.
**BPTM Behaviour Specification.** Then, requirements and temporal constraints that the BPTM must fulfill are specified in CCTL, which is based on the interval structure and time-annotated automata [10]. Afterwards, these properties are expressed by CSP+T process terms \(T(\phi_i), T(\psi_i), T(\neg \delta)\).
**Verification.** Finally, by performing the following steps, we proceed to verify the BPTM behaviour:
1. Firstly, the local process \(T(C_i)\) representing the local BPs are model checked against the set of process terms \(T(\phi_i), T(\psi_i), T(\neg \delta)\). According to the trace and failure semantics of CSP-based algebra, we proceed to verify:
\[
T(\phi_i) \sqsubseteq_T T(C_i) \land T(\psi_i) \sqsubseteq_T T(C_i) \land T(\neg \delta) \sqsubseteq_T T(C_i)
\]
\[
T(\phi_i) \sqsubseteq_F T(C_i) \land T(\psi_i) \sqsubseteq_F T(C_i) \land T(\neg \delta) \sqsubseteq_F T(C_i)
\]
2. Secondly, we obtain the verification of local BPs correctness, according to the following assertions:
---
**Fig. 1.** Integrated view of compositional verification for BPTM.
– Related to consideration of safety issues:
\[
\forall t \in \text{traces}(T(\phi_i)) \exists t' \in \text{traces}(T(C_i)) : t' \Rightarrow \phi_i \Leftrightarrow T(C_i) \models \phi_i
\]
\[
\forall t \in \text{traces}(T(\psi_i)) \exists t' \in \text{traces}(T(C_i)) : t' \Rightarrow \psi_i \Leftrightarrow T(C_i) \models \psi_i
\]
\[
\forall t \in \text{traces}(T(\neg \delta)) \exists t' \in \text{traces}(T(C_i)) : t' \Rightarrow \neg \delta \Leftrightarrow T(C_i) \models \neg \delta
\]
– Related to consideration of liveness issues:
\[
\forall (t, X) \in S^F[T(\phi_i)] \exists (t', X) \in S^F[T(C_i)] : (t', X) \Rightarrow \phi_i \Leftrightarrow T(C_i) \models \phi_i
\]
\[
\forall (t, X) \in S^F[T(\psi_i)] \exists (t', X) \in S^F[T(C_i)] : (t', X) \Rightarrow \psi_i \Leftrightarrow T(C_i) \models \psi_i
\]
\[
\forall (t, X) \in S^F[T(\neg \delta)] \exists (t', X) \in S^F[T(C_i)] : (t', X) \Rightarrow \neg \delta \Leftrightarrow T(C_i) \models \neg \delta
\]
3. Finally, by the application of Theorem 1 we obtain the complete verification of the BPTM behaviour \(T(C)\), according to the assertion (5) instantiated for CSP+T process terms \(T(C) = \|_{1..n} T(C_i)\).
4 Example of Application
To show the applicability of our proposal, it was applied to a BPM enterprise project related to the CRM business. We only show an example of application of the timed semantics proposed for BPMN, and we only focus on the verification of one CRM BP. We chose to work with the Product/Service Sell BP, due to its importance to the CRM strategy. The information required to carry out formal reasoning about the CRM participant collaboration is displayed in the Product/Service Sell BPD shown in Fig. 2, which allows a Company to perform the activities associated with selling a Product/Service requested by a Customer. As shown in Fig. 2, the BP involves close collaboration between the participants, i.e., synchronization of the activities involved in message flows.
Fig. 2. BPD of the Product/service Sell BP.
4.1 BPTM Definition and Description
To obtain the specification of the Product/Service Sell BPD in CSP+T, according to the proposal briefly described in section 2.1, we define the sets CU and CO, for indexing the processes mapped to the modelling entities of Customer (i.e., Cus) and Company (i.e., Com) participants, respectively (see Fig. 2):
\[
\begin{align*}
CU &= \{\text{start.1}, cu_1, cu_2, cu_3, cu_4, cu_5, cu_6, \text{xgate.1}, \text{end.1}, \text{abort.1}\} \\
CO &= \{\text{start.2}, co_1, co_2, co_3, co_4, co_5, co_6, co_7, co_8, \text{agate.1}, \text{agate.2}, \text{end.2}, \text{abort.2}\}
\end{align*}
\]
\[
\begin{align*}
s&= \{\text{fin.1, abort.1}\} \\
R &= \{\text{fin.2, abort.2}\}
\end{align*}
\]
where for each \(i \in CU\) and \(j \in CO\), the processes \(P(i)\) and \(P(j)\), respectively, are defined next. Due to space limitations, we will only present some of the processes that make up the Cus and Com, to illustrate the application of the proposed semantics.\(^1\)
\[
P(\text{start.1}) = (0.s \rightarrow \text{init. Cus } [\text{cu}_1] \rightarrow \text{SKIP}) \text{\{} \text{fin.1} \} \rightarrow \text{SKIP}
\]
\[
P(\text{cu}_1) = (\text{init. Com} \cdot \text{cu}_3 \times \text{cu}_3 \rightarrow \text{SKIP} \triangleright \text{starts. Com} \cdot \text{cu}_3 \rightarrow \\
\text{msg. cu}_3, \text{cu}_3 \rightarrow \text{SKIP} \triangleright \text{msg. cu}_5, \text{cu}_3 \rightarrow \text{SKIP} \triangleright \\
\text{msg. cu}_6, \text{cu}_3 \rightarrow \text{SKIP} \triangleright \text{fin.2} \rightarrow \text{SKIP}
\]
Finally, the collaboration between the participants Customer and Company is the parallel composition of processes Cus and Com, as it is denoted by the PSS CSP+T process term, which conforms the BPTM of the Product/Service Sell BP to be verified.
\[
PSS = (\text{Cus} \; {}_{\alpha Cus}\|{}_{\alpha Com} \; \text{Com}) \setminus \{msg\}.
\]
4.2 Properties Definition
We will work with the following property, which is connected with the obligation of receiving and obtaining the Product/Service delivery confirmation once the Customer has initiated the communication with the Company. As we will proceed with the verification of the BPTM behaviour (previously denoted as PSS) from the sub-processes that make it up (i.e., Cus and Com), we must define the properties that each participant must fulfil, which describe the expected execution sequence of BPMN modelling entities when they execute the partial processes for which each is responsible. The participants must execute all their activities as indicated in the workflow in order to achieve the functioning of the global process. The partial properties are defined below.
\[
\phi_{\text{Cus}} = \text{AG}_{\text{cu.6}}(\text{Start.1} \rightarrow A[\text{cu}_1U_{[\text{cu}_1, \text{cu}_2]} \land A[\text{cu}_2U_{[\text{cu}_2, \text{cu}_3]} \land A[\text{cu}_3U_{[\text{cu}_3, \text{cu}_4]} \land A[\text{cu}_4U_{[\text{cu}_4, \text{cu}_5]} \land A[\text{cu}_5U_{[\text{cu}_5, \text{cu}_6]} \land A[\text{cu}_6U_{[\text{cu}_6, \text{End.1}]}]]]].)
\]
\(^1\) Here, duration times are expressed in seconds, according to the function sec defined in [2]
φ_{Com} = AG_{a,b} [\text{Start.2} \rightarrow A [\text{cos1} U [a+1,b-8] (\text{cos2} \land A [\text{cos2} U [a+2,b-7] (\text{cos3} \land A [\text{cos3} U [a+3,b-6] ... msg.cus5.out, msg.cus6.out}]
Σ_{Com} = \{msg.cus1.out, msg.cus2.out, msg.cus3.out, msg.cus3.can, msg.cus8.out\}
Using the procedure described in [11], we obtained the processes \( T(\phi_{Com}) \) and \( T(\phi_{Cus}) \), which are the operational interpretation of the CCTL formulas previously specified. These process terms describe the expected behaviour of the processes \( Cus \) and \( Com \) that make up the BPTM, according to the CSP+T process calculus.
### 4.3 Verifying the Collaboration
According to our approach, to perform the verification of the BPTM we must first verify that the processes \( Cus \) and \( Com \) fulfil the properties specified in section 4.2. Then, according to the semantic domain of the CSP calculus, it can be checked that the following refinement assertions are fulfilled:
\[
T(\phi_{Cus}) \sqsubseteq_T Cus, T(\phi_{Com}) \sqsubseteq_T Com, T(\phi_{Cus}) \sqsubseteq_F Cus, T(\phi_{Com}) \sqsubseteq_F Com
\]
(6)
To verify the above assertions, we work in the semantic model of CSP without temporal operators since, according to timewise refinement, untimed safety and liveness properties of a timed system should be verifiable in the untimed model and can later be used in the timed analysis. Furthermore, this allows us to use the FDR2 tool to carry out the verification of the processes that represent the participants. In the sequel we use the CSP process terms \( UT(\phi_{Cus}) \) and \( UT(\phi_{Com}) \), which correspond to the expected untimed behaviour of the untimed processes \( UT(Cus) \) and \( UT(Com) \), respectively. As can be observed in the FDR2 screenshot in Fig. 3, the untimed CSP model of each participant's local BP, COMPANY (i.e., \( UT(Com) \)) and CUSTOMER (i.e., \( UT(Cus) \)), of the BPTM for the Product/Service Sell BP satisfies its untimed expected behaviour, \( COMP \) (i.e., \( UT(\phi_{Com}) \)) and \( CUST \) (i.e., \( UT(\phi_{Cus}) \)), respectively (see the check marks in rows one and two). Thus, we obtain that the behaviour of the \( Cus \) and \( Com \) process terms is correct; i.e., all timed behaviours of the CSP+T process terms are consistent with their descriptions, and the assertions in (6) are true.
According to assertion (5) (see section 3), to prove the correctness of the BPTM of the Product/Service Sell BP w.r.t. its expected behaviour, it must be demonstrated that:
\[
\text{PSS} \models \phi_{PSS} \iff (\text{Cus} \; {}_{\alpha Cus}\|{}_{\alpha Com} \; \text{Com}) \setminus \{msg\} \models \phi_{Cus} \land \phi_{Com}
\]
We have previously verified with FDR2 that:
\[
\text{Cus} \models \phi_{Cus} \text{ and } \text{Com} \models \phi_{Com}
\]
We must determine whether the \( Cus \) and \( Com \) local BPs are “composable”. Thus, we must verify that they fulfil the following two conditions:
1. The input signals (\( \Omega_{Cus} \) and \( \Omega_{Com} \)) and the output signals (\( \Sigma_{Cus} \) and \( \Sigma_{Com} \)) of both local BPs are disjoint, as can be seen below:
\[
Σ_{Cus} \cap Σ_{Com} = \emptyset
\]
(7)
\[
Σ_{Cus} = \{msg.cus1.out, msg.cus2.out, msg.cus3.out, msg.cus4.out, msg.cus5.out, msg.cus6.out\}
\]
\[
Σ_{Com} = \{msg.com1.out, msg.com2.out, msg.com3.out, msg.com4.out, msg.com5.out, msg.com6.out, msg.com7.out, msg.com8.out\}
\]
Ω\text{Cus} \cap \Omega\text{Com} = \emptyset \quad (8)
Ω\text{Cus} = \{\text{msg.cus}_1\text{.in}, \text{msg.cus}_1\text{.last}, \text{msg.cus}_2\text{.in}, \text{msg.cus}_2\text{.last}, \text{msg.cancel}\text{.can}, \\
\text{msg.cus}_5\text{.in}, \text{msg.cus}_5\text{.last}, \text{msg.cus}_6\text{.in}, \text{msg.cus}_6\text{.last}\}
Ω\text{Com} = \{\text{msg.co}_1\text{.in}, \text{msg.co}_1\text{.last}, \text{msg.co}_2\text{.in}, \text{msg.co}_2\text{.last}, \text{msg.co}_3\text{.in}, \\
\text{msg.co}_5\text{.last}, \text{msg.co}_8\text{.in}, \text{msg.co}_8\text{.last}\}
2. The labelling sets of both components, \(\mathcal{L}(\text{Cus})\) and \(\mathcal{L}(\text{Com})\), are disjoint, which can also be verified as follows:
\[\mathcal{L}(\text{Cus}) \cap \mathcal{L}(\text{Com}) = \emptyset\] \quad (9)
\[
\mathcal{L}(\text{Cus}) = \{\text{start.1, cus1, cus2, cus3, cus4, cus5, cus6, xgate.1, end.1, abort.1}\}
\]
\[
\mathcal{L}(\text{Com}) = \{\text{start.2, co1, co2, co21, co3, co4, co5, co6, co7, co8, agate.1, agate.2, end.2, abort.2}\}
\]
Having verified that assertions (7), (8), and (9) are true, we conclude that \(\text{Cus}\) and \(\text{Com}\) are “composable”. By Theorem 1 (see section 3), we have:
\[(\text{Cus} \; {}_{\alpha Cus}\|{}_{\alpha Com} \; \text{Com}) \setminus \{msg\} \models \phi_{Cus} \land \phi_{Com}\]
and because
\[\text{PSS} = (\text{Cus} \; {}_{\alpha Cus}\|{}_{\alpha Com} \; \text{Com}) \setminus \{msg\} \quad \text{and} \quad \phi_{PSS} = \phi_{Cus} \land \phi_{Com},\]
we have
\[\text{PSS} \models \phi_{PSS}\]
Finally, we have obtained the verification of the BPTM corresponding to the \textit{Product/Service Sell} BP from its verified local BPs, Customer and Company.
5 Conclusions
In this paper we have presented and validated FCVA for compositional software verification from independently verified individual components, and its instantiation to specify and verify the BPTM derived from BPs supported by an EIS. The local BPs are
modelled as CSP+T process terms, since CSP+T supports syntactical composition of process terms through the concurrent composition operator. A timed semantics of BPMN defined in terms of the CSP+T formal specification language is also presented to complement FCVA; it allows us to detail the response times of activities and tasks, temporal constraints referring to task communication and collaboration, and the valid time spans that capture exception flows, according to the expected behaviour of BPs. We have shown the value and practicality of our approach by applying it to a real–life example in the field of CRM with timed collaboration requirements. Thus, the complete BPTM, derived from its core participants, can also be proved correct by means of the formal language CSP+T, which allows local verification results for CSP+T syntactical terms —representing individual local BPs— to be exported into the verification of the entire global BP, obtained as a concurrent composition of process terms. MC was used by passing the CSP+T terms through FDR2 to prove the correctness of global BPs.
Future and ongoing work will focus on applying FCVA and the proposed timed semantics of BPMN to further BPTM verification case studies, on in–depth research into the verification of these specifications, and on obtaining automatic tool support for BPM by using state–of–the–art verification tools.
References
Compositional Schedulability Analysis of An Avionics System Using UPPAAL
Abdeldjalil Boudjadar, Jin Hyun Kim, Kim G. Larsen, Ulrik Nyman
Institute of Computer Science, Aalborg University, Denmark
Abstract—We propose a compositional framework for analyzing the schedulability of hierarchical scheduling systems. The framework is realized using Parameterized Stopwatch Automata to describe tasks, whereas the schedulability analysis is performed using UPPAAL. The concrete behavior of each periodic preemptive task is given as a list of timed actions, to which resources are assigned by the SIRAP protocol. Our framework is reconfigurable: the hierarchical structure, the scheduling policies, the concrete task behavior and the shared resources can all be reconfigured. Finally, we use our framework to analyze the schedulability of a real-time avionics system.
Keywords—Hierarchical scheduling systems, Parameterized stopwatch automata, Compositional analysis, Uppaal.
I. INTRODUCTION
In the area of real-time embedded systems, such as avionics and automotive, it is essential to ensure the continuously correct behavior of such systems. Avionics and automotive systems consist of both safety-critical and non-safety-critical features, which are implemented in components that might share resources (e.g. processors). Resource utilization represents a common challenge for both academics and practitioners, and thus it is important to have an efficient and reliable scheduling policy for the individual parts of the system. Scheduling is a widely used mechanism for guaranteeing that the different components of a system will be provided with the correct amounts of resources.
A scheduling system consists of a set of concurrent tasks (processes) competing for resources according to a scheduling policy. Each task has a set of timing requirements to fulfill. A hierarchical scheduling system consists of multiple scheduling systems in a hierarchical structure. A scheduling system is said to be schedulable if all its tasks achieve their jobs without missing any deadline.
Compositional analysis has been introduced [7], [12] as a key model-checking technique to deal with the state-space explosion caused by the parallel composition of components. In this paper, we propose a model-based approach for analyzing the schedulability of hierarchical scheduling systems. We profit from the technological advances made in the area of model checking to analyze the schedulability of real-time systems. While schedulability is a liveness property, it can be checked in UPPAAL as a reachability property. In fact, this is done by adding an Error state to the behavior of each task; such a state is immediately reachable from any other state of the given task once the deadline is missed.
The research presented in this paper has been partially supported by EU Artemis Projects CRAFTERS and MBAT.
Our framework is implemented using parameterized stopwatch automata models. To enable and manage resource sharing between tasks, we use the SIRAP (Subsystem Integration and Resource Allocation Policy) protocol [4]. System tasks are instances of the same timed automaton with different input parameters. A special parameter of the task model is a list of timed actions [8], specifying the concrete behavior of the given task. This list includes abstract computation steps, locking and unlocking resources. Fig. 1 summarizes our approach, where the system aspects are separately specified in three profiles: timing requirements, resource sharing and system architecture. This separation of concerns makes our framework reconfigurable and flexible, in the sense that updating one profile does not necessarily affect the other two profiles [13].
Thanks to the parameterization, the framework can easily be instantiated for a specific hierarchical scheduling application. Similarly, each scheduling policy (e.g. EDF: Earliest Deadline First, FPS: Fixed Priority Scheduling, RM: Rate Monotonic) is separately modeled and can be instantiated for any component.
We analyze the model in a compositional manner, so that the schedulability of each component is analyzed together with the interface specifications of the level directly below it. In this analysis, we non-deterministically supply the required resources of each component, i.e. each component is guaranteed to be provided its required resources for each period. This fact is viewed by the component entities as a contract by which the component has to supply the required resources, provided by the component parent level, to its sub entities. The main contribution of this paper combines:
- a compositional analysis approach where the schedulability of a system relies on the recursive schedulability analysis of its individual subsystems.
- a reconfigurable schedulability framework where a system’s hierarchical structure, scheduling policies, concrete task behavior and shared resources can all be reconfigured.
Figure 2. Example of hierarchical scheduling system.
The rest of the paper is organized as follows: Section II is an informal description of our compositional analysis technique using a running example. Section III includes both the background and the modeling theory of hierarchical scheduling systems. In section IV, we give the UPPAAL models of our framework where we consider concrete behavior of tasks. Moreover, we show how the compositional analysis can be applied on the models using the UPPAAL verification engine. Section V shows the applicability of our framework, where we analyze the schedulability of an avionics system. Section VI introduces related work. Finally, section VII concludes our paper and outlines the future work.
II. COMPOSITIONAL SCHEDULABILITY ANALYSIS
In this paper, we structure our system model as a set of hierarchical components. Each component, in turn, is the parallel composition of a set of entities (components or tasks) together with a local scheduler. Namely, each component is specified with a period (prd), a budget (budget) stating the execution time that the component should be provided with, and a scheduling policy (s) to manage the CPU allocation to the component child entities. The real-time interface of a component consists of prd and budget.
A parent component treats the real-time interface of each one of its child components as a single task with the given real-time interface. The component supplies its child entities with CPU and resource allocation according to their real-time interfaces. The analysis of a component (scheduling unit) consists of checking that its child entities can be scheduled within the component budget according to the component scheduling policy. A component can be also parameterized by a set of typed resources (R) which serve as component local resources. One can remark that the CPU can be managed by any scheduling policy s, whereas the sharing of the other resources will be managed by SIRAP.
Tasks represent the concrete behavior of the system. They are parameterized with period (prd), execution time (et), deadline (d), priority (prio) and preemption (p). The execution time (et) specifies the CPU usage time required by the task execution for each period (prd). Deadline parameter (d) represents the latest point in time at which the task execution must be done. The parameter prio specifies the user priority associated to the task. Finally, p is a Boolean flag stating whether or not the task is preemptive. The task behavior is a sequence of timed actions consuming CPU time and resources. Moreover, task and component parameters prd, budget and et can be single values or time intervals.
An example of a hierarchical scheduling system is depicted in Fig. 2. For the sake of simplicity, we omit task deadlines and consider them the same as periods. Moreover, we only consider single parameter values instead of time intervals.
In this example, the top level System schedules Component1 and Component2 with the EDF scheduling algorithm. The components are viewed by the top level System as tasks having timing requirements. Component1, respectively Component2, has the interface (100, 37), respectively (70, 25), as period and execution time. The system shown through this example is schedulable if each component, including the top level, is schedulable. Thus, for the given timing requirements Component1 and Component2 should be schedulable by the top level System according to the EDF scheduling policy. The tasks task1 and task2 should be schedulable, with respect to the timing requirement of Component1 (100, 37), also under the EDF scheduling policy. Similarly, task3, task4 and task5 should be schedulable, with respect to the timing requirements of Component2, under the RM scheduling policy.
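As a quick sanity check of the top level only (a standard uniprocessor EDF utilization argument, not part of the UPPAAL analysis itself): with deadlines equal to periods, the two component interfaces are EDF-schedulable because their total utilization does not exceed one,

\[
U = \frac{37}{100} + \frac{25}{70} \approx 0.37 + 0.36 = 0.73 \le 1 .
\]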
For a given system structure, we can have many different system configurations. A system configuration consists of an instantiation of the model where each parameter has a specific value. Fig. 2 shows one such instantiation.
In order to design a framework that scales well for the analysis of larger hierarchical scheduling systems, we have decided to use a compositional approach [3], [6]. Fig. 3 shows how the scheduling system, depicted in Fig. 2, is analyzed using three independent analysis steps. These steps can be performed in any order.
The schedulability of each component, including the top level, is analyzed together with the interface specifications of the level directly below it. Accordingly, we never analyze the whole hierarchy at once. In Fig. 3, the analysis process A consists of checking whether the two components Component1 and Component2 are schedulable under the scheduling policy
EDF. In this analysis step, we only consider the interfaces of components in the form of their execution-time (budget) and period, so that we consider the component as an abstract task when performing the schedulability analysis of the level above it. In this way, we consider the component-composition problem similarly to [19] but using a non-deterministic supplier model for the interfaces. When performing an analysis step like A1, the resource supplier is not part of the analysis. In order to handle this, we add a non-deterministic supplier to the model. The supplier will guarantee to provide the amount of execution time, specified in the interface of Component1, before the end of the component period. We check all possible ways in which CPU and resources can be supplied to the subsystem in A1. The supplier of each component provides CPU resource to the child entities of that component in a non-deterministic way. During the analysis of A1, the supplier non-deterministically decides to start or stop supplying, while still guaranteeing to provide the required amount to its sub entities before the end of the period. The analysis A2 is performed in the same way as A1.
Our compositional analysis approach results in an over-approximation, i.e. when performing the analysis of a subsystem, we over-approximate the behavior of the rest of the system. This can result in specific hierarchical scheduling systems that would be schedulable if one considered the entire system at once, but that are not schedulable using our compositional approach. We consider this a design choice which ensures separation of concerns, meaning that small changes to one part of the system do not affect the behavior of other components. In this way, the design of the system is more stable, which in turn leads to predictable system behavior. This over-approximation, which is used as a design choice, should not be confused with the over-approximation used in the verification algorithm inside the UPPAAL verification engine.
Thanks to the parameterization of system entities, scheduling policies, preemptiveness, execution times, periods and budgets can all easily be changed. In order to estimate the performance and schedulability of our running example, we have evaluated a number of different configurations of the system. This allows us to choose the best of the evaluated configurations.
III. BACKGROUND AND THEORY
Hierarchical scheduling systems are structured as one or more components running on the same execution platform. Each component, in turn, consists of a set of entities that can be developed independently and a local scheduler. The component entities are known as the component workload and are either components or tasks. The execution platform we consider in our framework is a single processor (CPU). We specify the behavior of each task by a sequence of timed actions (computation steps, input, output, etc.) that use the CPU and resources. The CPU resource is arbitrated by different scheduling policies such as EDF, RM and FPS, whereas resource sharing is managed by a resource sharing protocol.
Fig. 1 summarizes our approach. Information on the scheduling requirements of the system is combined with the hierarchical structure of the system together with a detailed description of the tasks' behavior. A timed action can be specified to execute on a specific piece of hardware such as the CPU, Input or Output units. All of this information is used as parameters for the Stopwatch Automata templates that are part of the framework. Once a specific instance of the framework has been created, its schedulability can be checked compositionally using UPPAAL.
The isolation of components in hierarchical scheduling systems and the separation of profiles in our framework have the advantage of making systems flexible, where components can be reused, upgraded and analyzed individually.
A. Resource Sharing in Hierarchical Scheduling Systems
The limitation of resources is a strong factor in any software system, because resources cannot simply be duplicated due to their cost. The concurrent processes of a system therefore compete for access to resources in order to perform their jobs, and only one process is allowed to use a resource at a time. The mechanism that ensures that only one process uses a resource at a time is known as mutual exclusion. However, in hierarchical scheduling systems the classical mutual exclusion mechanisms cannot operate fairly because of the hierarchy. Resource sharing protocols have been designed to properly share (non-CPU) resources between system tasks when the system architecture is hierarchical. Some popular resource sharing protocols are the Priority Inheritance Protocol (PIP) [16], the Priority Ceiling Protocol (PCP) [15], the Stack Resource Policy (SRP) [2], and the Subsystem Integration and Resource Allocation Policy (SIRAP) [4]. Roughly speaking, a resource sharing protocol for hierarchical systems plays, for shared resources, the same role that the local schedulers of the components play for the CPU.
Due to the hierarchy, we have chosen the SIRAP protocol [4] to manage resource sharing in our framework. SIRAP has been developed as a way to integrate different subsystems, endowed with different scheduling policies, into one hierarchical scheduling system in the presence of shared resources. Subsystems can be isolated from each other, even though they share mutually exclusive resources, for compositional verification, validation and unit testing.
B. Modeling Theory
A task has a concrete behavior performing a sequence of timed actions. Each timed action can either be a computation step (Compute), access or release of a shared resource (Lock, Unlock) or particular statements marking the end of the period (Pend) or the end of the task execution (End).
Definition 1 (Timed action): Given a set of action names Acts = \{Compute, Lock, Unlock, Pend, End\}, a CPU and a set of resources R, a timed action A is a one-step computation given by the tuple \( \langle Act, Proc, BCET, WCET \rangle \) where:
- \( Act \in Acts \) is the action name,
- \( Proc \subseteq \{CPU\} \cup R \) specifies the identifiers of the processor and resources that the timed action \( A \) requires for its execution,
- \( BCET \) and \( WCET \) are respectively the best-case and worst-case execution times.
By \( \mathcal{A} \) we denote the set of all timed actions. In fact, the CPU and resources can be viewed as a multi-core execution platform. Likewise, we define the behavior \( B \) of a task as a transition system \( (L, l^0, \rightarrow) \) specifying the sequence of timed actions performed by that task, where \( L \) is a set of states, \( l^0 \in L \) is the initial state and \( \rightarrow \subseteq L \times \mathcal{A} \times L \) is the transition relation. At the semantic level, states can be interpreted as valuations of the task variables together with the state of each task (ready, waiting, preempted, done, etc.). The behavior of a component is given by the parallel composition of the transition systems of its nested tasks.
**Definition 2 (Task structure):** A task \( T \) is given by \( \langle \text{Prd}, \text{BCET}, \text{WCET}, \text{Pri}, B, Dln \rangle \) where \( \text{Prd} \) is the task period, BCET and WCET are respectively best case and worst case execution times of \( T \), \( \text{Pri} \) is the priority level associated to task \( T \), \( B \) is the task behavior stated above and \( Dln \) is the deadline. Therefore, the task specification is given by an interface \( \langle \text{Prd}, \text{Budget}, \text{Pri}, B, Dln \rangle \) stating the time constraints, a behavior \( B \) expressed by a sequence of timed actions and a priority \( \text{Pri} \) that will be applied for each timed action of the task in question.
Roughly speaking, a component is given by an interface stating its timing requirements and a local policy for scheduling its nested entities (workload). The interface of a component \( C' \) can be viewed by its parent component \( C \) as resource requirements that must be supplied by \( C \) to \( C' \), and it is viewed by the child entities of \( C' \) as a contract that the component \( C' \) will provide the amount of resources specified in its interface to its workload. For the sake of simplicity, we do not consider local resources for each component, i.e. all resources are global and shared by all of the system components.
**Definition 3 (Component):** A component \( C \) is a tuple \( \langle \text{Prd}, \text{Budget}, \text{Pri}, s, (e_1, .., e_n) \rangle \) where:
- \( \text{Prd} \) and \( \text{Pri} \) are the same as for tasks,
- \( \text{Budget} \) is the amount of CPU time that the component guarantees to provide to its workload,
- \( s \) is the local scheduling policy of the component (e.g. EDF, RM or FPS),
- \( (e_1, .., e_n) \) are component entities (workload), either tasks or other components.
Similarly, a system is the top-level component, which has no timing requirements \( \langle \text{Prd}, \text{Budget}, \text{Pri} \rangle \) of its own. We emphasize the fact that our framework can be instantiated for any combination of scheduling algorithms.
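Read operationally, Definitions 1-3 are plain parameter records. The following Java sketch (all names are ours, not the framework's API) shows one possible encoding of a configuration before it is instantiated as PSA templates; it is only meant to make the structure of the definitions concrete.

```java
import java.util.List;

// Illustrative encoding of Definitions 1-3; names are ours, not the framework's.
enum Action { COMPUTE, LOCK, UNLOCK, PEND, END }
enum Policy { EDF, RM, FPS }

interface Entity {}   // a workload entity is either a task or a nested component

// Definition 1: one computation step using the CPU and/or shared resources.
record TimedAction(Action act, List<String> procs, int bcet, int wcet) {}

// Definition 2: timing attributes plus a concrete behavior (sequence of timed actions).
record Task(int period, int bcet, int wcet, int priority, int deadline,
            List<TimedAction> behavior) implements Entity {}

// Definition 3: an interface (period, budget, priority), a local policy and a workload.
record Component(int period, int budget, int priority, Policy policy,
                 List<Entity> workload) implements Entity {}

class ExampleConfiguration {
    public static void main(String[] args) {
        // Component1 of the running example: interface (100, 37), local EDF scheduling.
        // The priority value is a placeholder and the workload is left empty here.
        Component component1 = new Component(100, 37, 0, Policy.EDF, List.of());
        System.out.println(component1);
    }
}
```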
**IV. UPPAAL Modeling and Analysis**
The UPPAAL verification suite provides both symbolic and statistical model checking (SMC). The models which in practice can be analyzed statistically, using the UPPAAL SMC verification engine, are larger and can contain more features. Stopwatches [6] are clocks that can be stopped and resumed without a reset. They are very practical to measure the execution time of preemptable tasks. This section gathers the Parameterized Stopwatch Automata (PSA) models of our framework, as well as the UPPAAL analysis. Due to space limitations, we only explain important features.
A. PSA Resource Model
The hierarchical scheduling system structure is a set of scheduling components, each of which includes a single scheduling algorithm and a set of entities (tasks or components). To analyze a single component in a compositional manner, it is necessary to consider how that component can be interrupted by the other concurrent components of the same system. However, it is hard to capture the interrupting behavior of the other components that influence the component under analysis. For this reason, we introduce a non-deterministic supplier to model all scenarios under which the component under analysis can run. This non-determinism simulates the influence of the other system components on the execution of the component under analysis. The scheduling policy within the component then allocates the CPU resource to tasks. The supplier also abstracts the possibility that a task from another component of the system (not part of the current analysis step) preempts the execution of tasks of the current component.
Fig. 4 shows the PSA model of the supplier. \( \text{supplying}_\text{time[supid]} \) is a stopwatch that measures the CPU time provided by the supplier during each period, so it only progresses when the supplier is at location Supplying. The supplier keeps traveling between locations Supplying and NotSupplying while the budget has not been fully provided (\( \text{supplying}_\text{time[supid]} \leq \text{sup[supid]}.\text{budget} \)) and the slack time (\( \text{curTime} \leq \text{sup[supid]}.\text{budget} - \text{supplying}_\text{time[supid]} \)) has not expired, until the component budget is fully provided (\( \text{supplying}_\text{time[supid]} \geq \text{sup[supid]}.\text{budget} \)), after which it starts a new period from location Done.
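To make the supplier's contract concrete, here is a small, purely illustrative Java simulation (ours, not the PSA template itself) that generates one possible supply pattern for Component1's interface (100, 37): supply is switched on and off arbitrarily, but once the slack is exhausted the supplier is forced to keep supplying so that the whole budget is delivered before the period ends.

```java
import java.util.Random;

/**
 * Purely illustrative simulation of the supplier's contract (not the PSA template):
 * within every period, `budget` time units are handed out in arbitrary chunks,
 * but once the slack is exhausted the supplier is forced to keep supplying.
 * Assumes budget <= period.
 */
class NondeterministicSupplier {
    public static void main(String[] args) {
        int period = 100, budget = 37;               // Component1's interface in the example
        Random rnd = new Random(42);
        int supplied = 0;
        for (int t = 0; t < period; t++) {
            int slack = (period - t) - (budget - supplied);
            boolean mustSupply = slack == 0;         // no slack left: supplying is forced
            boolean supply = supplied < budget && (mustSupply || rnd.nextBoolean());
            if (supply) supplied++;
            System.out.printf("t=%3d supplying=%b supplied=%2d%n", t, supply, supplied);
        }
        assert supplied == budget;                   // the contract: budget delivered in time
    }
}
```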
B. PSA Model of Tasks
A task model is depicted in Fig. 5. After being started at location Idle, the task joins location WaitingOffset, waiting until the task offset has expired. From that location, the task moves to location ReadingOP, where it can read a PEND command and thus joins ClosingPeriod to finalize a period execution, and then moves to location PeriodDone. At location ReadingOP, the task can also read the operations COMPUTE, LOCK_SIRAP, and UNLOCK_SIRAP from its concrete behavior description. On reading a COMPUTE command, the task checks whether its own status is READY or RUNNING. A READY status means that the task is ready to run using the CPU, whereas RUNNING means that the task is already scheduled
to use CPU. From location ReqSched, the task updates its status to RUNNING and inserts its Id into the CPU queue. From location CheckingSupply, the task checks whether the supplier is providing the CPU resource. If the supplier is currently providing CPU resource, the task moves to location Executing, otherwise it moves to location Suspended. At location Executing, the task checks if it has been assigned a CPU via function isTaskSched(). If so, the stopwatch proTime[tid] keeps progressing while the wcet and deadline are not reached yet. The task may keep traveling between locations Executing and Suspended according to whether or not the CPU is supplied. The task joins location MissDeadline whenever the deadline is missed.
The task execution can be delayed due to the resource managed by SIRAP, once the task requests a resource via command LOCK_SIRAP. Such a delay can be one of the following:
- At location GlobalWaiting, the task is locally (designated at the component level) allocated to use the resource, but it is not globally allocated for the same resource, i.e., a task from another component is using the resource.
- At location LocalWaiting, the task is not locally allocated to use the resource.
- At location SIRAPWaiting, the task is delayed due to SIRAP protocol, i.e., in the case of a deficit of the remaining resource of the supply for a period.
From location CheckTaskPendingStatus, the task either moves to LocalWaiting by losing the resource allocation, or to location SIRAPWaiting by a deficit of the supplied resource.
On reading an UNLOCK_SIRAP command at location ReadingOP, the task withdraws its identifier tid from the resource queue managed by SIRAP.
The schedulability of a task can be checked via the reachability of location MissDeadline using the query: E<>MissDeadline.
In order to avoid checking the schedulability of each task separately, we introduce a global variable error that is set to true by any task missing its deadline, so that the schedulability of a component can be checked using the following query: A[] error != 1.
C. PSA Model of Resource Sharing Protocol
To share resources between the tasks of a hierarchical scheduling system, we use the SIRAP protocol. SIRAP enables the isolation of system components from each other even in the presence of mutually exclusive shared resources. We have modeled the SIRAP protocol as shown in Fig. 6. Initially, the protocol waits in its initial location, WaitSchedReq, for a resource request from one of the candidate tasks. On reception of a new resource request run_schedu[IRAP][I], where I is the identifier of the requested resource, SIRAP checks whether the requesting task is the currently scheduled one (sel_tid(tid) == req_tid(tid)) or not.
If this is not the case, the status of the requesting task is updated to PENDING_RESOURCE and the protocol returns to the initial location. Otherwise, the protocol checks whether the time left from the component budget of the current task (sup[tstat[sel_tid(tid)].pid].budget) covers the amount of resource requested by the task in question. If the budget of the current task's supplier is greater than the sum of the time already supplied by that supplier to its tasks and the resource usage time of the current request (sup[tstat[sel_tid(tid)].pid].budget > supply_time[tstat[sel_tid(tid)].pid] + tstat[sel_tid(tid)].rc_time), then the resource request is granted to the current task; otherwise the requesting task has to wait for the next supply (tstat[sel_tid(tid)].status = PENDING_BUDGET).
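Stripped of the array indexing, the admission test above is a budget-sufficiency check on the supplier of the requesting task's component:

\[
\mathit{Budget}_{C} > \mathit{supplied}_{C} + \mathit{rc\_time}_{T},
\]

where \( \mathit{supplied}_{C} \) is the CPU time already delivered in the current period and \( \mathit{rc\_time}_{T} \) is the time for which the task will hold the resource; if the inequality fails, the request is postponed to the next budget replenishment.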
D. PSA CPU Model
The PSA model of the CPU template is depicted in Fig. 7. After receiving a request r_req[tid] from a task, the CPU template activates the component's scheduling policy (policy) in order to determine which task the CPU resource should be assigned to; rid is the CPU resource identifier. Once the CPU is assigned to a task, at location Assign, that task keeps using the CPU resource until it is done (finished[rid]?) or a new request (r_req[tid]?) to reschedule the CPU appears. Whenever a CPU schedule is done (finished[rid]?) and the CPU waiting list is not empty (rq[rid].length>0), the CPU resource moves to location ReqSched and restarts the scheduling process; otherwise it keeps waiting at location Idle until a task requests the CPU resource.
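The control flow of the CPU template can be paraphrased in ordinary code. The sketch below is ours (the actual artifact is a PSA template synchronizing on the channels r_req and finished): it keeps a ready queue ordered by a fixed-priority policy, reschedules on every new request, and picks the next ready task, if any, when the running one finishes. Smaller numbers mean higher priority in this sketch only.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Rough Java analogue of the CPU template's control flow (illustrative only). */
class CpuArbiter {
    record Req(int taskId, int priority) {}

    // Ready queue ordered by the local scheduling policy; fixed priority here.
    private final PriorityQueue<Req> ready =
            new PriorityQueue<>(Comparator.comparingInt(Req::priority));
    private Req running;                              // task currently assigned the CPU

    /** A task requests the CPU: reschedule, possibly preempting the running task. */
    void request(Req r) {
        if (running != null) ready.add(running);      // put the preempted task back
        ready.add(r);
        running = ready.poll();                       // the policy picks the next task
    }

    /** The running task finished: pick another ready task, or go idle. */
    void finished() {
        running = ready.isEmpty() ? null : ready.poll();
    }

    public static void main(String[] args) {
        CpuArbiter cpu = new CpuArbiter();
        cpu.request(new Req(1, 2));
        cpu.request(new Req(2, 1));                   // lower number preempts in this sketch
        System.out.println("running = " + cpu.running.taskId());
        cpu.finished();
        System.out.println("running = " + cpu.running.taskId());
    }
}
```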
V. Case Study
To show the applicability of our compositional framework, we have modeled the avionics system introduced in [14], [10], and analyzed its schedulability. This system is a partial specification of a hypothetical avionics mission control computer (MCC) system dedicated to combat and attack aircraft. The application is a composition of the 15 tasks declared in [14].
A brief description of the avionics system tasks is given below:
- Weapon release ($T_1$): this task checks periodically whether the bomb button is being pressed or the time of a scheduled release has been reached in order to drop a weapon.
- Radar tracking ($T_2$): it explores a ground map, or performs a ground search or a single-target track.
- Target tracking ($T_3$): this task captures the target position relative to the aircraft. The radar keeps tracking a target if it is already spotted and designated by the aircrew for a potential attack.
- Target sweetening ($T_4$): no description provided for this task.
- HOTAS Bomb Button ($T_5$): a target is designated as an attack target by activating the Hands-On Throttle And Stick switch.
- Aircraft Flight data ($T_6$): it determines the best available estimates of aircraft position, velocity, attitude, motion through the air mass, etc.
- HUD display ($T_7$): the Head-Up Display shows the aircraft flight data (airspeed, heading, etc.), the strike point and/or seeker position.
- MPD display ($T_8$): the Multi-Purpose Display shows the tactical situation, the threat data, a display of stores remaining, radar display information, etc.
- Steering ($T_9$): it computes the steering cues for display based either on way-point steering or target attack steering.
### TABLE I
**GENERIC AVIONICS TASK ATTRIBUTES**
<table>
<thead>
<tr>
<th>Tasks</th>
<th>Prd (ms)</th>
<th>Exec (ms)</th>
<th>Dln (ms)</th>
<th>Prio</th>
<th>Input Msg</th>
<th>Output Msg</th>
</tr>
</thead>
<tbody>
<tr>
<td>$T_1$</td>
<td>10</td>
<td>1</td>
<td>5</td>
<td>1</td>
<td>3, 1</td>
<td>1</td>
</tr>
<tr>
<td>$T_2$</td>
<td>40</td>
<td>2</td>
<td>40</td>
<td>2</td>
<td>24, 1</td>
<td>3</td>
</tr>
<tr>
<td>$T_3$</td>
<td>40</td>
<td>4</td>
<td>40</td>
<td>3</td>
<td>1, 4, 1, 3</td>
<td>6, 3</td>
</tr>
<tr>
<td>$T_4$</td>
<td>40</td>
<td>2</td>
<td>40</td>
<td>4</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>$T_5$</td>
<td>1</td>
<td>1</td>
<td>40</td>
<td>5</td>
<td>4</td>
<td>11</td>
</tr>
<tr>
<td>$T_6$</td>
<td>50</td>
<td>8</td>
<td>50</td>
<td>6</td>
<td>3, 12, 4</td>
<td>3, 25, 18, 18</td>
</tr>
<tr>
<td>$T_7$</td>
<td>50</td>
<td>6</td>
<td>50</td>
<td>7</td>
<td>18, 3, 4</td>
<td>7</td>
</tr>
<tr>
<td>$T_8$</td>
<td>50</td>
<td>8</td>
<td>50</td>
<td>8</td>
<td>1, 20, 20, 7, 3, 3</td>
<td>5</td>
</tr>
<tr>
<td>$T_9$</td>
<td>80</td>
<td>6</td>
<td>80</td>
<td>9</td>
<td>6, 1, 6, 3</td>
<td>3</td>
</tr>
<tr>
<td>$T_{10}$</td>
<td>100</td>
<td>7</td>
<td>100</td>
<td>10</td>
<td>17, 3, 1, 1, 1</td>
<td>6</td>
</tr>
<tr>
<td>$T_{11}$</td>
<td>100</td>
<td>3</td>
<td>100</td>
<td>11</td>
<td>4</td>
<td>11</td>
</tr>
<tr>
<td>$T_{12}$</td>
<td>200</td>
<td>1</td>
<td>200</td>
<td>12</td>
<td>4</td>
<td>11</td>
</tr>
<tr>
<td>$T_{13}$</td>
<td>200</td>
<td>2</td>
<td>200</td>
<td>13</td>
<td>20</td>
<td>2</td>
</tr>
<tr>
<td>$T_{14}$</td>
<td>400</td>
<td>6</td>
<td>400</td>
<td>14</td>
<td>17, 3, 1, 1, 1</td>
<td>6</td>
</tr>
<tr>
<td>$T_{15}$</td>
<td>1000</td>
<td>5</td>
<td>400</td>
<td>15</td>
<td>1, 1, 1, 1, 1</td>
<td>2</td>
</tr>
</tbody>
</table>
Table I shows the attributes of the avionics system tasks. The task timing requirements are given in milliseconds. Each task is assigned a priority level, where lower numbers indicate lower priorities. Tasks may perform Input and Output actions to communicate messages on the dedicated input and output resources, respectively. The sequences of messages received or sent by a task are specified in the columns "Input Msg" and "Output Msg", respectively.
The architecture of the avionics system as well as the component interfaces are shown in Fig. 8. The system includes 4 components: 1) Sensor and Navigation (10000, 7419); 2) Control & Display (10000, 6982); 3) Fire & Stores (10000, insuf); 4) Background (10000, 442).
By examining the counter-example generated by the UPPAAL model checker, we can investigate the scenarios in which one of the tasks of the component Fire and Stores misses its deadline. Compared to analytical methods, our approach generates a counter-example that is quite useful for updating the task attributes in order to achieve the schedulability of the system. We leave the question of how to exploit the counter-example for updating the timing requirements of tasks as future work.
VI. RELATED WORK

Hierarchical scheduling systems were introduced in [11], [9]. An analytical compositional framework for hierarchical scheduling systems was presented in [18] as a formal way to elaborate a compositional approach for the schedulability analysis of hierarchical scheduling systems [20]. In the same way, the authors of [17] dealt with a hierarchical scheduling framework for multiprocessors based on cluster-based scheduling. They used analytical methods to perform the analysis; however, both approaches [18], [17] have difficulty in dealing with complicated task behavior.
Recent research within schedulability analysis increasingly uses model-based approaches, because this allows for modeling more complicated behavior of systems. The rest of the related work presented in this section focuses on model-based approaches.
In [3], the authors analyzed the schedulability of hierarchical scheduling systems using a model-based approach with the TIMES tool [1], and implemented their model in VxWorks [3]. They constructed an abstract task model as well as scheduling algorithms, where the schedulability analysis of a component considers not only the timing attributes of that component but also the timing attributes of the other components that can preempt the execution of the component under analysis.
In [8], the authors introduced a model-based framework using UPPAAL for the schedulability analysis of flat systems. They modeled the concrete task behavior as a sequence of timed actions, each of which represents a command that uses processing and system resources and consumes time.
The authors of [5] provided a compositional framework for the verification of hierarchical scheduling systems using a model-based approach. They specified the system behavior in terms of preemptive time Petri nets and analyzed the system schedulability using different scheduling policies.
We combine and extend these approaches [5], [8] by considering hierarchy, resource sharing and concrete task behavior, while analyzing hierarchical scheduling systems in a compositional way. Moreover, our model can easily be reconfigured to fit any specific application. Compared to analytical approaches, our model-based framework makes it possible to describe more complicated and concrete systems.
VII. CONCLUSION
We have introduced a compositional framework for the schedulability analysis of hierarchical real-time systems. System tasks are modeled using Parameterized Stopwatch Automata (PSA) of UPPAAL. To perform the schedulability analysis, we profit from the advances in model-checking technology; schedulability is verified as a reachability property. In order to account for the behavior of the rest of the system when analyzing an individual component, we introduced a non-deterministic supplier whose budget can be supplied in several chunks, thus simulating the preemption that the rest of the system may exercise on the component under analysis. We also considered resource sharing between system components and used the SIRAP protocol to manage such sharing. We have applied our schedulability analysis framework to an avionics system where components are analyzed separately even though they share communication resources.
REFERENCES
The Development of Malang Virtual Tourism for Preservation of Traditional Culture using React 360
Herman Tolle, Primananda Kurnia S, Wibisono Sukmo Wardhono, Ratih Kartika Dewi, Lutfi Fanani, Tri Afirianto
Abstract— Malang is one of the tourist destinations in East Java and is visited by both domestic and foreign tourists. However, Malang is not yet a primary tourism destination, due to the lack of intense promotion. Virtual tourism applies virtual reality technology to promote tourism. Based on these problems, the researchers developed a virtual reality application that can provide users with immersive experiences of tourist destinations in Malang. The application was developed using the React 360 framework, because of its ease of development and its support for various platforms. The Software Development Life Cycle (SDLC) used is the waterfall method. The implementation uses a VR headset and a VR controller. In testing, the researchers conducted functional testing and non-functional testing. Functional testing was done using black box testing and non-functional testing was done with the System Usability Scale (SUS) method. The results of functional testing using the black box method are 100% valid. In the usability test, the System Usability Scale (SUS) questionnaire obtained a score of 89. From the scores that have been obtained, this application is acceptable, with Grade A and the adjective rating Excellent.
Index Terms— Tourism, Malang, Virtual Reality, Mobile Application
I. INTRODUCTION
Malang is one of the big cities in East Java. Malang is famous for its cool weather, due to its geographical setting surrounded by mountains, so it is known as the Switzerland van Java. This city, which is also known as the City of Flowers, is positioned as a city that is beautiful, comfortable, friendly to live in, a tourist destination, and rich in heritage and culinary spots [1].
In 2018, Malang became a destination for both domestic and foreign tourists. There were 15,034 foreign tourists and 4.8 million domestic tourists throughout 2018 [2]. Compared to previous years, Malang has seen an increased number of tourists, both domestic and foreign. With the number of visitors increasing every year, the city can attract even more tourists in the following years.
In 2017, the Culture and Tourism Office scheduled as many as 35 tourism events to be held, because Malang has interesting potential in the tourism sector. However, Malang has not become a major tourism destination. This is due to a lack of intense promotional support [3].
Technology is one of the fields that can be used as a medium for tourism promotion. One of the technologies that can be used as a tourism promotion medium is Virtual Reality (VR), which over the past few years has seen increasing use in the business tourism sector [4]. VR is a technology that induces experiences designed by the creator through artificial sensory stimulation, while the user has little or no awareness of the interference [5]. VR allows users to interact with virtual environments created by computers. VR has the power to visualize a spatial environment, which gives the user the experience of exploring the visual, sound and, most importantly, spatial aspects of a destination without actually being in place [6]. This is the advantage of VR as a medium for tourism promotion. Virtual experiences are also more effective than brochures, because they are rich in information and interactions with users [7]. The use of VR for tourism is called virtual tourism. Virtual tourism can simulate tourist destinations using a virtual environment (VE) with VR technology. With virtual tourism, the experience for tourists can be enhanced, and new markets can be attracted to tourist destinations [4].
In making VR, we can use various technologies, one
of which is web technology. VR technology on the web, called Web-VR, makes VR content widely accessible from various platforms [8]. React 360 is a JavaScript-based framework for creating virtual reality, from Facebook [9].
React 360 is one of the most widely used frameworks for developing Web-VR. React 360 can also be used to develop VR for various platforms such as desktop, mobile and VR headsets. With the help of the React library, it is also easier to create 3D elements and VR-based UIs in React 360. One of the advantages of React 360 is that it can improve the user experience by adding 2D UI, audio, video, and 360 images [10].
Considerable research on the development of virtual reality and virtual tourism has been done previously. The use of virtual reality for tourism has been investigated in the research "Virtual reality: Applications and implications for tourism" [11]. That research explains the development of virtual reality for the tourism sector, which is called virtual tourism. The use of virtual reality in the tourism sector can serve planning and management, marketing, entertainment, education, accessibility and tourism.
Research on virtual tourism has also been conducted in the journal "Tourist Experience of Virtual Reality Application" [4], which explains the use of virtual reality to attract the tourist market to one of the tourist attractions in England, namely the Lake District National Park, and also to provide new experiences to tourists. As a result of that research, a virtual reality application about the Lake District was made covering several places, equipped with natural ambience sound to provide a natural experience through sound.
Based on the problems described, the authors conducted research on the development of Malang virtual tourism. Malang virtual tourism is a VR-based application to introduce tourism destinations in Malang. Various tourist attractions in Malang are projected using 360 photos in virtual reality, so users can interact with objects and environments in virtual reality to improve the user experience. This interaction creates the experience of traveling in a virtual environment. The VR is developed using React 360 technology to increase the user experience. The goal of the application is to introduce and provide information about tourist destinations in Malang through VR technology, so it can be used to promote tourism in Malang and preserve the traditional cultures.
II. PROPOSED METHOD
Virtual reality used in the tourism sector, also known as a virtual tour or virtual tourism, is detailed in Figure 1.

The photo or capturing process is the process of capturing several images or photos in order to get a picture of the object; the photos taken are 360 photos. The construct process is the process of making spherical photography for virtual reality based on the photos taken in the previous process. The content delivery process is the process of delivering the results of the previous process through digital media such as virtual reality.
Malang is one of the cities in East Java which has a variety of potential regional assets and must be developed optimally. Malang has many places as a tourist destination. Some of them have places that have good photo objects, as well as historical sites [12].

Fig. 2. Malang Tourism Map [13]
React 360 is a framework used to create VR. React 360 is a derivative of the framework created by Facebook called React [14]. React is a JavaScript-based framework used to create a user interface (UI) for mobile or Single Page Application (SPA). An example of VR development using React 360 is shown in Figure 3.

Fig. 3. React 360 Development example
The research conducted is development research, in which the researchers build the Malang virtual tourism application in virtual reality using React 360 and conduct interviews with application users. The research was conducted in Malang, with studies carried out at tourist attractions in Malang. Development is carried out using a personal
computer (PC) for the preparation of the source code of the research implementation. Data collection was carried out on tourists who live outside the city of Malang, using interview techniques.
There are five phases in this research, as can be seen in Figure 4. The requirement analysis used an elicitation technique based on interviews with potential users. Prospective users were selected based on their domicile outside Malang, and the interview questions focused on tourist destinations in Malang and virtual tourism. From the elicitation results, it was decided which findings could be used to create the functional and non-functional requirements needed for system development in this study. The actors and needs were then identified by numbering each requirement, using use case diagrams, use case scenarios, and activity diagrams. The software design phase transforms the requirements model into a system design model and a virtual reality interface design. The software design covers the system architecture, and the interface design is done by designing a virtual reality sequence. In the implementation phase, the program code and the user interface are implemented. In the testing and analysis phase, testing is carried out to find out whether the software is in accordance with the requirements. Testing in this research is done using validation testing and usability testing. Validation testing uses black box testing. Usability testing is carried out using the System Usability Scale (SUS) method, in which respondents rate the system through a questionnaire. Analysis is then carried out to draw the conclusions of the research. Figure 4 is a flowchart of the research methodology.
**III. RESULT AND DISCUSSION**
This part presents the requirements that are implemented in the application. Each requirement is described in Table 1.
<table>
<thead>
<tr>
<th>Use Case Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>See tourism destination choice</td>
<td>System can display a choice of tourism destinations.</td>
</tr>
<tr>
<td>See 360 photos of tourism destinations</td>
<td>System can display 360 photos of tourism destinations</td>
</tr>
<tr>
<td>See information of tourism destination</td>
<td>System can display information about tourism destinations</td>
</tr>
<tr>
<td>Play tourism destinations ambience sound</td>
<td>System can play ambience sound from tourism destinations</td>
</tr>
<tr>
<td>See photo gallery of tourism destinations</td>
<td>System can display photo gallery from tourism destinations</td>
</tr>
</tbody>
</table>
In this study, the Malang virtual tourism system was built as web-based virtual reality using a framework with the JavaScript programming language. The purpose of the system is to introduce existing destinations in Malang and provide a new tourist experience by offering an immersive experience. Figure 5 gives an overview of the use of the application. The user uses the application with a VR headset and enters the virtual environment, interacting with the virtual environments that exist in virtual reality. The user starts by choosing a tourist destination in Malang inside the virtual environment, providing input with a VR controller that functions as a pointer. When a destination is selected, 360 photos of the selected tourist destination are displayed. The selected tourist destination also displays information such as a brief explanation of the destination, and the ambience sound of the selected destination can be played. The selected tourist destination also has a photo gallery, which offers a selection of the 360 photos of the displayed tourist destination.
**Fig. 4. Research flowchart**
**Table 1. Functional Requirements**
**Fig. 5. System description**
A. Storyboard
Storyboards are drawn to help understand the workflow of using the app. The storyboard consists of two main parts, namely the scenario and description. Figure 6 is a storyboard that explains the actor's workflow using the Malang virtual tourism application.
Fig 6. Storyboard
B. Interface Implementation
The interface implementation in the Main Menu scene is the interface used to select from a list of tourist destinations in the application, which can be seen in Figure 7.
Fig. 7. Main Menu Scene
The implementation of the interface on the Information Menu scene is an interface that contains information about tourist destinations selected from a list of tourist destinations, which can be seen in Figure 8.
Fig. 8. Information Menu Scene
The implementation of the interface on the Photo Gallery scene is an interface that contains a photo gallery of the selected tourist destination. The photos used are 360 photos, as can be seen in Figure 9.
Fig. 9. Photo Gallery Scene
C. Functional Testing
Functional testing is carried out as validation testing using black box testing. Black box testing serves to validate the functional requirements of the software. Black box testing is done by first designing test cases. The test cases are designed based on the previously defined functional requirements. The results of the test cases are then checked against the expected results to determine whether the status is valid or not. The results of the validation test using black box testing can be seen in Table 2.
<table>
<thead>
<tr>
<th>No</th>
<th>Test Name</th>
<th>Test Case</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>See tourism destination choice</td>
<td>Login to the system</td>
<td>Valid</td>
</tr>
<tr>
<td>2</td>
<td>See 360 photos of tourism destinations</td>
<td>Push the tourist destination selection button</td>
<td>Valid</td>
</tr>
<tr>
<td>3</td>
<td>See information of tourism destination</td>
<td>Push the information menu button</td>
<td>Valid</td>
</tr>
<tr>
<td>4</td>
<td>Play tourism destinations ambience sound</td>
<td>Push the button plays sound</td>
<td>Valid</td>
</tr>
<tr>
<td>5</td>
<td>See photo gallery of tourism destinations</td>
<td>Push the photo gallery button</td>
<td>Valid</td>
</tr>
</tbody>
</table>
Based on the results of the functional testing using black box testing carried out on the software, the following tested outputs are known:
1. System can display a selection of tourist destinations.
2. System can display 360 photos of tourist destinations.
3. System can display info about tourist destinations.
4. System can play sound from tourist destinations.
5. System can display photo galleries from tourist destinations.
So, it can be concluded that the results of functional testing using black box are 100% valid.
D. Usability Testing
Non-functional testing is done with usability testing by asking users to test the application. After using the application, users fill out a questionnaire based on the System Usability Scale (SUS). There are 10 questions for users to answer. Each question uses a Likert scale from 1 to 5, ranging over strongly disagree, disagree, neutral, agree and strongly agree [15]. Table 3 shows the result of usability testing using the System Usability Scale questionnaire with 10 questions.
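For reference, a SUS score is computed by rescaling each answer (score minus 1 for odd-numbered items, 5 minus the score for even-numbered items), summing the ten contributions and multiplying the sum by 2.5. The per-respondent totals in Table 3 follow exactly this last step, and the average is the mean over the six respondents:

\[
\mathrm{SUS}_{R1} = 2.5 \times 34 = 85, \qquad
\overline{\mathrm{SUS}} = \frac{85 + 80 + 82.5 + 82.5 + 85 + 87.5}{6} = \frac{502.5}{6} = 83.75 .
\]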

**Table 3. SUS Score**
<table>
<thead>
<tr>
<th>R</th>
<th>Q1</th>
<th>Q2</th>
<th>Q3</th>
<th>Q4</th>
<th>Q5</th>
<th>Q6</th>
<th>Q7</th>
<th>Q8</th>
<th>Q9</th>
<th>Q10</th>
<th>Total</th>
<th>Usability Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>4</td>
<td>3</td>
<td>4</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>3</td>
<td>3</td>
<td>1</td>
<td></td>
<td>34</td>
<td>85</td>
</tr>
<tr>
<td>R2</td>
<td>4</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>2</td>
<td></td>
<td></td>
<td>32</td>
<td>80</td>
</tr>
<tr>
<td>R3</td>
<td>4</td>
<td>1</td>
<td>3</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>0</td>
<td>1</td>
<td></td>
<td></td>
<td>33</td>
<td>82.5</td>
</tr>
<tr>
<td>R4</td>
<td>4</td>
<td>3</td>
<td>4</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>4</td>
<td>2</td>
<td></td>
<td></td>
<td>33</td>
<td>82.5</td>
</tr>
<tr>
<td>R5</td>
<td>4</td>
<td>2</td>
<td>3</td>
<td>2</td>
<td>3</td>
<td>2</td>
<td>4</td>
<td>3</td>
<td></td>
<td></td>
<td>34</td>
<td>85</td>
</tr>
<tr>
<td>R6</td>
<td>4</td>
<td>3</td>
<td>4</td>
<td>3</td>
<td>3</td>
<td>4</td>
<td>4</td>
<td>3</td>
<td></td>
<td></td>
<td>35</td>
<td>87.5</td>
</tr>
</tbody>
</table>
Total Score | 502.5
Average Usability Score | 83.75
The usability testing obtained a score of 89. Based on the SUS measuring instrument in Figure 10, the system usability is in the grade scale A category, the adjective rating category is Excellent, and the acceptability range category is Acceptable.

Fig. 10. SUS Instrument Measure
IV. CONCLUSION
Requirement analysis of the Malang virtual tourism application begins with interviewing the respondents. The questions given focus on the most popular destinations in Malang and on virtual tourism. In this phase, 5 functional requirements and 1 non-functional requirement were obtained. The non-functional requirement is the system usability.
Application design is done by making use cases and use case scenarios. An activity diagram is then made based on the use case scenarios, followed by a class diagram describing the components used. In the design phase, a virtual reality sequence is also created as the interface design in VR.
After that, the implementation phase is carried out based on the design results. It was done with the React 360 framework using the JavaScript programming language. The results of functional testing using black box testing are 100% valid. Usability testing is done with the System Usability Scale (SUS) questionnaire, in which respondents are given a questionnaire with 10 questions answered on a Likert scale, and the results are calculated to determine the system usability. In this research, the system has a usability value of 83.75 (Acceptable).
V. ACKNOWLEDGEMENTS
This research is granted by the Faculty of Computer Science, Brawijaya University in Superior Research Grant (Contract number: 2272/UN10.F15/PN/2020). Authors would like to express their appreciation to all colleagues and participants who are willing to participate in the study.
REFERENCES
Distributed genetic algorithm implementation by means of Remote Methods Invocation technique – Java RMI
Łukasz Maciura*
The Bronisław Markiewicz State School of Higher Vocational Education in Jarosław, Czarnieckiego 16, 37-500 Jarosław, Poland
Abstract
The aim of this work is the implementation of a distributed genetic algorithm (the so-called island algorithm) to accelerate the search for the optimum in the space of solutions. The distributed genetic algorithm also has a smaller chance of falling into a local optimum. The concept relies on the mutual cooperation of clients which run separate genetic algorithms on local machines.
As a tool for the implementation of the distributed genetic algorithm, Java technology, designed for producing network applications, was chosen. Java technology provides a technique for remote method invocation – Java RMI. By invoking remote methods, objects can be sent between the clients and the RMI server.
To test the work of the genetic algorithm, the search for the maximum of a function of two variables which has many local maxima and can be written by means of a mathematical formula was chosen.
The work of the whole system depends on the existence of a server, on which remote RMI services (methods) are registered, and clients, each one on a separate machine. Each of the clients has two threads: one of them carries out the work of the local genetic algorithm, whilst the other handles the communication with the server. It sends to the server the new best individual found by the local genetic algorithm and takes from the server the individuals left there by other clients.
To sum up, an engine of a distributed genetic algorithm was created which searches for the maximum of a function and, after a small modification, can be used to solve any optimization problem.
1. Introduction
To accelerate the optimum searching process in the space of solutions for a genetic algorithm, a distributed genetic algorithm (the so-called island algorithm [1]) was implemented. A characteristic feature of the classical genetic algorithm is that it has a small chance of falling into a local optimum. This is a very positive feature which distinguishes it from other heuristic algorithms, but
nothing is without a defect. Unfortunately, this algorithm has the negative feature that the search for the global optimum lasts much longer than in other heuristic algorithms. It is therefore essential to aim at accelerating these algorithms. The simplest solution appears to be descending to low-level programming; however, this makes it difficult to apply the genetic algorithm to different optimization problems, and besides, the speed of operation increases only several times.
To accelerate the operation of any algorithm many times over, it is necessary to parallelize or distribute it. Parallelization means that the application which implements the algorithm has many threads, each of which carries out a separate part of the work. For parallelization to lead to an acceleration of the algorithm, the machine on which the application runs must have many processors or multi-threaded processors, so that each thread can run on a separate processor or processor core. Distributing an algorithm consists in dividing the work over many machines, each of which carries out its separate part. These machines communicate with each other through a local network or the Internet. Most often there is also a main server which holds common resources and manages the work of the whole distributed system. Distribution of an algorithm has the advantage over parallelization that the number of machines in the network is unlimited, whereas in a multiprocessor machine, the more processors there are, the more complications occur in selecting the proper hardware to operate with that number of processors. Therefore distribution of the algorithm, not its parallelization, was chosen.
Nowadays, there are many technologies which assist in the creation of distributed systems. Some of them are independent of platform and programming language, such as DCOM and CORBA; others were created for a specific programming language or platform, such as the RMI mechanism in Java technology [2,3] or the Remoting mechanism in the .NET platform [4]. There is also the possibility of using ordinary TCP/IP sockets, but that would mean working from scratch on the coding/decoding of objects and their packetizing through the network, so it is best to use an already proven solution. As the technology for working out the distributed system, Java and its Remote Method Invocation (RMI) mechanism were chosen in this work. Although it is said that Java, despite its improvements, is still slower than C++, in fact the speed of programs running on the Java Virtual Machine is, with its successive versions, systematically approaching the speed of programs created in C++. There is also the possibility that in the future programs in Java will be faster than those in C++; it could happen if the Java Virtual Machine were realized in hardware. Besides, Java is very convenient for programming and is object-oriented to a higher degree than C++. For these reasons, this technology was chosen for the accomplishment of this work.
The distributed genetic algorithm presented here is modelled on island algorithms: a number of populations are processed on separate client machines, which resemble islands, and from time to time the best individuals are exchanged between islands. This system differs from the classical island algorithm in that each client communicates only with the server, sending its best individuals there and taking the individuals left by other clients, whereas in the classical island algorithm individuals are exchanged between neighbouring clients organized in a ring topology.
2. Operation of the local genetic algorithm
The genetic algorithm belongs to the group of heuristic algorithms [5], which do not search the whole space of solutions but work systematically, moving in the direction or directions of search that at a given moment seem most promising. The group of heuristic algorithms includes, among others, tabu search, ant algorithms and evolutionary algorithms. Most of these techniques were created on the basis of observing nature and man. Evolutionary and genetic algorithms arose from transferring the methods of natural evolution into computer science. One of the application areas of genetic algorithms is solving optimization problems.
The whole idea rests on a population of a specific number of individuals, each of which has one or several chromosomes: sequences of bits or other data representing single genes, thanks to which individuals can be crossed over with each other and mutated. Each individual represents a specific solution of the problem, suitably coded in its chromosome. Besides a chromosome, each individual has an adaptation (fitness) function which determines which individuals (solutions of the optimization problem) are better and which are worse. Individuals, or descendants of individuals, with the best adaptation values have the highest chance of passing to the next epoch, whereas those with worse adaptation values have a small chance or none, depending on the selection method applied. Thanks to this, every successive epoch contains a better and better collection of individuals: the whole population evolves. Because no single solution is favoured but rather a set of good solutions, the chance of getting stuck in a local optimum decreases. This feature presents the genetic algorithm in a favourable light compared with other heuristic algorithms.
The local genetic algorithm running on a single client machine of the distributed system presented in this work operates in the same way as a classical genetic algorithm, except that the number of individuals in the population is occasionally higher, when the best individuals coming from other clients are taken from the server. As the test problem for the algorithm, the search for the maximum of a function of two variables with many local maxima [6] was chosen:
\[
F(x, y) = 2000 - 64 \cdot \left( \sin \frac{x \cdot \pi}{16} + \sin \frac{y \cdot \pi}{16} \right) - 0.185 \cdot \left( (x - 64)^2 + (y - 64)^2 \right).
\]
For this problem the adaptation function is easy to determine, because it can be written as a mathematical formula. A single individual of the implemented algorithm has one binary chromosome in which two real values, forming the solution of the problem, are coded. We assume that these values belong to the range \([0, 128]\). If \(c_1, \ldots, c_{2n}\) are the genes of the chromosome, the values \(x\) and \(y\) [6] are given by:
\[
x = \sum_{i=1}^{n} c_i \cdot 2^{-i} \cdot 128 ,
\]
\[
y = \sum_{i=n+1}^{2n} c_i \cdot 2^{-(i-n)} \cdot 128 .
\]
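As an illustration of how a chromosome could be decoded and evaluated, the sketch below shows a possible `Individual` class with the methods used later in the paper (`Gene`, `Mutation`, `Value`, `Print`). It is only a sketch under assumptions: the paper does not give this class, the chromosome is assumed to have 36 genes (18 per variable, as in the `Family` class below), and a single bit-flip mutation is assumed.

```java
import java.util.Random;

// Sketch (not from the paper): an Individual with a 36-bit chromosome,
// 18 genes coding x and 18 genes coding y, evaluated with F(x, y).
public class Individual
{
    private static final int N = 18;        // genes per variable (assumed)
    private boolean[] chromosome;           // 2*N genes

    public Individual(boolean[] genes) { chromosome = genes; }

    public boolean Gene(int i) { return chromosome[i]; }

    // Assumed mutation operator: flip one randomly chosen gene.
    public void Mutation()
    {
        int pos = new Random().nextInt(chromosome.length);
        chromosome[pos] = !chromosome[pos];
    }

    // Adaptation value: decode (x, y) with the two sums above and evaluate F(x, y).
    public double Value()
    {
        double x = 0, y = 0;
        for (int i = 1; i <= N; i++)
        {
            if (chromosome[i - 1])     x += Math.pow(2, -i) * 128;   // genes c_1 .. c_n
            if (chromosome[N + i - 1]) y += Math.pow(2, -i) * 128;   // genes c_{n+1} .. c_{2n}
        }
        return 2000 - 64 * (Math.sin(x * Math.PI / 16) + Math.sin(y * Math.PI / 16))
                    - 0.185 * ((x - 64) * (x - 64) + (y - 64) * (y - 64));
    }

    public void Print() { System.out.println("F(x,y) = " + Value()); }
}
```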
As the selection technique for choosing individuals for reproduction in the next epoch, the most popular method, roulette-wheel selection, was applied. A virtual roulette wheel is used in which each individual owns a segment proportional to the value of its adaptation function; in practice these segments are sub-ranges of the interval [0, 1]. A value is then drawn from this interval and the segment it falls into is determined; the individual associated with that segment is admitted to reproduction. Depending on whether crossover takes place or not, either its descendants or the individual itself passes to the next epoch.
Source code for selecting the index of an individual:
```java
// Roulette-wheel selection: draw a number in [0,1) and return the first
// individual whose segment boundary exceeds it (segments are assumed to
// hold cumulative values over the interval [0,1]).
private int SelectIndividual()
{
    double rand = Math.random();
    int id_individual = 0;
    for (int i = 0; i < individuals.size(); i++)
        if (segments[i] > rand)
        {
            id_individual = i;
            break;
        }
    return id_individual;
}
```
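The `segments` array used in `SelectIndividual` is assumed to hold cumulative, normalized adaptation values. The listing below is only a sketch of how such an array might be filled; the field names follow those referenced above, `individuals` is assumed to be an `ArrayList<Individual>`, and non-negative adaptation values are assumed.

```java
// Sketch (assumed helper): build cumulative roulette-wheel segments.
// segments[i] is the upper bound of the sub-range of [0,1] owned by individual i,
// so a drawn random number falls into exactly one segment.
private void BuildSegments()
{
    double total = 0.0;
    for (int i = 0; i < individuals.size(); i++)
        total += individuals.get(i).Value();
    segments = new double[individuals.size()];
    double cumulative = 0.0;
    for (int i = 0; i < individuals.size(); i++)
    {
        cumulative += individuals.get(i).Value() / total;
        segments[i] = cumulative;
    }
    segments[segments.length - 1] = 1.0;    // guard against rounding errors
}
```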
Reproduction is always performed on individuals selected in this way. If crossover occurs (the drawn random value is smaller than the crossover probability), the descendants of the parents move to the next epoch; otherwise the parents themselves move to the next epoch. When crossover occurs, the crossover position is drawn at random; the first descendant receives the fragment of the chromosome from the beginning up to the crossover position from the first parent and the fragment from the crossover position to the end of the chromosome from the second parent, and the second descendant receives the complementary fragments.
An example of crossover:
1st parent: 00100111 | 1101011101111111100010100
2nd parent: 11111000 | 0010101110011001110101101101
(“|” marks the randomly drawn crossover position).
If crossover takes place, the following descendants arise:
1st descendant: 00100111 0010101110011001110101101101
2nd descendant: 11111000 11010111011111111000010100
The algorithm keeps selecting parents and creating new individuals until the population of the new epoch is filled. With every reproduction there is also some probability that, after the crossover operation (whether it was carried out or not), a mutation occurs.
The class “Family”, which implements the genetic operators:
```java
import java.util.Random;

// Family groups two parents and produces two descendants by applying the
// genetic operators: crossover (with probability pintersection) and
// mutation (with probability pmutation).
public class Family
{
    private Individual parent1;
    private Individual parent2;
    public Individual descendant1;
    public Individual descendant2;
    private static double pintersection = 0.3;
    private static double pmutation = 0.1;

    public Family(Individual p1, Individual p2)
    {
        parent1 = p1;
        parent2 = p2;
        Operators();
    }

    public static void initp(double pi, double pm)
    {
        pintersection = pi;
        pmutation = pm;
    }

    private void Operators()
    {
        if (Math.random() < pintersection)
        {
            // Single-point crossover: draw a position in the 36-bit chromosome
            // and exchange the tails of the two parents.
            Random rand = new Random();
            int p = rand.nextInt(34) + 1;
            boolean[] chromosome1 = new boolean[36];
            boolean[] chromosome2 = new boolean[36];
            for (int i = 0; i < p; i++)
            {
                chromosome1[i] = parent1.Gene(i);
                chromosome2[i] = parent2.Gene(i);
            }
            for (int i = p; i < 36; i++)
            {
                chromosome1[i] = parent2.Gene(i);
                chromosome2[i] = parent1.Gene(i);
            }
            descendant1 = new Individual(chromosome1);
            descendant2 = new Individual(chromosome2);
        }
        else
        {
            // No crossover: the parents themselves pass to the next epoch.
            descendant1 = parent1;
            descendant2 = parent2;
        }
        if (Math.random() < pmutation)
            descendant1.Mutation();
        if (Math.random() < pmutation)
            descendant2.Mutation();
    }
}
```
To recapitulate, a single epoch of the genetic algorithm developed in this work and running on the local machine proceeds as follows (a minimal sketch of one epoch is given after the list):
1. Compute the value of the adaptation function for each individual in the population.
2. If the best individual in this population is better than the current best individual of the whole algorithm, make it the new best and set a flag indicating that it has to be sent to the server.
3. Apply the genetic operators (crossover and mutation) until a new population has been built up from scratch.
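A minimal sketch of one such epoch is given below. The `Family` class and the `SelectIndividual` method are the ones shown earlier; the fields `best`, `sendToServer`, `populationSize` and the helper `BestOfPopulation` are assumptions made only for this illustration and are not taken from the paper.

```java
// Sketch (illustrative names): one epoch of the local genetic algorithm.
private void Epoch()
{
    // Step 1: the adaptation value of each individual is available through Value().
    // Step 2: update the global best and flag it for sending to the server.
    Individual candidate = BestOfPopulation();             // assumed helper
    if (best == null || candidate.Value() > best.Value())
    {
        best = candidate;
        sendToServer = true;                               // read by the communication thread
    }
    // Step 3: fill the next population using selection, crossover and mutation.
    ArrayList<Individual> next = new ArrayList<Individual>();
    while (next.size() < populationSize)
    {
        Individual p1 = individuals.get(SelectIndividual());
        Individual p2 = individuals.get(SelectIndividual());
        Family family = new Family(p1, p2);                // applies crossover and mutation
        next.add(family.descendant1);
        if (next.size() < populationSize)
            next.add(family.descendant2);
    }
    individuals = next;
}
```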
3. The description of the distributed genetic algorithm
The system created in this work is based on communication between clients (agents) that run the local genetic algorithm. The communication consists in an exchange of the best individuals among the agents. Each client communicates with the server using the Remote Method Invocation mechanism (Java RMI): by invoking the appropriate remote methods on the server it can deposit its best individuals there as well as take those left by other agents. Java's object-serialization mechanism makes it possible to send whole object structures residing in the computer's RAM in this way, not merely references to them.
4. The description of a single client
A single client is realized by means of two threads:
a) thread A, which runs the genetic algorithm;
b) thread B, which handles communication with the server and the exchange of individuals.
The two threads communicate with each other through appropriate flags. The division into threads is necessary so that communication with the server does not interrupt the genetic algorithm, which can keep working in the meantime.
5. The description of the algorithm on the client's side
1. The client's thread B logs in to the server by invoking the remote method *int login()*, and a name tag in the form of a number is assigned to it.
2. Thread B repeatedly checks, by invoking the remote method *boolean permission()* on the server, whether the expected number of clients have logged in; the method returns 'true' when they have, and the algorithm then proceeds to step 3.
3. So that at the beginning a new best individual does not appear every few epochs, thread A first carries out a specific number of epochs of the genetic algorithm on its own, at the same time updating the best individual found since the algorithm's start.
4. After this initial number of epochs, threads A and B work simultaneously (Fig. 1); a sketch of thread B's loop is given after the figure.
Fig. 1. The algorithm of the client
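The loop of thread B can be sketched as follows. The remote interface name `GAServer`, the RMI binding name, the host, and the flags (`algorithmMayStart`, `sendToServer`, `best`) are assumptions made for the illustration; the remote methods themselves are the ones described in section 7.

```java
// Sketch of the communication thread (thread B); names are illustrative.
public void run()
{
    try
    {
        // Obtain the remote server object (binding name and host are assumed).
        GAServer server = (GAServer) java.rmi.Naming.lookup("rmi://server-host/GAServer");
        int id = server.login();                  // step 1: receive a name tag
        while (!server.permission())              // step 2: wait for all clients to log in
            Thread.sleep(500);
        algorithmMayStart = true;                 // lets thread A run its initial epochs
        while (!server.endcondition())            // step 4: exchange individuals while both threads run
        {
            if (sendToServer)                     // set by thread A when a new best individual appears
            {
                server.send(id, best);
                sendToServer = false;
            }
            Individual[] foreign = server.get(id);
            if (foreign != null)
                addToPopulation(foreign);         // assumed helper on the client
            Thread.sleep(1000);
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
```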
6. The description of the server
In this distributed system the server acts as a relay of the best individuals among clients. It exposes registered remote methods for communication with the clients and for transferring objects between them. Besides the remote methods, it keeps a table of individuals in which the individuals received from clients are stored; each client writes to the index assigned to it at login time. There is also a boolean matrix recording which clients have already loaded the individual coming from which other client. This is needed so that a given client does not repeatedly load the same individuals from the server, which would overload the server and slow down the whole algorithm. The best global individual is also stored on the server. After the server starts, the number of clients that should log in has to be entered, so that the whole algorithm starts only after all the clients have logged in.
7. The description of the remote methods
*int login()* – logs a client in to the server and assigns it a name tag.
*boolean permission()* – returns 'true' if a client may start its algorithm and 'false' otherwise; the result depends on whether the previously established number of clients have logged in and the 'start' flag is set.
*boolean endcondition()* – returns 'true' if the termination condition of the algorithm has been fulfilled. The condition is evaluated on the server, and different termination criteria can be used.
*void send(int ID, Individual i)* – sends a client's best individual to the server. The individual is placed in the table at the position determined by ID. In addition, the matrix that records which clients have loaded which individuals is updated: the whole column corresponding to this client is set to 'false', because nobody has loaded the new individual yet (Table 1). If necessary, the best global individual on the server is also updated.
Table 1. An example of the content of the matrix; the column of a newly sent individual (here 'Individual 1') has just been filled with 'false'
<table>
<thead>
<tr>
<th></th>
<th>Individual 0</th>
<th>Individual 1</th>
<th>Individual 2</th>
<th>Individual 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client 0</td>
<td>true</td>
<td>false</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>Client 1</td>
<td>true</td>
<td>false</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Client 2</td>
<td>false</td>
<td>false</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>Client 3</td>
<td>false</td>
<td>false</td>
<td>true</td>
<td>false</td>
</tr>
</tbody>
</table>
```java
public void send(int ID, Individual i) {
    if (ID >= 0 && ID < n_clients && !theend) {
        // Store the client's best individual and mark it as not yet loaded
        // by any client (reset the whole column to 'false').
        table_of_individuals[ID] = i;
        for (int ind = 0; ind < n_clients; ind++)
            matrix_of_loading[ind][ID] = false;
        i.Print();
        // Terminate the whole algorithm once the threshold is exceeded.
        if (i.Value() > threshold) {
            System.out.println("The end, MAX Value = " + i.Value());
            theend = true;
        }
    }
}
```
*Individual[] get(int ID)* – loads the table of individuals saved by all other clients. The ID is used to exclude the calling client's own individual and to set the appropriate values in the matrix that records which individuals have been loaded by which clients. Thanks to this matrix, only those individuals that a given client has not yet loaded are returned (Table 2), so a client can never load the same individual twice. Loading is possible only when an individual is actually present at a given position.
Table 2. An example of the content of the matrix after individuals have been loaded by client no. 1
<table>
<thead>
<tr>
<th></th>
<th>Individual 0</th>
<th>Individual 1</th>
<th>Individual 2</th>
<th>Individual 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client 0</td>
<td>true</td>
<td>false</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>Client 1</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Client 2</td>
<td>false</td>
<td>true</td>
<td>false</td>
<td>true</td>
</tr>
<tr>
<td>Client 3</td>
<td>false</td>
<td>true</td>
<td>true</td>
<td>false</td>
</tr>
</tbody>
</table>
The value 'false' means that a given individual has not yet been loaded by a given client, whereas 'true' means either that no individual has been stored there yet or that the individual has already been loaded by that client. Individual 0 comes from client 0, individual 1 from client 1, and so on. When a new individual is sent by a client (e.g. client 1), the whole column 'Individual 1' is filled with 'false'.
```java
public Individual[] get(int ID)
{
    if (ID >= 0 && ID < n_clients && !theend)
    {
        // Collect only the individuals that this client has not loaded yet
        // (skipping its own entry) and mark them as loaded.
        ArrayList<Individual> list = new ArrayList<Individual>();
        for (int i = 0; i < n_clients; i++)
        {
            if (i != ID && !matrix_of_loading[ID][i])
            {
                list.add(table_of_individuals[i]);
                matrix_of_loading[ID][i] = true;
            }
        }
        Individual[] tab = new Individual[list.size()];
        for (int i = 0; i < list.size(); i++)
            tab[i] = list.get(i);
        return tab;
    }
    else return null;
}
```
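Putting the five remote methods together, the server's remote interface could be declared roughly as shown below. The interface name `GAServer` is an assumption; the signatures follow the descriptions above, every remote method in Java RMI must declare `RemoteException`, and `Individual` must implement `java.io.Serializable` for the objects to travel over RMI, consistent with the serialization mechanism mentioned in section 3.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Sketch of the remote interface implied by the methods described above
// (the name GAServer is assumed, not taken from the paper).
public interface GAServer extends Remote
{
    int login() throws RemoteException;                      // assign a name tag to a new client
    boolean permission() throws RemoteException;             // true once all clients have logged in
    boolean endcondition() throws RemoteException;           // true when the algorithm should stop
    void send(int ID, Individual i) throws RemoteException;  // store a client's best individual
    Individual[] get(int ID) throws RemoteException;         // fetch individuals left by other clients
}
```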
8. The algorithm on the server’s side
1. Initialize all cells of the matrix that records which clients have loaded which individuals to 'true', in order to block loading from the server while no individuals have yet been sent by the clients.
2. Read the required number of clients that must log in.
3. Wait until the required number of clients have logged in.
4. Set the 'start' flag, thanks to which the clients learn through the 'permission' method that they can start.
5. From this moment the main program of the server does nothing except continuously checking whether the termination condition of the algorithm has been fulfilled; when it is, the best individual is printed and the 'stop' flag is set. The remaining work is done by the remote methods invoked by the clients on the server's objects. A sketch of how the server object can be exported and registered is given below.
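A sketch of how the server object could be exported and registered with the RMI registry follows; the implementation class name `GAServerImpl`, the registry port and the binding name are assumptions made only for illustration.

```java
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Sketch: export the server object and register it so that clients can look it up.
public class ServerMain
{
    public static void main(String[] args) throws Exception
    {
        GAServerImpl server = new GAServerImpl();   // assumed implementation of GAServer
        GAServer stub = (GAServer) UnicastRemoteObject.exportObject(server, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("GAServer", stub);
        System.out.println("Server ready, waiting for clients to log in...");
        // The main thread then only polls the termination condition, as in step 5 above.
    }
}
```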
9. The system testing
To check to what extent the system speeds up finding the optimum in the space of solutions, a series of experiments was carried out in which the algorithm was run on an increasing number of computers. A threshold is set on the server; once it is crossed, the algorithm finishes and the running time is displayed. This makes it possible to compare the computation time for different numbers of clients working on separate computers. For each number of computers, 5 tests were performed and the median running time was calculated.
1st series of experiments:
Settings of the algorithm:
- probability of crossover: 0.6
- probability of mutation: 0.1
- number of individuals: 150
- threshold of algorithm end: 2107.417
Table 3. 1st series of experiments
<table>
<thead>
<tr>
<th>Test</th>
<th>2 clients</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>32.5s</td>
</tr>
<tr>
<td>2</td>
<td>2.2s</td>
</tr>
<tr>
<td>3</td>
<td>7.05s</td>
</tr>
<tr>
<td>4</td>
<td>5.53s</td>
</tr>
<tr>
<td>5</td>
<td>3.11s</td>
</tr>
<tr>
<td>Median</td>
<td><strong>5.53s</strong></td>
</tr>
</tbody>
</table>
As Table 3 shows, when the number of clients working on separate machines increases, the optimum (an individual with an adaptation value above the given threshold) is found faster. However, for about 4 or 5 computers the time of finding the optimum is roughly the same. This is because at the beginning of the run there is heavy network traffic, since a new best individual is found very often; this slows the algorithm down, and because the optimum is found quickly, this delay is significant and equalizes the times for 4-5 clients. To see the difference for a larger number of computers, the time spent searching the solution space should be extended, which can be done by setting a suitably higher threshold for ending the algorithm.
2nd series of experiments:
Settings of the algorithm:
- probability of crossover: 0.6
- probability of mutation: 0.1
- number of individuals: 150
- threshold of algorithm end: 2107.4173
Table 4. 2nd series of experiments
<table>
<thead>
<tr>
<th>Test</th>
<th>3 clients</th>
<th>4 clients</th>
<th>5 clients</th>
<th>6 clients</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.92s</td>
<td>1s</td>
<td>0.52s</td>
<td>0.61s</td>
</tr>
<tr>
<td>2</td>
<td>0.72s</td>
<td>1.41s</td>
<td>0.91s</td>
<td>1.31s</td>
</tr>
<tr>
<td>3</td>
<td>19.42s</td>
<td>1.11s</td>
<td>0.7s</td>
<td>0.61s</td>
</tr>
<tr>
<td>4</td>
<td>16.41s</td>
<td>0.52s</td>
<td>2.02s</td>
<td>0.91s</td>
</tr>
<tr>
<td>5</td>
<td>25.05s</td>
<td>0.7s</td>
<td>1.31s</td>
<td>0.92s</td>
</tr>
<tr>
<td>Median</td>
<td>16.41s</td>
<td>1s</td>
<td>0.91s</td>
<td>0.91s</td>
</tr>
</tbody>
</table>
This series of experiments (Table 4) again shows that finding the optimum speeds up as the number of computers increases; when the search time is already short, however, the differences between numbers of clients become invisible.
Conclusion
A distributed model of the genetic algorithm was implemented in Java. It accelerates finding the optimum in the space of solutions, and the speed of the search increases with the number of clients working on separate machines.
In the implemented example the algorithm searches for the maximum of a function that can be written as a mathematical formula, but nothing prevents it from being applied to other problems. It is implemented using the object-oriented facilities of the Java language, so it is easy to adapt by modifying a few classes.
Acknowledgments
I would like to express my thanks to Galina Setlak, D.Sc., Associate Professor for helpful remarks.
References
Text Prediction and Classification Using String Matching
Byron Knoll
Department of Computer Science
University of British Columbia
Abstract
This paper introduces a simple dynamic programming algorithm for performing text prediction. The algorithm is based on the Knuth-Morris-Pratt string matching algorithm. It is well established that there is a close relationship between the tasks of prediction, compression, and classification. A compression technique called Prediction by Partial Matching (PPM) is very similar to the algorithm introduced in this paper. However, most variants of PPM have a higher space complexity and are significantly more difficult to implement. The algorithm is evaluated on a text classification task and outperforms several existing classification techniques.
1 Introduction
This paper introduces a simple dynamic programming algorithm for performing text prediction. The algorithm is very similar to a compression technique called Prediction by Partial Matching (PPM) (Cleary and Witten, 1984). PPM is one of the best lossless text compression techniques, so has been subject to a substantial amount of research. It consistently performs well on data compression benchmarks. There are a large variety of PPM implementations. However, the specific variation of PPM discussed in this paper does not appear to have been previously published. The algorithm is based on the Knuth-Morris-Pratt (KMP) string matching algorithm (Knuth, Morris, and Pratt, 1977).
The remainder of this paper is organized into five main sections. Section 2 provides background on the text prediction problem and its relationship to data compression and classification. Section 3 gives an overview of PPM and KMP, two algorithms which are closely related to this work. Section 4 provides a high level description of the algorithm. Section 5 evaluates the algorithm on prediction and classification tasks. Finally, section 6 discusses the results and speculates at possible future work. The appendix provides a Java implementation of the algorithm introduced in this paper.
2 Background
Text prediction can be considered as a sequential process over time with an input stream of characters. The task is to predict the next character given a string representing the input history. In this paper the first character of a string represents oldest input and the last character represents the newest input. For example, given the string “abababa” a good guess for the next character would be ‘b’ since ‘b’ always follows ‘a’ in the input history. It is well established that there is a close relationship between the tasks of prediction, compression, and classification (Marton, Wu, and Hellerstein, 2005). An algorithm which is good at text prediction will also be good at text classification and text compression. The task of text prediction is not necessarily limited to natural language. For example, an alphabet of two letters can be used for any binary file, regardless of the type of data it contains.
Text prediction can be used for a variety of applications. For example, it can be used to minimize the number of keystrokes required to type a given text (Garay-Victoria and Abascal, 2006). This can be especially useful for slow input methods such as mobile phone keypads. In addition, it can help increase the communication rate for people with disabilities. Another use for text prediction is for denoising data. Character predictions can actually be more valid than the actual input in certain scenarios, such as the case of spelling mistakes. Spelling mistakes can be considered as “noise” in the data which can be corrected using a predictive filter. Finally, another example of a common application for text prediction is the automated completion of search terms used by several search engines.
Classification and compression also have fairly obvious applications. Considering the three tasks of prediction, classification, and compression together covers a large range of problems encountered in the fields of artificial intelligence and machine learning. The relationship between these three tasks is explored in more detail below.
2.1 Relationship Between Prediction and Compression
Text prediction algorithms can assign a probability distribution to characters in an alphabet corresponding to the probability of each character being next in the input stream. This probability distribution can be combined with a coding scheme such as arithmetic coding or Huffman coding to compress data. In fact, a measurement called cross entropy can be used to estimate the average number of bits needed to code the data. For a sequence of \( N \) characters \( x_i \), and a probability \( p(x_i) \) assigned to each character by the prediction algorithm, the cross entropy can be defined as:
\[
- \sum_{i=1}^{N} \frac{1}{N} \log_2 p(x_i)
\]
This gives the expected number of bits needed to code the string. Another common metric used to compare text prediction algorithms is perplexity, which can be defined as two to the power of cross entropy:
\[
2^{-\sum_{i=1}^{N} \frac{1}{N} \log_2 p(x_i)}
\]
In 1991, a trigram model was used on a large corpus of one million English words to achieve a perplexity score of 247 per word, corresponding to a cross entropy of 7.95 bits per word or 1.75 bits per letter (Brown, Della Pietra, Della Pietra, Lai, Mercer, 1992). On this corpus, ASCII coding has a cross entropy of 8 bits per character, Huffman coding has 4.46, and the UNIX command `compress` has 4.43. On more specialized corpora it is possible to achieve lower perplexity scores than for more general corpora. Recently, a word perplexity score of 96.9 was reported on the Associated Press corpus using a technique called stochastic memoization (Wood, Archambeau, Gasthaus, James, and Teh, 2009). This is significantly lower than the perplexity scores reported for competing approaches.
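As a small, self-contained illustration of the two metrics defined above, the following sketch computes cross entropy and perplexity from the probabilities a predictor assigned to the observed characters; the class and method names are not part of the paper.

```java
// Sketch: cross entropy (bits per character) and perplexity computed from the
// probabilities p(x_i) that a prediction algorithm assigned to the observed characters.
public class EntropyMetrics {
    static double crossEntropy(double[] p) {
        double sum = 0.0;
        for (double pi : p)
            sum += Math.log(pi) / Math.log(2);   // log2 p(x_i)
        return -sum / p.length;
    }

    static double perplexity(double[] p) {
        return Math.pow(2, crossEntropy(p));
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.25, 0.125, 0.5};    // example probabilities
        System.out.println(crossEntropy(p));     // 1.75 bits per character
        System.out.println(perplexity(p));       // about 3.36
    }
}
```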
2.2 Relationship Between Prediction and Classification
Classification is a task in which items must be categorized into groups based on a training set of previously labelled items. Any prediction or compression algorithm can be used for classification. This can be done by first separating the training data into categories based on their labels. When unlabelled data needs to be classified into a category, each training category can be used as a separate training set for the prediction/classification algorithm. In the case of prediction, the prediction error for the data is compared using each category as a training set. The data can be classified as being in the category which results in the lowest prediction error. Similarly, in the case of compression the file size of the data is compared when compressing it using each training category. The data can be classified as being in the category which results in the lowest compressed file size.
Consider a concrete example of binary classification. Suppose there are a set of documents which have been labelled as being either funny or unfunny. These training documents can be separated into the two categories. Given a new unlabelled document, the goal is to classify it as either being funny or unfunny. One approach to doing this is using a text prediction algorithm. The prediction error of the document can be tested using the funny training data by first inputting all the funny training data as a string to the prediction algorithm. The prediction error of the document can be calculated from the number of incorrect character predictions made when sequentially inputting the document’s text. Similarly, the prediction error of the document using the unfunny training set can be computed. Finally, the document can be classified as being in the category which results in the lowest prediction error. Another classification approach is using a data compression algorithm. First, the file size of the funny training data compressed alone can be compared to the file size of the document appended to the funny training data. Subtracting the two sizes results in the amount of data needed to code the document (using the funny training set). Similarly, the amount of data needed to code the document using the unfunny training set can be computed. The document can be classified as being in the category which results in the smallest amount of data needed to code it.
Given the choice between classification using a text prediction algorithm and the same algorithm used for compression, there is a practical advantage to using prediction error instead of compressed file size. Using prediction error avoids the computational overhead involved when performing compression. This overhead includes using a coding scheme (such as arithmetic coding) and writing compressed files to disk, which can be a very slow operation.
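The classification procedure described above can be sketched in a few lines. `Predictor` refers to the class given in the appendix; the surrounding class and the error-counting helper are written here only to illustrate the idea and are not taken from the paper.

```java
// Sketch: classify a document by comparing its prediction error under two training sets.
public class ErrorRateClassifier {
    // Count how many characters of `document` the predictor gets wrong after
    // being primed with `training` (uses the Predictor class from the appendix).
    static int predictionErrors(String training, String document) {
        Predictor predictor = new Predictor();
        StringBuilder history = new StringBuilder(training);
        int errors = 0;
        for (int i = 0; i < document.length(); i++) {
            char predicted = predictor.predict(history.toString());
            if (predicted != document.charAt(i))
                errors++;
            history.append(document.charAt(i));   // reveal the true character
        }
        return errors;
    }

    static String classify(String positiveTraining, String negativeTraining, String document) {
        int posErrors = predictionErrors(positiveTraining, document);
        int negErrors = predictionErrors(negativeTraining, document);
        return posErrors <= negErrors ? "positive" : "negative";
    }
}
```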
Figure 1 summarizes the directed relationships between prediction, classification, and compression which have been discussed in this paper. That is, any prediction algorithm can be used for compression. Additionally, any prediction or compression algorithm can be used for classification. An argument might be made for the bidi-
3 Related Work
3.1 Knuth-Morris-Pratt Algorithm
The naïve approach to matching a string $S$ of length $M$ to a text $T$ of length $N$ has a time complexity of $O(M \times N)$. This approach involves simply iterating through $N$ positions of $T$ and for each position checking whether the next $M$ characters match $S$. The KMP algorithm decreases this time complexity to $O(M + N)$. There are two phases of the KMP algorithm. The first involves iterating through the $M$ characters of $S$ and building a table of size $M$. The second involves iterating through the $N$ characters of $T$ and finding matches.
An intuitive understanding of the KMP algorithm can be gained by considering a simple example. Suppose we are trying to match a string $S = \text{"abcabz"}$ to the text $T = \text{"abcabczab"}$. Iterating through $T$, the first five characters match exactly to $S$. However, as soon as we reach the sixth character there is a mismatch between the ‘z’ in $S$ and ‘c’ in $T$. In the naïve approach this mismatch would force us to return to the second index of $T$ and try matching it to the beginning of $S$. However, the observation can be made that when we found the mismatch at position six, we already matched “ab” at positions four and five. These happen to be the first two characters of $S$. This means we can just continue trying to match position six of $T$ with position three of $S$.
The purpose of building the table for $S$ is that it acts as a failure function for when we encounter a mismatched character. It contains an index in $S$ which allows us to continue matching the original text $T$ without backtracking. This table can be constructed in $O(M)$ time. After the table is constructed, it takes $O(N)$ time to iterate through $T$.
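For reference, a standard construction of the KMP failure table is sketched below. It follows the usual prefix-function formulation (table indexed by prefix length), which differs slightly in indexing from the reversed-string variant used in the appendix; the method name is an assumption.

```java
// Sketch: classic KMP failure function. fail[i] is the length of the longest
// proper prefix of s.substring(0, i) that is also a suffix of it.
// Example: failureFunction("ababc") returns [0, 0, 0, 1, 2, 0].
static int[] failureFunction(String s) {
    int[] fail = new int[s.length() + 1];
    int k = 0;                                // length of the current border
    for (int i = 1; i < s.length(); i++) {
        while (k > 0 && s.charAt(i) != s.charAt(k))
            k = fail[k];                      // fall back to a shorter border
        if (s.charAt(i) == s.charAt(k))
            k++;
        fail[i + 1] = k;
    }
    return fail;
}
```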
3.2 Prediction by Partial Matching
Although there are many variants of the PPM algorithm, they all share a common concept. The idea is that a good way to make a prediction about the next character in a sequence is to try to match the sequence to some part of the input history and make the prediction based on what character comes next in the history. For example, consider the string “abczabczab”. A good guess for the next character in the sequence would be ‘z’ since ‘z’ always comes after ‘c’ in the history. In addition, ‘z’ always comes after the string “bc”. Furthermore, ‘z’ always comes after the string “abc”. It should be clear that making longer matches is preferable since they are less likely to occur by chance. Under the assumption that patterns exist in the input stream, longer matches will lead to better predictions. This is essentially a task of temporal pattern recognition.
Matching the recent input sequence to the input history can be represented as a string matching problem. The task is to find the longest matching string between recent input and the history string. When the longest match is found, a prediction can be returned as the character which occurs immediately after the match in the history string.
This model is actually equivalent to the use of n-grams. In fact, n-grams are actually (n-1) order Markov models. The length of the match made between the recent input sequence and the history string determines the order of the Markov model. This means that in PPM the order of the model is adaptively changed based on the length of matches occurring in the input string. If no matches of a particular length occur in the string, the order of the model must be reduced.
Most PPM implementations use a fixed maximum size for the order of the Markov model. This is done to reduce the time and space complexity of the algorithm. The majority of PPM implementations use exponential memory in relation to the maximum length of the Markov model. In 1997 a variant of PPM was introduced called PPM* which uses an unbounded order Markov model (Cleary and Witten, 1997). The stochastic memoizer mentioned in section 2.1 also uses an unbounded order Markov model to achieve record breaking perplexity scores.
4 Algorithm Description
The algorithm introduced in this paper uses an unbounded order Markov model. In addition, it uses linear memory which is an improvement over the exponential memory needed by most PPM implementations. However, for predicting each new character of an input stream it has a time complexity of $O(N)$ where $N$ is the history size. This means that for a given file of length $N$, the time complexity to predict every byte of that file is $O(N^2)$. The time complexity for most PPM variations range between $O(N)$ and $O(N^2)$. Time complexities below $O(N^2)$ can be achieved using suffix tries.
The appendix of this paper has a full Java implementation of the algorithm. Of particular note is the simplicity of the algorithm in comparison to other PPM implementations. If the amount of data to be processed is relatively small so that the $O(N^2)$ time complexity is not a concern, this algorithm could be preferable to the use of other PPM implementations. This is due to its linear memory usage, unbounded Markov order, and simple implementation.
The algorithm relies upon KMP string matching. Consider the input string "zabracadabra". Reversing the string results in another string $S$ = "arbadacarbaz". Now consider matching $S$ to the text $T$ = "rbadacarbaz" (the same as $S$ except missing the first character). Running KMP string matching on $S$ and $T$ will result in a sequence of character mismatch events. Each mismatch has a corresponding match length indicating how much of $S$ matches $T$. The mismatch ($S$:$T$:length) triples for this example are a:r:0, a:b:0, r:d:1, r:c:1, and d:z:4. These mismatches indicate that the longest match in the string is four. Given the longest match of "abra", a prediction of 'c' can be made as the next character.
In the Java implementation provided in the appendix, a map data structure is used to store information about the longest matches. If there are multiple matches of the same length, a prediction for the next character can be made by using the most frequent character prediction among the matches. The stored matches can also be used to optimize the speed of future predictions. If a new character was correctly predicted by one of the matches stored in the data structure, the longest matches do not need to be recomputed. This allows a prediction to be returned in $O(1)$ time. However, if the new character was not predicted by one of the stored matches, new matches will need to be recomputed from the entire history in $O(N)$ time. Therefore, this algorithm will run faster when it is good at predicting the input data.
5 Evaluation
Since PPM has already been extensively studied for the task of compression, this paper will focus on evaluating the algorithm on prediction and classification tasks.
5.1 Prediction
The majority of evaluation on text prediction algorithms is done by comparing perplexity metrics. Since the algorithm introduced in this paper simply outputs the most likely character instead of assigning a probability distribution to all the characters, the perplexity metric cannot be used. It should be noted that it is not necessarily difficult to assign a probability distribution to the characters, but this was not discussed in this paper to avoid additional complexity. Instead, the error rate of character predictions on various data can be reported. The Calgary corpus is a popular dataset to compare the performance of compression algorithms. Unfortunately, most publications about this corpus report perplexity scores and compressed file sizes instead of error rates for character prediction. However, examining the error rates for files in the Calgary corpus is still a useful exercise because it allows comparison of error between different types of data. In addition, the error scores reported in this paper can be used as a comparison metric for future work.
Table 1 provides a description of the different files contained in the Calgary corpus. Table 2 provides the average byte prediction error for files in the Calgary corpus. Comparing byte prediction error instead of binary error or some other granularity was chosen purely for implementation convenience. Of course, choosing a smaller granularity such as binary prediction error results in lower error rates, but should preserve the same relative performance between different types of data. The 'pic' file stands out as having an extremely good prediction rate. This is likely due to the fact that it is not compressed so has a lot of redundant information in comparison to its underlying Kolmogorov complexity. It is also interesting to compare the error rates between similar types of data. For example, the 'book1' novel has a much higher error rate than the scientific text 'book2'.
Figure 2 shows how the average prediction error rate for a novel changes sequentially from the beginning of the text to the end. The novel is in ASCII format and was obtained from Project Gutenberg (http://www.gutenberg.org). The prediction error at the end of the text was 41.066%. The graph indicates that the rate of change of error decreases over time. One use for this type of information is that it can help provide an estimate of the maximum number of bytes that are needed for decreasing the error rate. For example, if there is no significant drop in error rate after 1MiB of input history, then the history size can be limited to 1MiB to help increase computational performance of the algorithm. By limiting the history size, the computational complexity of the algorithm is reduced from $O(N^2)$ to $O(N)$.
<table>
<thead>
<tr>
<th>File</th>
<th>Size (KiB)</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>bib</td>
<td>111.261</td>
<td>structured text (bibliography)</td>
</tr>
<tr>
<td>book1</td>
<td>768.771</td>
<td>text, novel</td>
</tr>
<tr>
<td>book2</td>
<td>610.856</td>
<td>formatted text, scientific</td>
</tr>
<tr>
<td>geo</td>
<td>102.400</td>
<td>geophysical data</td>
</tr>
<tr>
<td>news</td>
<td>377.109</td>
<td>formatted text, script with news</td>
</tr>
<tr>
<td>obj1</td>
<td>21.504</td>
<td>executable machine code</td>
</tr>
<tr>
<td>obj2</td>
<td>246.814</td>
<td>executable machine code</td>
</tr>
<tr>
<td>paper1</td>
<td>53.161</td>
<td>formatted text, scientific</td>
</tr>
<tr>
<td>paper2</td>
<td>82.199</td>
<td>formatted text, scientific</td>
</tr>
<tr>
<td>pic</td>
<td>513.216</td>
<td>image data (black and white)</td>
</tr>
<tr>
<td>progc</td>
<td>39.611</td>
<td>source code</td>
</tr>
<tr>
<td>progl</td>
<td>71.646</td>
<td>source code</td>
</tr>
<tr>
<td>progp</td>
<td>49.379</td>
<td>source code</td>
</tr>
<tr>
<td>trans</td>
<td>93.695</td>
<td>transcript terminal data</td>
</tr>
</tbody>
</table>
Table 1: File size and description of Calgary corpus files.
Table 2: Average byte prediction error on Calgary corpus files.
<table>
<thead>
<tr>
<th>File</th>
<th>Error Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>bib</td>
<td>32.768%</td>
</tr>
<tr>
<td>book1</td>
<td>47.149%</td>
</tr>
<tr>
<td>book2</td>
<td>37.032%</td>
</tr>
<tr>
<td>geo</td>
<td>64.346%</td>
</tr>
<tr>
<td>news</td>
<td>39.693%</td>
</tr>
<tr>
<td>obj1</td>
<td>45.999%</td>
</tr>
<tr>
<td>obj2</td>
<td>33.513%</td>
</tr>
<tr>
<td>paper1</td>
<td>39.916%</td>
</tr>
<tr>
<td>paper2</td>
<td>43.023%</td>
</tr>
<tr>
<td>pic</td>
<td>11.289%</td>
</tr>
<tr>
<td>progc</td>
<td>37.713%</td>
</tr>
<tr>
<td>progl</td>
<td>26.928%</td>
</tr>
<tr>
<td>progp</td>
<td>24.066%</td>
</tr>
<tr>
<td>trans</td>
<td>20.476%</td>
</tr>
</tbody>
</table>
Table 3: Classification error on the test set (200 articles). Three different training sets were used: original ASCII text (ORIG), text converted to lowercase characters (LOW), and text converted to word stems (STEM).
<table>
<thead>
<tr>
<th>Method</th>
<th>Error Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>PRED ORIG</td>
<td>12%</td>
</tr>
<tr>
<td>PRED LOW</td>
<td>10.5%</td>
</tr>
<tr>
<td>PRED STEM</td>
<td>10%</td>
</tr>
<tr>
<td>PAQ ORIG</td>
<td>11%</td>
</tr>
<tr>
<td>PAQ LOW</td>
<td>16%</td>
</tr>
<tr>
<td>PAQ STEM</td>
<td>23.5%</td>
</tr>
</tbody>
</table>
Figure 2: Average error rate over time for each byte of the novel Twenty Thousand Leagues Under the Sea (Jules Verne, 1870).
5.2 Classification
The dataset chosen for this classification task is from an undergraduate machine learning course at the University of British Columbia. The dataset was used in a class competition to give bonus marks to the students with the lowest test error. Since a substantial amount of bonus marks were available to students for performing well in the contest, the incentive for students to invest a lot of time/effort in the competition was significantly increased. The task was to classify Wikipedia articles as either being part of a particular category or not part of it. In the training set the articles were labelled as either positive (part of the category) or negative (not part of the category). For example, if the category was “hobbies” then all of the positive training articles belong to the hobbies category and none of the negative training articles belong to the hobbies category. The negative training articles were not necessarily part of the same category. The actual category was not given. There were 100 positive training articles, 100 negative training articles, 100 positive test articles, and 100 negative test articles.
The results from six classification methods are summarized in table 3 and table 4. The technique used to do the classification was the same as that described in section 2.2. PRED refers to the text prediction algorithm described in section 4 and PAQ refers to the PAQ8L data compression algorithm. The PAQ data compression algorithm was chosen because it has top rankings on several benchmarks measuring compression ratio. Performing preprocessing on the training set had a significant effect on the error rates. The two preprocessing steps used were converting all characters to lowercase (LOW) and performing word stemming (STEM) on the lowercase letters. Word stemming was done using the Porter algorithm (Porter, 1980). Examining table 4 indicates that several of the methods were significantly better at classifying documents in the POS set than the NEG set. This bias is especially noticeable for the PAQ STEM method. The bias does not appear to be present in PAQ ORIG. It is unclear why the PAQ algorithm performed worse when the preprocessing steps were performed. In the case of PRED, it is expected that the preprocessing steps should decrease the error because they allow longer matches to be discovered.
Table 5 provides the error percentages of the top six participants in the class competition. A variety of different classification approaches were used including neural networks, decision trees, n-gram based methods, and support vector machines. It should be noted that all of these techniques ran significantly faster than any of the PRED or PAQ methods. All six of the methods in Table 5 ran in the order of a few minutes, while the six PRED/PAQ methods took several hours to run. Slower runtime performance is one of the disadvantages to using compression techniques with a high compression ratio.
Table 4: Number of errors in the positive test set (POS) and negative test set (NEG). There are 100 articles in each test set.
<table>
<thead>
<tr>
<th>Method</th>
<th>Wrong in POS</th>
<th>Wrong in NEG</th>
</tr>
</thead>
<tbody>
<tr>
<td>PRED ORIG</td>
<td>7</td>
<td>17</td>
</tr>
<tr>
<td>PRED LOW</td>
<td>5</td>
<td>16</td>
</tr>
<tr>
<td>PRED STEM</td>
<td>5</td>
<td>15</td>
</tr>
<tr>
<td>PAQ ORIG</td>
<td>11</td>
<td>11</td>
</tr>
<tr>
<td>PAQ LOW</td>
<td>3</td>
<td>29</td>
</tr>
<tr>
<td>PAQ STEM</td>
<td>3</td>
<td>44</td>
</tr>
</tbody>
</table>
Table 5: Top six results from classification competition. There were a total of 18 entries.
<table>
<thead>
<tr>
<th>Name</th>
<th>Error Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fisher LD Hill Climb & NBayes</td>
<td>9%</td>
</tr>
<tr>
<td>NaiveB</td>
<td>12.5%</td>
</tr>
<tr>
<td>turtle_star</td>
<td>12.5%</td>
</tr>
<tr>
<td>Classifoo</td>
<td>12.5%</td>
</tr>
<tr>
<td>boosted?_perceptron</td>
<td>13%</td>
</tr>
<tr>
<td>BetterLateThanNever</td>
<td>13%</td>
</tr>
</tbody>
</table>
When comparing Table 3 to Table 5 we can see that all three of the PRED methods beat 17 out of the 18 entries in the class contest.
6 Discussion and Future Work
The classification results in section 5.2 seem very promising. Compared to the majority of the entries in the class competition, the approach used in this paper is very easy to implement. Coding complexity could conceivably be an important factor for the development of certain applications under time pressure. In addition, the lack of any parameters to tune also decreases the amount of time needed to deploy the code. However, the high time complexity of the algorithm may make it unsuitable for certain large datasets.
The performance of several other approaches was evaluated before choosing to focus on PPM in this paper. These approaches included linear predictive coding, temporal neural networks, and hierarchical temporal memory (HTM) (Hawkins and Blakeslee, 2004). For the task of binary prediction on the Wikipedia dataset used in section 5.2, linear predictive coding had an error rate of 32%, hierarchical temporal memory 26%, and temporal neural networks 28%. In comparison, the algorithm in this paper had an error rate of 11%.
Two modifications to the algorithm in this paper were explored. One modification involved using approximate string matching instead of exact string matching. The motivation for this idea was that approximate string matching allows for longer matches which could potentially decrease the error rate of predictions. A dynamic programming algorithm was implemented for approximate string matching using the minimum Levenshtein distance. This algorithm was significantly more computationally expensive than the approach used in this paper and appeared to have a significantly higher error rate. Based on these results, this approach was abandoned.
Another modification to the algorithm explored was the use of ensemble voting to make predictions. The ensemble voting was performed between different orders of Markov models (different match lengths). Longer matches were given a higher vote and shorter matches were given a smaller vote. Imagine if there is only one match of length 50 and 100 matches of length 49. The matches of length 49 are likely to contain some predictive value, so it makes intuitive sense to give them some weight. Several weighting functions were experimented with. Overall, the results of the ensemble voting seem to be slightly better than the algorithm presented in this paper. However, it was not clear whether this difference was statistically significant. This is a promising area for future work. Assigning weights to the different order Markov models also simplifies the task of creating a probability distribution over the characters. However, the task of assigning good weights is a difficult problem and remains an active area of research.
One approach to assigning the weights is to base them upon the empirical prediction accuracy of the different match lengths. For example, if the longest match correctly predicts the next character 42% of the time, 0.42 would be a good weight assigned to the character predicted by the longest match. Since a given match length can predict multiple characters, if the second most likely character predicted by the longest match is correct 5% of the time, a corresponding probability of 0.05 can be assigned to the second most likely character of the longest match. Similarly this can be done for lower match lengths.
Another potential area for future work is limiting the history size. The results in figure 2 and corresponding discussion in section 5.1 indicate that a limited history size may not necessarily have a significant impact on prediction accuracy. One approach to limiting the history size is to simply use a sliding window and forget everything before a certain point in history. However, this naive approach can be improved upon. Ideally, only portions of the history which will never be matched should be removed. However, it is impossible to determine whether a particular portion of the history may be matched at some point in the future. If we keep track of statistics on how often different characters in the history string are matched, this might provide some indication of how likely those characters will be matched in the future.
This statistic will also need to be weighted by how recently the character occurred, since recent sections of the history have less opportunity to be matched compared to older sections of the history. This statistic may be used as a heuristic for determining which sections of the history can be forgotten.
Certain properties of PPM can be compared to how the human brain operates. It is clear that the brain stores memories of the past and that these memories can be retrieved based on their similarity to recent events. This is exactly the same principle that PPM operates on. There is also evidence that the task of prediction plays a fundamental role in human intelligence and behaviour (Hawkins and Blakeslee, 2004). However, there are clearly differences in the capabilities of the human brain when compared to PPM.
One remarkable property of the brain is its massive parallelism in information processing. In contrast, PPM works on a single sequential character stream. Another difference is that the brain can perform higher level functions which require accessing memories in a non-sequential order. That means that certain predictions may be a function of several non-contiguous segments of the input history. For example, consider the task of adding “46+54”. Although this particular string sequence may never have occurred in a person’s input history, the individual components of numbers and the addition operator may have been encountered at different points in time. In order to predict what the answer of this operation is, a non-sequential function of memory access is required. It is possible that by stacking PPM predictors on top of each other, higher level patterns can be recognized. HTM provides a framework which may help parallelize and stack individual PPM predictors. In fact, PPM performs a function very similar to what is required by nodes in HTM. This would be another interesting area for future research.
References

Appendix: Java Implementation
```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.TreeMap;
public class Predictor {
private TreeMap<Character, LinkedList<Integer>> tree = new TreeMap<Character, LinkedList<Integer>>();
private int longestMatch = -1;
/**
* @return predicted next character of the string
*/
public char predict(String str) {
if (str.length() == 0)
return '0';
if (str.length() == 1) {
longestMatch = 0;
return str.charAt(0);
}
if (tree.containsKey(str.charAt(str.length() - 1))) {
longestMatch++;
LinkedList<Integer> pred = tree.get(str.charAt(str.length() - 1));
tree.clear();
for (int pos : pred) {
char c = str.charAt(pos + 1);
LinkedList<Integer> temp;
if (tree.containsKey(c))
temp = tree.get(c);
else
temp = new LinkedList<Integer>();
temp.add(pos + 1);
tree.put(c, temp);
}
} else {
longestMatch = -1;
int m = 1;
int i = 0;
int[] table = createTable(str);
while (m + i < str.length()) {
if (str.charAt(str.length() - i - 1) == str.charAt(str.length() - (m + i) - 1))
i++;
else {
insertPrediction(str.charAt(str.length() - m), i, str.length() - m);
m = m + i - table[i];
if (table[i] >= 0)
i = table[i];
}
}
if (i > 0)
insertPrediction(str.charAt(str.length() - m), i, str.length() - m);
}
char prediction = '0';
int maxCount = 0;
Iterator<Character> it = tree.keySet().iterator();
while (it.hasNext()) {
char key = it.next();
int count = tree.get(key).size();
if (count > maxCount) {
prediction = key;
maxCount = count;
}
}
return prediction;
}
private void insertPrediction(char c, int i, int pos) {
if (i > longestMatch) {
tree.clear();
longestMatch = i;
}
if (i < longestMatch)
return;
LinkedList<Integer> pred = null;
if (tree.containsKey(c))
pred = tree.get(c);
else
pred = new LinkedList<Integer>();
pred.add(pos);
tree.put(c, pred);
}
private int[] createTable(String str) {
int pos = 2;
int cnd = 0;
int[] table = new int[str.length() - 1];
table[0] = -1;
while (pos < str.length() - 1) {
if (str.charAt(str.length() - pos) == str.charAt(str.length() - cnd - 1)) {
table[pos] = cnd + 1;
pos++;
cnd++;
} else if (cnd > 0)
cnd = table[cnd];
else {
table[pos] = 0;
pos++;
}
}
return table;
    }
}
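A minimal driver for the class above (our own example, not part of the original listing):

```java
public class PredictorDemo {
    public static void main(String[] args) {
        Predictor p = new Predictor();
        // The longest repeated suffix of the history was previously followed by 'c'.
        System.out.println(p.predict("abcabcab"));  // prints c
    }
}
```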
An Efficient Greedy Local Search Algorithm for Boolean Satisfiability Based on Extension Rules
Huanhuan Peng, Liming Zhang and Ziming Ye
College of Software, Jilin University, Changchun 130012, China
Keywords: Boolean Satisfiability (SAT) problem, local search, extension rule, subScore, clause weighting
Abstract: The extension rule is a method for solving the Boolean Satisfiability (SAT) problem using maximum terms, but it is not yet mature. Many efficient local search algorithms instead search over truth assignments. Based on the relationship between extension rules and truth assignments, together with several efficient heuristic strategies, this paper makes three contributions: 1) the extension-rule and truth-assignment approaches are analysed and compared, and on this basis a local search algorithm based on extension rules (LSER) is proposed; 2) applying the configuration checking strategy, a greedy search algorithm based on extension rules (GSER) is proposed; 3) GSER is further improved with the subScore strategy and the clause weighting strategy, a new strategy called the maximum score upper limit strategy is designed, and a new extension rule method based on greedy local search (IGSER) is obtained. Experiments show that the proposed algorithm outperforms the general extension rule inference method and the algorithm GSER on 3-SAT and CBS SAT instances.
1. Introduction
SAT was the first problem proved to be NP-complete, and almost all NP-complete problems from a variety of domains can be transformed into SAT. The SAT problem is now widely used in the field of artificial intelligence, and many efficient solvers have emerged.
In 1992 Selman proposed the greedy local search algorithm GSAT for the SAT problem [1]. GSAT showed that stochastic local search can solve SAT effectively and became the groundbreaking algorithm of this family of methods.
In recent years, Cai et al. introduced the configuration checking strategy into local search. This strategy is one of the most influential contributions by Chinese researchers to the international SAT competitions [2-7]. In 2011, Cai et al. used the configuration strategy in local search for SAT for the first time, and the resulting solver SWCC achieved remarkable results [2]. Subsequently, based on configuration checking, Cai et al. designed many powerful solvers such as SWCCA [3], CCASat [4], CScoreSAT [5], CCAnn [6] and CSCCSat [7]. The configuration strategy has greatly improved the performance of SAT local search algorithms.
In 2003, Lin et al. proposed a new rule of automatic reasoning called the extension rule [8]. Martin Davis, an expert in artificial intelligence, regards it as a "complementary" reasoning method, which shows that the approach has received wide recognition.
Yang et al. proposed a new extension-rule reasoning method based on local search, applying local search to extension rules and constructing, for the first time, an incomplete reasoning framework based on extension rules [9]. On this basis, this paper proposes the LSER, GSER and IGSER algorithms by combining several existing local search strategies with some new ones, hoping to further improve the efficiency of SAT solving.
2. Preliminaries
In this section, we will introduce the basic concepts, including SAT problem, extension rule, etc.
2.1 Basic concept of the SAT problem
**Definition 1** (CNF formula): Give a set of n Boolean variables \( X = \{x_1, x_2, \ldots, x_n\} \), a literal is either a Boolean variable \( x \) or its negation \( \neg x \). A clause is a disjunction of literals. A Conjunctive Normal Form (CNF) formula \( F = C_1 \land C_2 \land \ldots \land C_m \) is a conjunction of clauses. For a literal \( v \in X \), we use \( ref(C_i, v) \) to represent the truth value of variable \( v \) in \( C_i \).
**Definition 2** (SAT problem): The Boolean satisfiability problem (SAT) is to find a set of truth assignment so that all clauses in a CNF formula \( F \) can be satisfied.
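As an illustration of these definitions (our own sketch, not from the paper), a CNF formula can be represented as an array of clauses of signed literals, and checking an assignment is a pair of nested loops:

```java
// Illustrative sketch: literals are non-zero integers, +i for x_i and -i for ¬x_i
// (the usual DIMACS convention); a formula is an array of clauses.
public class CnfCheck {
    /** Returns true if the assignment satisfies every clause of the formula. */
    static boolean satisfies(int[][] formula, boolean[] assignment) {
        for (int[] clause : formula) {
            boolean clauseSat = false;
            for (int lit : clause) {
                boolean value = assignment[Math.abs(lit)];
                if (lit > 0 ? value : !value) { clauseSat = true; break; }
            }
            if (!clauseSat) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // F = (x1 ∨ ¬x2) ∧ (x2 ∨ x3), variables indexed from 1.
        int[][] f = {{1, -2}, {2, 3}};
        boolean[] a = new boolean[4];
        a[1] = true; a[3] = true;   // x1 = true, x2 = false, x3 = true
        System.out.println(satisfies(f, a));  // true
    }
}
```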
2.2 Extension rule
**Definition 3** (Extension rule) [8]: Given a clause \( C \) and a variable set \( X \), \( D = \{C \lor x, C \lor \neg x\} \) where \( x \) is a variable that doesn't appear in \( C \) and \( x \in X \). At this time we call the operation proceeding from \( C \) to \( D \) is using the extension rule on \( C \). \( D \) is the result of using the extension rule on \( C \).
Obviously, the clause \( C \) is logically equivalent to the result \( D \) in truth assignment.
2.3 Comparison of extension rule with truth assignment
Applying the extension rule to solve the SAT problem means finding a maximum term that is non-expandable for all clauses, that is, a truth assignment such that, for every clause, at least one variable takes a value different from that clause. Using truth assignments directly, deciding satisfiability means finding an assignment such that, for every clause, at least one variable takes the same value as in that clause. The two views are opposite but equivalent, so many heuristic strategies used in assignment-based algorithms can also be applied to the extension rule.
2.4 Local Search
Although the local search algorithm has shortcomings in principle, such as the possibility of getting stuck in local optima, it is still simple and effective.
Applying the idea of classical local search algorithm, Yang proposed a framework of local search algorithm based on extension rules, as follows [9]:
**Algorithm Local Search in Extension Rule (LSER) [9]**
Input: CNF formula \( F = C_1 \land C_2 \land \ldots \land C_m \)
maxTries
Output: Non-expandable maximum term \( T = \{x_1, x_2, \ldots, x_n\} \)
or no solution found
1. attempt ← 0
2. \( T \leftarrow \) randomly generated maximum term
3. WHILE attempt < maxTries DO
4. IF CanBeExpand(\( T, F \))
5. THEN \( T \leftarrow \) Transform(\( T \))
6. attempt++
7. ELSE
8. RETURN \( T \)
9. RETURN "no solution found"
Here, the function "Transform" is the neighbourhood move of the local search; the neighbourhood is the 1-flip neighbourhood, i.e. two maximum terms are adjacent exactly when they differ in the value of a single variable. The objective function is the number of non-expandable clauses under the current maximum term; if the objective function reaches \( m \) (the number of clauses), the maximum term \( T \) is returned directly.
2.5 Configuration checking [4]
**Definition 4** (configuration)[4]: Given a CNF formula $F$ and a truth assignment $\alpha$, the configuration of a variable $x$ under $\alpha$ is a vector called $Conf_\alpha(x)$, which contains the truth values of all variables in $N(x)$ under $\alpha$, i.e., $Conf_\alpha(x) = \alpha|_{N(x)}$, where $N(x)$ is the set of neighbouring variables of $x$.
Applying the concept of configuration to an extension rule, you only need to think of assignment $\alpha$ as a maximum term $T$.
**Definition 5** (configuration checking strategy) [4]: For a SAT local search algorithm solving a CNF formula $F$, if the configuration of the variable $x$ has not been changed (which means none of its neighboring variables has been flipped) since it was last selected to flip, then it isn’t allowed to be flipped. The above is called configuration checking strategy.
In the following description, we use conf array to represent a change in the configuration of a variable. If the configuration of variable $x$ has been changed, we use $\text{conf}[x] = 1$ to indicate it, otherwise set $\text{conf}[x]$ to 0 [4].
The algorithm first initializes the whole conf array to 1 for each variable. In every local search process, when the variable $x$ is selected to flip, $\text{conf}[x]$ is reset to 0, and for each variable $y \in N(x)$, $\text{conf}[y]$ is set to 1[4].
In our proposed algorithm, this strategy is not strictly followed. If the configuration of $x$ has not been changed, then we use some other strategies to select the variable that will be flipped.
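A minimal sketch of how the conf array described above can be maintained (our own illustration rather than the authors' code; neighbour bookkeeping is left to the caller):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of configuration-checking bookkeeping: conf[x] == 1 means the
// configuration of x has changed since x was last flipped, so x may be flipped.
class ConfigurationChecking {
    private final int[] conf;
    private final List<List<Integer>> neighbours; // neighbours.get(x) = N(x)

    ConfigurationChecking(int numVars, List<List<Integer>> neighbours) {
        this.conf = new int[numVars + 1];
        this.neighbours = neighbours;
        Arrays.fill(conf, 1);                     // initially every variable may be flipped
    }

    boolean mayFlip(int x) { return conf[x] == 1; }

    void onFlip(int x) {
        conf[x] = 0;                              // x was just flipped
        for (int y : neighbours.get(x)) conf[y] = 1; // its neighbours' configurations changed
    }
}
```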
3. Greedy Local Search and Improvement
In this part, we will introduce greedy strategies and propose a greedy local search algorithm based on extension rule and improve it.
3.1 Greedy search algorithm under extension rule
In the algorithm LSER, we don’t give a specific meaning of the Transform function. In fact, it’s related to the score of each variable (which is given below) [9]:
For a formula $F$ and a maximum term $T$, $\text{cost}(F, T)$ represents the total weight of extensible clauses under maximum term $T$. The evaluation function of each variable is defined as $\text{Score}(x) = \text{cost}(F, T) - \text{cost}(F, T')$, where $T'$ is obtained from $T$ by flipping $x$ [9]. In the local search process, we try to select variables with highest scores and satisfying $\text{conf}[v] = 1$.
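Concretely, Score(x) can be computed by evaluating cost(F, T) before and after flipping x. The sketch below is our own illustration (clause weights and the signed-literal encoding are assumptions, not the paper's implementation):

```java
// Sketch: cost(F, T) = total weight of extensible clauses, i.e. clauses whose
// literals all agree with the maximum term T, and Score(x) = cost(F, T) - cost(F, T').
class ScoreSketch {
    int[][] clauses;   // DIMACS-style literals: +i / -i
    int[] weight;      // clause weights
    boolean[] term;    // term[i] = value of x_i in the maximum term T

    boolean extensible(int c) {
        for (int lit : clauses[c]) {
            boolean v = term[Math.abs(lit)];
            if ((lit > 0) != v) return false;   // this literal already differs from T
        }
        return true;
    }

    int cost() {
        int sum = 0;
        for (int c = 0; c < clauses.length; c++) if (extensible(c)) sum += weight[c];
        return sum;
    }

    int score(int x) {
        int before = cost();
        term[x] = !term[x];
        int after = cost();
        term[x] = !term[x];                     // restore T
        return before - after;
    }
}
```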
Based on the evaluation function and the configuration checking strategy, it is easy to define the local search algorithm in greedy mode. The algorithm is as follows:
**Algorithm Greedy Search under Extension Rule (GSER) [9]**
Input: CNF formula $F = C_1 \land C_2 \land \ldots \land C_m$
maxTries
Output: Non-expandable maximum term $T = \{x_1, x_2, \ldots, x_n\}$ or no solution found
1. attempt $\leftarrow 0$;
2. $T \leftarrow \text{initializeMaxTerm}(F)$
3. WHILE attempt < maxTries DO
4. IF $\sim \text{CanBeExpand}(T, F)$ THEN RETURN $T$
5. compute $P = \{v|\text{Score}(v) > 0 \text{ and } \text{conf}[v] = 1\}$
6. IF $P \neq \emptyset$ THEN
7. $x \leftarrow v \in P$ with the highest score
8. $T \leftarrow T$ with $x$ flipped
9. ELSE
10. use other rules to select the flipped variable $x$
11. attempt++
12. update the configuration of each variable
13. RETURN "no solution found"
3.2 Use subScore-assisted greedy search [4]
**Definition 6** (critical variable): If \( x \in V(C) \), where \( V(C) \) is the set of variables occurring in \( C \), then \( x \) is a critical variable of \( C \); otherwise \( x \) is an unrelated variable of \( C \).
The critical variables of a clause are exactly the variables it contains, and they alone determine whether the clause is expandable.
Clearly, the search should move in a direction that makes more clauses have critical variables whose values differ from the maximum term.
We divide the clauses into three categories: extensible, critical and stable. If all critical variables of a clause take the same values as in the maximum term, the clause is extensible. If exactly one critical variable differs from the maximum term, the clause is critical. Otherwise the clause is stable, that is, at least two of its critical variables differ from the maximum term.
For example, \( T = \{x_1, x_2, \neg x_3, \neg x_4\} \), \( C_1 = \{x_1, x_2, x_3\} \), it can be said that \( C_1 \) is critical in \( T \).
In the following discussion, we use \( \text{flag}[i] \) to represent the number of variables in the clause \( C_i \) that differ from the maximum term under the current maximum term, \( \text{flag}[i] = 0 \) means that \( C_i \) is extensible, \( \text{flag}[i] = 1 \), indicating that \( C_i \) is critical, \( \text{flag}[i] \geq 2 \), indicating that \( C_i \) is stable.
With the above definition, we introduce the subScore function as an auxiliary scoring function to help in the following situation. In the previous algorithm GSER, there may be a case where there are multiple variables satisfying \( \text{conf}[v] = 1 \) and \( \text{score}(v) \) are the greatest. At this time, a random selection strategy can be enabled, but from the perspective of the whole algorithm, we hope that the search direction of the maximum term proceeds along the direction that makes more clauses become critical or even stable. Therefore, we use subScore to record this tendency.
The definition of \( \text{subScore}(v, C_i) \) is given as follows:
Given a maximum term \( T \), if the variable \( v \) is flipped, \( \text{flag}[i] \) is incremented, then \( \text{subScore}(v, C_i) = 1 \), if \( \text{flag}[i] \) is decreased, then \( \text{subScore}(v, C_i) = -1 \). If \( \text{flag}[i] \) is unchanged, then \( \text{subScore}(v, C_i) = 0 \).
It can be seen that when \( v \) is not a critical variable of \( C_i \), \( \text{subScore}(v, C_i) = 0 \); when \( v \) is a critical variable of \( C_i \) and the value of \( v \) in \( T \) is different from \( C_i \), flipping it will reduce \( \text{flag}[i] \), \( \text{subScore}(v, C_i) = -1 \); \( \text{subScore}(v, C_i) = 1 \) when \( v \) is a critical variable of \( C_i \) and the value of \( v \) in \( T \) is the same as \( C_i \).
So, the above definition can be written as:
\[
\text{subScore}(v, C_i) =
\begin{cases}
0, & \text{if } v \text{ is not a critical variable of } C_i \\
1, & \text{if } v \text{ is a critical variable of } C_i \text{ and } \text{ref}(T, v) = \text{ref}(C_i, v) \\
-1, & \text{if } v \text{ is a critical variable of } C_i \text{ and } \text{ref}(T, v) \neq \text{ref}(C_i, v)
\end{cases}
\tag{1}
\]
and
\[
\text{subScore}(v) = \sum_{i=1}^{m} \text{subScore}(v, C_i), \tag{2}
\]
where the sum runs over all \( m \) clauses of \( F \).
When there are multiple variables with \( \text{conf}[v] = 1 \) and the same (highest) Score value, we compute their subScore values and flip the variable with the maximum subScore; if several variables share the maximum subScore, the variable to flip is chosen according to the following strategy.
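A compact way to compute subScore, consistent with Equation (1) (our own sketch, not the authors' implementation):

```java
// Sketch: flag[i] counts the variables of clause C_i whose value differs from T;
// flipping v changes flag[i] only if v occurs in C_i (Equation (1)).
class SubScoreSketch {
    /** clauses use signed-literal encoding (+i / -i); term[i] is the value of x_i in T. */
    static int subScore(int v, int[][] clauses, boolean[] term) {
        int s = 0;
        for (int[] clause : clauses) {
            for (int lit : clause) {
                if (Math.abs(lit) == v) {
                    boolean agrees = (lit > 0) == term[v];  // ref(T, v) = ref(C_i, v)?
                    s += agrees ? 1 : -1;
                    break;  // assume v occurs at most once per clause
                }
            }
        }
        return s;
    }
}
```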
3.3 Maximum Score Upper Limit Strategy
In Algorithm GSER using subScore strategy, there may still be multiple variables with the largest Score and subScore. We propose the maximum score upper limit strategy to deal with this situation. The basic idea is to assume that the flipped variable is \( v \), and the maximum term after flipping \( v \) is \( T' \). Calculate the maximum value of Score under \( T' \) as the upper limit of Score after flipping \( v \). For each variable, calculate the upper limit of the Score after the flip to help select the current variable, that is, select the variable flipped with the Maximum Score limit.
Algorithm MaxScoreUpperLimit
Input: CNF formula \( F = C_1 \land C_2 \land \ldots \land C_m \)
Current maximum term $T = \{x_1, x_2, \ldots, x_n\}$
Candidate Set $S = \{x_p, x_{p+1}, \ldots, x_q\}$
Output: flipped variable $x$
1. $\text{max} \leftarrow 0$ $x \leftarrow 0$
2. FOR $x_i$ in $S$ DO
3. \quad $T' \leftarrow T$ with $x_i$ flipped
4. \quad $\text{max}[x_i] \leftarrow 0$
5. \quad FOR $j \leftarrow 1$ TO $n$ DO
6. \quad \quad compute Score($x_j$)
7. \quad \quad IF Score($x_j$) > $\text{max}[x_i]$ THEN $\text{max}[x_i] \leftarrow \text{Score}(x_j)$
8. \quad IF $\text{max}[x_i] > \text{max}$ THEN $\text{max} \leftarrow \text{max}[x_i]$ $x \leftarrow x_i$
9. RETURN $x$
3.4 Clause weighting strategy [4]
The steps above apply only as long as there is a variable whose configuration check is satisfied. When no variable satisfies $\text{conf}[v] = 1$, the algorithm uses a clause weighting strategy to update the weights of some clauses: we first increase the weights of all extensible clauses by one, then randomly pick a clause $C_i$ that can be expanded by $T$, and finally flip the variable in $C_i$ that has not been flipped for the longest time [4].
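The fallback step can be sketched as follows (the weight array, clause indices and the "age" bookkeeping are our own illustrative names, not the authors' code):

```java
import java.util.List;
import java.util.Random;

class ClauseWeighting {
    // Sketch of the fallback when no variable satisfies conf[v] == 1:
    // 1) increase the weight of every extensible clause by one,
    // 2) pick a random extensible clause,
    // 3) flip the variable in it that has not been flipped for the longest time.
    static int pickVariable(List<Integer> extensible, int[] weight,
                            int[][] clauses, int[] lastFlipStep, Random rng) {
        for (int c : extensible) weight[c]++;
        int[] clause = clauses[extensible.get(rng.nextInt(extensible.size()))];
        int oldest = Math.abs(clause[0]);
        for (int lit : clause) {
            int v = Math.abs(lit);
            if (lastFlipStep[v] < lastFlipStep[oldest]) oldest = v;
        }
        return oldest;
    }
}
```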
The framework of the overall algorithm is as follows:
**Algorithm Improved Greedy Search algorithm based on Extension Rules (IGSER)**
Input: CNF formula $F = C_1 \land C_2 \land \ldots \land C_m$
maxTries
Output: Non-expandable maximum term $T = \{x_1, x_2, \ldots, x_n\}$ or no solution found
1. attempt $\leftarrow 0$
2. $T \leftarrow \text{initializeMaxTerm}(F)$
3. WHILE attempt < maxTries DO
4. \quad IF $\sim \text{CanBeExpand}(T, F)$ THEN RETURN $T$
5. \quad compute $P = \{v | \text{Score}(v) > 0$ and $\text{conf}[v] = 1\}$
6. \quad IF $P \neq \emptyset$ THEN
7. \quad \quad compute $Q = \{v | \text{Score}(v)$ is the highest and $v \in P\}$
8. \quad \quad IF $|Q| = 1$ THEN $T \leftarrow T$ with $x \in Q$ flipped
9. \quad ELSE
10. \quad \quad FOR $v \in P$ DO compute the subScore($v$)
11. \quad \quad compute $S = \{v | \text{subScore}(v)$ is the highest and $v \in Q\}$
12. \quad \quad IF $|S| = 1$ THEN $T \leftarrow T$ with $x \in S$ flipped
13. \quad \quad ELSE $x \leftarrow \text{MaxScoreUpperLimit}(F, T, S)$ $T \leftarrow T$ with $x \in S$ flipped
14. \quad ELSE
15. \quad \quad update the weights of extensible clauses
16. \quad x $\leftarrow$ the oldest variable in a random extensible clause
17. \quad attempt++
18. \quad update the configuration of each variable
19. RETURN "no solution found"
4. Evaluations of IGSER
In this subsection, we carry out experiments to evaluate the performance of IGSER on random 3-SAT instances.
4.1 Benchmarks and experiment preliminaries
Uniform Random-3-SAT, phase transition region, unforced filtered: \(100 \leq \text{variables} \leq 200, 300 \leq \text{clauses} \leq 500\). All instances used here are CNF formula encoded in DIMACS CNF format. This format is supported by most of the solvers provided in the SATLIB Solvers Collection\(^1\).
Random-3-SAT Instances with Controlled Backbone Size: in order to fully test the performance of the IGSER, we choose some SAT problems with Controlled Backbone Size instances to test and verify the performance of IGSER.
IGSER is implemented in C++ and compiled with g++. The experiments in this section are run on a machine with a 4-core 3.0 GHz Intel(R) Core(TM) i5-7400 CPU and 4 GB RAM under Linux (Ubuntu 14.04). The cutoff time is set to 10000 ms for both the Uniform Random-3-SAT and the CBS SAT instances. For each instance, each SAT solver is run 100 times with this cutoff, and we report the average.
4.2 Comparing IGSER on random 3-SAT
Table 1 presents the results of comparing these algorithms on random 3-SAT (comparative performance of IGSER against NER, SER and GSER on Uniform Random-3-SAT instances with 100 variables and 430 clauses).
<table>
<thead>
<tr>
<th>Problems</th>
<th>Algorithm (CPU Time /ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>NER</td>
</tr>
<tr>
<td>Uf100-01</td>
<td>-(^a)</td>
</tr>
<tr>
<td>Uf100-02</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-03</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-04</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-05</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-06</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-07</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-08</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-09</td>
<td>-</td>
</tr>
<tr>
<td>Uf100-10</td>
<td>-</td>
</tr>
</tbody>
</table>
\(^a\) - means time out.
GSER shows a substantial improvement over NER and SER on these random 3-SAT instances [10-11]. On most instance classes, GSER achieves higher effectiveness than NER and SER. Table 1 also indicates that IGSER significantly outperforms GSER in terms of running time, and it succeeds in finding a result on all test cases, which to a great extent shows that IGSER improves on GSER.
4.3 Comparing IGSER on CBS SAT problems
Table 2 presents the results of comparing these algorithms on the CBS SAT problems (comparative performance of IGSER against NER and SER on CBS SAT). IGSER shows an absolute advantage over NER and SER on these CBS 3-SAT instances: on all instances, IGSER finds a maximum term that meets the requirements, while NER and SER always time out.
Table 2 also indicates that IGSER can handle different kinds of SAT problems while maintaining good performance.
\(^1\) https://www.cs.ubc.ca/~hoos/SATLIB/index-ubc.html
4.4 The summary of this experiment
The experiments demonstrate that IGSER consistently outperforms its competitors (NER, SER) on both random 3-SAT and CBS SAT instances. It also shows superiority over GSER: in some runs GSER only reaches a local optimum and loops without finding a true maximum term, whereas IGSER avoids this through the maximum score upper limit strategy and the subScore-assisted greedy search. IGSER therefore gives the best performance on these 3-SAT and CBS SAT instances.
5. Conclusion and Future Work
This paper proposes a new extension rule method (IGSER) based on greedy local search. Experiments show that the proposed algorithm outperforms the general extension rule inference method and algorithm GSER on 3-SAT and CBS SAT instances. In future work, we will continue to improve the effectiveness of our algorithm. And we can parallelize the algorithm to improve the efficiency of the SAT problems.
Table 2. Cbs Sat Problems
<table>
<thead>
<tr>
<th>Problems</th>
<th>Algorithm (CPU Time /ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>NER</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_0</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_1</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_2</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_3</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_4</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_5</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_6</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_7</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_8</td>
<td>-</td>
</tr>
<tr>
<td>CBS_k3_n100_m403_b10_9</td>
<td>-</td>
</tr>
</tbody>
</table>
a. - means time out.
References
Association Rules Algorithm Based on the Intersection
Xuegang Chen* and Jie Xiao
College of Software and Communication Engineering, Xiangnan University, Chenzhou, 423000, China
Abstract: Mining association rules in databases is an important topic in data mining research. Traditional association rule methods produce redundant information, need to scan the database many times and generate a large number of candidate itemsets. Aiming at the low efficiency of traditional association rule mining, this paper proposes the algorithm ISMFP, which is based on intersection operations for mining maximum frequent patterns. Firstly, applying the intersection theory of mathematics, a number of concepts and definitions are put forward. Then the process of association rule mining is given and its performance is analysed, after which an example illustrates the implementation of the algorithm. Finally, the experimental results show that ISMFP is efficient at mining frequent patterns, especially when the support threshold is low or the patterns are long.
Keywords: Association rules, data mining, intersection, maximum frequent pattern.
1. INTRODUCTION
Agrawal first proposed the Apriori algorithm for mining association rules from customer transaction databases in 1993 [1]. The algorithm uses an iterative, level-wise search with a bottom-up join-and-prune procedure, but it spends a great deal of time joining itemsets, scanning the database and filtering the candidates generated by the join, and it is only suitable for mining shorter patterns. In view of this, many researchers have optimized the Apriori algorithm [2-6]; typical improvements include a parallel mining algorithm that needs no pruning and generates no candidate itemsets. Han et al. proposed the FP-Growth algorithm [7], which generates no candidate itemsets and only needs to scan the database twice, so the mining efficiency is clearly improved. However, FP-Growth must recursively create a large number of conditional pattern bases during mining, and when the database is very large, building the FP-tree in memory is not realistic. To reduce the high cost of frequent pattern computation and avoid generating many frequent items, Hu et al. proposed the FP-tree-based LFIMiner algorithm [8], which uses pruning and optimization to shrink the search space and thereby effectively improves performance. The VIPER algorithm [9] takes a different approach and uses a vertical representation of the data in the database; although it is more efficient, it still searches from frequent patterns. An improved algorithm for mining maximum frequent itemsets based on the frequent pattern tree was then proposed; it uses a bottom-up search to mine the maximum frequent itemsets and thus speeds up candidate counting, and by producing lower-dimensional infrequent itemsets from the conditional pattern base at every layer during mining, cutting and reducing the dimensions of candidate itemsets can largely reduce their number [10]. An efficient CAR mining algorithm based on an equivalence class-rules tree was proposed, which designs a tree structure for storing the frequent itemsets of the datasets [11]. Song proposed an improvement of the Apriori algorithm based on the KAF and CHF factors to mine multi-valued association rules and established a complete mining-parameter adjustment mechanism that works very well in improving the speed and efficiency of mining [12]. A lattice-based approach for fast mining of most-generalization association rules (MGARs) has been proposed, in which a new algorithm for building a frequent-closed-itemset lattice is introduced, a theorem on pruning nodes in the lattice for rule generation is derived, and an algorithm for fast mining MGARs from the constructed lattice is developed [13]. Loan et al. propose an incremental method for mining class association rules when records are inserted into the dataset: a modified equivalence class-rules tree (MECR-tree) is created from the original dataset and used to generate rules quickly, and the concept of pre-large itemsets is applied to CAR mining to reduce the number of re-scans of the original dataset.
A theorem for quickly pruning infrequent nodes in the tree is developed to improve the process of updating the tree [14]. An efficient approach for mining cross-level closed itemsets and minimal association rules using closed itemset lattices has been proposed; it designs an efficient closed-itemset-lattice-based algorithm that can mine the most relevant minimal cross-level association rules and exploits the parent-child relationship of the lattices while mining cross-level closed itemsets [15]. A new approach based on FEM and DFE for frequent pattern mining (FPM) that runs fast for both sparse and dense databases has also been presented, together with optimization techniques for the proposed algorithms to speed up the mining process, reduce memory usage, and optimize the I/O cost [16].
Aiming at the problem of low efficiency in association rule mining, this paper studies and analyses methods for mining association rules and puts forward an association rule mining algorithm based on intersection. The effectiveness of the algorithm is verified by examples and experiments, and it improves mining efficiency to a certain extent.
Next, we describe the algorithm used in more detail.
2. CORRELATION ALGORITHM FOR ASSOCIATION RULE MINING
The most classical association rule algorithm is the Apriori algorithm. Since it has many inherent defects, researchers have proposed various improved algorithms based on Apriori [1].
2.1. Apriori Algorithm
The Apriori algorithm is the simplest and most basic algorithm for finding frequent itemsets; R. Agrawal and R. Srikant proposed it in 1994 as an original algorithm for mining the frequent itemsets of Boolean association rules. Apriori uses an iterative, level-wise search in which frequent k-itemsets are used to explore frequent (k+1)-itemsets. First, the database is scanned to accumulate the count of each item, the items that satisfy the minimum support are collected, and the set of frequent 1-itemsets, denoted L1, is found. Then L1 is used to find the set of frequent 2-itemsets, denoted L2, and so on, until no more frequent k-itemsets can be found.
The mining of association rules is divided into two steps: (1) find all frequent itemsets. (2) produce strong association rules from frequent itemsets. And the first step decides its overall performance. The first step is introduced as follow:
Algorithm: apriori algorithm
Input: the transactions database for D and min-sup
Output: frequent itemsets L of D
L1 = find_frequent_1-itemsets(D);
for (k = 2; Lk-1 ≠ ∅; k++) {
    Ck = apriori_gen(Lk-1, min_sup);
    for each transaction t ∈ D {        // scan D for counts
        Ct = subset(Ck, t);             // get the subsets of t that are candidates
        for each candidate c ∈ Ct
            c.count++;
    }
    Lk = {c ∈ Ck | c.count ≥ min_sup}
}
return L = ∪k Lk;

Procedure apriori_gen(Lk-1: frequent (k-1)-itemsets, min_sup)
for each itemset l1 ∈ Lk-1
    for each itemset l2 ∈ Lk-1
        if (l1[1] = l2[1]) ∧ (l1[2] = l2[2]) ∧ … ∧ (l1[k-2] = l2[k-2]) ∧ (l1[k-1] < l2[k-1]) then {
            c = join(l1, l2);           // join step: generate candidates
            if has_infrequent_subset(c, Lk-1) then
                delete c;               // prune step: remove unfruitful candidate
            else add c to Ck;
        }
return Ck;

Procedure has_infrequent_subset(c: candidate k-itemset; Lk-1: frequent (k-1)-itemsets)   // use prior knowledge
for each (k-1)-subset s of c
    if s ∉ Lk-1 then
        return TRUE;
return FALSE;
The core of the Apriori algorithm consists of two steps: join and prune.
Step 1: join
To find Lk (the frequent k-itemsets), a set of candidate k-itemsets, denoted Ck, is generated by joining Lk-1 with itself; the elements of Lk-1 are joined as described above.
Step 2: prune
Ck is a superset of Lk: its members may or may not be frequent, but all frequent k-itemsets are contained in Ck. One scan of the database determines the count of each candidate in Ck and hence Lk. However, Ck may be very large, which implies a large amount of computation. To compress Ck, the Apriori property is used: any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset. Thus, if any (k-1)-subset of a candidate k-itemset is not in Lk-1, the candidate cannot be frequent and can be deleted from Ck.
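For concreteness, the join and prune steps can be sketched as follows (a simplified illustration using sorted integer itemsets, not the paper's implementation):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class AprioriGen {
    // Join: combine two frequent (k-1)-itemsets (sorted lists) that share their first k-2 items.
    // Prune: discard a candidate if any of its (k-1)-subsets is not frequent.
    static Set<List<Integer>> aprioriGen(Set<List<Integer>> freqKminus1) {
        Set<List<Integer>> candidates = new HashSet<>();
        for (List<Integer> a : freqKminus1) {
            for (List<Integer> b : freqKminus1) {
                int n = a.size();
                if (a.subList(0, n - 1).equals(b.subList(0, n - 1))
                        && a.get(n - 1) < b.get(n - 1)) {
                    List<Integer> c = new ArrayList<>(a);
                    c.add(b.get(n - 1));                  // join step
                    if (allSubsetsFrequent(c, freqKminus1))
                        candidates.add(c);                // prune step
                }
            }
        }
        return candidates;
    }

    static boolean allSubsetsFrequent(List<Integer> cand, Set<List<Integer>> freq) {
        for (int i = 0; i < cand.size(); i++) {
            List<Integer> sub = new ArrayList<>(cand);
            sub.remove(i);
            if (!freq.contains(sub)) return false;
        }
        return true;
    }
}
```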
The Apriori algorithm based on frequent itemsets uses an iterative, level-wise search. The algorithm is
simple and easy to implement, but it has drawbacks that are difficult to overcome. The number of database scans is too high: every time a candidate set is generated, a full pass over the database is required, so generating frequent itemsets of maximum length N requires N scans. When a large amount of transaction data is stored in the database, the system I/O load is very heavy and each scan takes a long time, so the efficiency is very low.
The Apriori algorithm also generates a large number of intermediate itemsets. Only support is used; the importance of the various attributes is not considered. In real life some transactions occur very frequently while others are very sparse, which creates a problem for mining: if the minimum support threshold is set too high, meaningful rules may not be found; if it is set too low, a large number of impractical rules fill the whole mining process, greatly reducing the usability and efficiency of the mined rules.
Moreover, the algorithm only considers single-dimensional Boolean association rule mining, whereas practical applications may involve multi-dimensional, quantitative, multi-level or imprecise association rules. In such cases the algorithm is no longer applicable and needs to be improved or even redesigned, but it remains the basis of the later improved algorithms.
3. CONCEPTS
Suppose that \( I = \{i_1, i_2, i_3, \ldots, i_m\} \) is a set of m different items. Given a transaction database DB, each transaction \( T \subseteq I \) has a unique identifier TID. If X is an itemset over I and \( X \subseteq T \), we say that transaction T supports itemset X. An association rule is an implication of the form \( X \rightarrow Y \), where \( X \subseteq I \), \( Y \subseteq I \) and \( X \cap Y = \emptyset \).
Under normal circumstances, the support degree and the credibility degree are two very important parameters in mining association rules, and they serve as the standard for measuring the quality of the mined rules. The support degree, Support(\( X \rightarrow Y \)), is the percentage s% of transactions in D that contain both itemsets X and Y; it describes the probability of the union of the two itemsets over all transactions:
\[
\text{Support}(X \rightarrow Y) = \text{Support}(X \cup Y) \tag{1}
\]
The credibility degree, Credibility(\( X \rightarrow Y \)), is the percentage c% of the transactions in D that support X which also support Y, that is, the probability that itemset Y appears in a transaction T given that itemset X appears in T:
\[
\text{Credibility}(X \rightarrow Y) = \frac{\text{Support}(X \cup Y)}{\text{Support}(X)} \times 100\% \tag{2}
\]
The user specifies the minimum support and credibility according to the mining demand, recorded respectively as min_sup and min_con. The support and credibility degrees measure how interesting a rule is to the user. Only rules with Support(\( X \rightarrow Y \)) ≥ min_sup and Credibility(\( X \rightarrow Y \)) ≥ min_con are called strong rules; these are the rules that matter to the user and are called useful association rules. So we only need to find strong association rules: compute all frequent itemsets according to the minimum support degree, mine the long frequent patterns among them, and finally compute the maximum frequent itemsets.
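As a worked example (ours, using the transaction data of Table 1 below): for X = {I1} and Y = {I2}, the itemset X ∪ Y = {I1, I2} appears in transactions T1, T4, T8 and T9, so Support(X → Y) = 4/9 ≈ 44%; X = {I1} appears in six transactions, so Credibility(X → Y) = 4/6 ≈ 67%.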
Definition 1. Suppose that T1 and T2 are itemsets; S, the set of items common to T1 and T2, is called the intersection of T1 and T2, recorded T1 \(\cap\) T2, as follows:
\[
S = T1 \cap T2 = \{ x | (x \in T1) \land (x \in T2) \} \quad (3)
\]
For example, if T1 = \{a, b, c, d\} and T2 = \{a, b, d\}, then T1 \(\cap\) T2 = \{a, b, d\}. Definition 1 can be generalized to any number of itemsets: if the minimum support degree is m, the frequent itemset is FI = T1 \(\cap\) T2 \(\cap \ldots \cap\) Tm.
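In code, this intersection is just a set operation; a minimal Java sketch (our illustration, not from the paper):

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

class IntersectionDemo {
    public static void main(String[] args) {
        Set<String> t1 = new TreeSet<>(Arrays.asList("a", "b", "c", "d"));
        Set<String> t2 = new TreeSet<>(Arrays.asList("a", "b", "d"));
        Set<String> s = new TreeSet<>(t1);
        s.retainAll(t2);                 // S = T1 ∩ T2
        System.out.println(s);           // [a, b, d]
    }
}
```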
Definition 2. For any two finite itemsets T1 and T2, with cardinalities recorded respectively as \(\| T1 \|\) and \(\| T2 \|\), we have \(\| T1 \cap T2 \| \leq \min (\| T1 \|, \| T2 \|)\).
Definition 3. Suppose that X is a set of items from I; if X contains k items, X is called a k-itemset.
Definition 4. If \( X \subseteq T \), transaction T is said to satisfy itemset X; the support of itemset X in the transaction database DB, recorded Sup(X), is the number of transactions in DB that contain X.
Definition 5. If the support Sup(X) of itemset X in the transaction database DB is not less than the minimum support threshold min-sup given by the user or by experts, X is called a large itemset or frequent itemset.
Definition 6. If a frequent itemset L has no frequent proper superset, then L is called a maximal frequent itemset (or maximal frequent pattern); the collection of all maximal frequent itemsets is recorded MFI (Maximal Frequent Itemsets).
Definition 7. If L is a maximal frequent itemset, then every subset of L is a frequent itemset (FI).
4. MAXIMUM FREQUENT PATTERN ALGORITHM BASED ON THE INTERSECTION
4.1. The Algorithm
The ISMFP algorithm uses set intersection and a top-down search; the implementation process is shown in Fig. (1).
Algorithm: ISMFP algorithm
Input: the transactions database for DB and min-sup
Output: MFI
[Fig. (1) is a flow chart with the following steps: determine the maximum and minimum number of items in the transactions; set the minimum support degree according to expert experience and the number of itemsets in the transactions; perform the intersection operation between the min-sup transactions; output the maximum frequent itemsets (MFI).]
Fig. (1). The algorithm implementation process.
1. Lx = NULL
2. Determine the number of transactions in DB (|DB| is the number of transactions) and count the number of items in each transaction; T.count is the item count of a transaction, Min.count is the minimum item count and Max.count is the maximum item count;
3. FOR (T.count = Max.count; T.count >= Min.count; T.count--)
   {  // take the transactions with T.count items out of the transaction database DB; T.count is the number of items
4.     Save the transactions with T.count items to an array in memory.
5.     Invoke the function Intersection(T.count, min-sup)
   }
   Intersection(T.count, min-sup) {
       FI = NULL                           // FI is a frequent itemset
       FOR (i = Max.count; i >= min-sup; i--)
           FI = Ti ∩ Ti-1 ∩ … ∩ Tmin-sup   // the Max.count itemset and each lower-order itemset perform the intersection operation; a superset of FI is not frequent
           Lx = FI
           IF (FI ⊆ Lx)
               Delete FI from the frequent itemsets
   }
The algorithm can be divided into three parts. The first part, counting the transactions, has time complexity \( O(\text{Max.count} \times |DB|) \), where \( |DB| \) is the number of transactions and Max.count is the maximum length of a transaction. The second part, the intersection operations, has time complexity \( O(\text{Max.count} \times \text{Max.count} \times |DB|) \). In the third part, the superset test and the transaction statistics take similar time, so their time complexity is also \( O(\text{Max.count} \times \text{Max.count} \times |DB|) \).
Mining maximum frequent patterns based on intersection is a new method. It avoids the traditional Apriori algorithm's problems of generating a large number of candidate itemsets and repeatedly scanning the database, and thus improves the efficiency of association rule mining; the example analysis and the experimental results in this paper show that the algorithm is effective.
4.2. Example Analysis
The transaction data are from All Electronics [1], as shown in Table 1; there are nine transactions in the database, that is, \( |DB| = 9 \). The minimum support threshold is set to \( \text{min-sup}=2 \).
According to the ISMFP algorithm, we obtain the maximum frequent itemsets \( MFI=\{ \{I_1, I_2, I_3\}, \{I_1, I_2, I_5\}, \{I_2, I_4\} \} \) and the implementation process of ISMFP algorithm is as follows:
**Step 1:** Counting the number of each transaction in the transaction databases DB, as shown in Table 2.
**Step 2:** Start from T8, which has the largest number of items; since the number of transactions of that size is less than \( \text{min-sup} \), the 4-itemset T8 is intersected with each 3-itemset: \( FI_1= T_8 \cap T_9=\{I_1, I_2, I_3\} \), \( FI_2= T_8 \cap T_4=\{I_1, I_2\} \), \( FI_3= T_8 \cap T_1=\{I_1, I_2, I_5\} \). After checking for maximal frequent patterns, \( FI_2 \) is a subset of the intersections of subsequent itemsets and is therefore discarded, giving:
\( MFI=\{ \{I_1, I_2, I_3\}, \{I_1, I_2, I_5\} \} \).
**Step 3:** After intersecting the 4-itemset with the 3-itemsets, the 3-itemsets are intersected with the 2-itemsets, giving \( FI=\{I_2, I_4\} \); after the maximal-frequent-pattern check it is added to \( MFI \), as shown in Table 3, and the operation continues repeatedly in this way.
**Step 4:** Finally, the ISMFP algorithm yields the maximum frequent itemsets \( \{I_2, I_4\}, \{I_1, I_2, I_3\}, \{I_1, I_2, I_5\} \), as shown in Table 4.
**Step 5:** End.
Table 1. Transaction data.
<table>
<thead>
<tr>
<th>TID</th>
<th>Item</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>I1, I2, I5</td>
</tr>
<tr>
<td>T2</td>
<td>I2, I4</td>
</tr>
<tr>
<td>T3</td>
<td>I2, I3</td>
</tr>
<tr>
<td>T4</td>
<td>I1, I2, I4</td>
</tr>
<tr>
<td>T5</td>
<td>I1, I3</td>
</tr>
<tr>
<td>T6</td>
<td>I2, I3</td>
</tr>
<tr>
<td>T7</td>
<td>I1, I3</td>
</tr>
<tr>
<td>T8</td>
<td>I1, I2, I3, I5</td>
</tr>
<tr>
<td>T9</td>
<td>I1, I2, I3</td>
</tr>
</tbody>
</table>
Table 2. Transaction statistics.
<table>
<thead>
<tr>
<th>TID</th>
<th>Item Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>3</td>
</tr>
<tr>
<td>T2</td>
<td>2</td>
</tr>
<tr>
<td>T3</td>
<td>2</td>
</tr>
<tr>
<td>T4</td>
<td>3</td>
</tr>
<tr>
<td>T5</td>
<td>2</td>
</tr>
<tr>
<td>T6</td>
<td>2</td>
</tr>
<tr>
<td>T7</td>
<td>2</td>
</tr>
<tr>
<td>T8</td>
<td>4</td>
</tr>
<tr>
<td>T9</td>
<td>3</td>
</tr>
</tbody>
</table>
Table 3. FI
<table>
<thead>
<tr>
<th>FI</th>
</tr>
</thead>
<tbody>
<tr>
<td>I1, I2, I3</td>
</tr>
<tr>
<td>I1, I2, I5</td>
</tr>
<tr>
<td>I1, I2</td>
</tr>
<tr>
<td>I2, I4</td>
</tr>
</tbody>
</table>
Table 4. MFI
<table>
<thead>
<tr>
<th>MFI</th>
</tr>
</thead>
<tbody>
<tr>
<td>I2, I4</td>
</tr>
<tr>
<td>I1, I2, I5</td>
</tr>
<tr>
<td>I1, I2, I3</td>
</tr>
</tbody>
</table>
5. EXPERIMENT AND DISCUSSION
The experimental environment is a PC with a 3.06 GHz Pentium 4 CPU, 256 MB of memory and a 160 GB hard drive, running Windows Server 2000; the programs are written in Microsoft Visual C++ and the database system is Microsoft SQL Server 2000. The test database is the same as in the test of paper [18]: it has 8124 records, each recording 23 attributes of a mushroom. Under the same conditions, we take the DMFIA algorithm as the reference for the experiment. The minimum support is set respectively to 30%, 20%, 10% and 1%; the experimental results are shown in Fig. (2) and verify the effectiveness of the algorithm proposed in this paper.
According to the idea of the algorithm, the number of items in every itemset is counted first; then, starting with the largest transactions, intersection operations are performed with the lower-order transactions down to min-sup. Because the number of loop iterations and the search space are reduced, the required time is clearly reduced. If a frequent itemset is a subset of a maximal frequent itemset it is not added to the maximum frequent itemsets, and any such subsets already present are deleted. Certainly, the subset detection can be further optimized. This intersection approach is particularly suitable when the support threshold is small or the patterns are long, where the efficiency gain is most obvious.
CONCLUSION
Since the maximum frequent patterns have already implied all frequent patterns, we convert the problem of discovering frequent patterns to find the maximum frequent patterns, avoiding the production of a large number of candidate sets. This paper puts forward a kind of association rule mining algorithm based on the intersection, which reduces the number of cycles and the search space by using the principle of intersection and the maximum frequent pattern. Experimental results show that the proposed algorithm can improve the performance effectively.
CONFLICT OF INTEREST
The authors confirm that this article content has no conflict of interest.
ACKNOWLEDGEMENTS
The authors would like to thank for financial support by youth fund project of the humanities and social sciences of Education Ministry for research on online social network public opinion mining and risk management based on big data analysis, Social science fund project of Hunan Province No. 13YBA302 science and technology plan project of Hunan Province No. 2014FJ3010, 2013FJ3032, Education department scientific research key projects of Hunan Province No.2014A135, education science "twelfth five-year" plan project of Hunan province No. XJK014CGD081, Xiangnan University research fund No.2012Y45 and Key projects of teaching reform of Xiangnan University.
REFERENCES
© Chen and Xiao; Licensee Bentham Open.
This is an open access article licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode), which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.
Java Game Tutorial - Part 1
In this and following tutorials you will learn about Applets, Threads, Graphics and a few other things. By the end of this tutorial you should have the skills to make basic games in Java. For this tutorial you will be making a simple 'space invaders' type game.
I assume that you have basic knowledge of Java as I won't be going into the basic details of how some things work.
First we will be starting by creating an applet and drawing a circle to the applet area.
1. Create a file called 'Game.java'.
2. Open the file.
The next step is to import the necessary packages. For now we will only be requiring 2 packages:
```
import java.applet.*;
import java.awt.*;
```
Now that the importing has been taken care of we will need to set up the Java applet by the following:
```
public class Game extends Applet implements Runnable
{
}
```
This basically gives access to the Applet class and the 'Runnable' makes it so we can implement threads.
The variables come next as we wish to make these ones global:
```
Thread gameThread;
int width=400, height=400, MAX=1;
int currentX[] = new int[MAX];
int currentY[] = new int[MAX];
```
I have decided to use arrays for the X and Y cords now because they will be used at a later stage. It makes it easier to set it up now rather than changing it later.
Next comes the methods. I have included methods that are currently not used at this stage but they are used later.
Start() is used for starting a new thread for the class.
```
public void start()
{
    gameThread = new Thread(this);
    gameThread.start();
}
```
init() is used for setting the initial values:
```
public void init()
{
    currentX[0]=0;
    currentY[0]=0;
}
```
run() is the main method we will use later. It is called once the new thread has been started.
```
public void run()
{
}
```
paint() calls update().
```
public void paint(Graphics g)
{
    update(g);
}
```
update() is where all the actual drawing is done.
```
public void update(Graphics g)
{
    Graphics2D g2 = (Graphics2D)g;
    // Set the background color.
    g2.setBackground(Color.black);
    // Clear the applet.
    g2.clearRect(0, 0, width, height);
    // Set the drawing color to green.
    g2.setColor(Color.green);
    // (X pos, Y pos, Width, Height)
    g2.fillOval(currentX[0], currentY[0], 20, 20);
}
```
* Note this is an applet, which means you must run it from an HTML file. The HTML code to run this applet is as follows, and I will use the same code throughout this series of tutorials.
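The exact listing did not carry over here, but a minimal host page along these lines works; the file name and page title are just placeholders, and the 400 by 400 area matches the width and height used in the code:
```html
<html>
  <head><title>Game</title></head>
  <body>
    <!-- Load the compiled Game.class into a 400x400 applet area -->
    <applet code="Game.class" width="400" height="400">
    </applet>
  </body>
</html>
```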
As you will see if you compile and run this applet, it will draw a green circle on a black background. This is pretty simple and pretty boring, but it sets the basis up for doing things later. At this stage you do not even need the Thread, but I thought I would put it in now to make things easier later.
Now I will go into something more interesting but still rather useless. We will now make the circle bounce around the screen.
In this version I have added more global variables:
```
int speed=10; // Speed at which we will move the objects
// Which direction to move the object
int directionX[] = new int[MAX];
int directionY[] = new int[MAX];
```
These variables are used for making sure the applet doesn't go psycho fast on very fast computers and also to make sure it doesn't run too slowly on slow computers.
```
long start=0;
long tick_end_time;
long tick_duration;
long sleep_duration;
static final int MIN_SLEEP_TIME = 10;
static final int MAX_FPS = 20;
static final int MAX_MS_PER_FRAME = 1000 / MAX_FPS;
float fps=0;
```
Only two functions are modified in this version. The first modification is a simple one. All it does is set the value in directionX[0] to 1, which means the object will travel to the right first, and directionY[0] to 0, which means it will start by moving towards the top:
```
public void init()
{
    currentX[0]=100;
    currentY[0]=0;

    directionX[0]=1;
    directionY[0]=0;
}
```
As I said earlier, run() is the main function, and as you will notice it now has the largest amount of code in it, so it may take a bit to explain. First, here is the run() code in full.
```java
public void run()
{
    while(true)
    {
        start = System.currentTimeMillis();

        for(int i=0; i < MAX; i++)
        {
            if(directionX[i]==1)
                currentX[i]+=speed;
            if(directionX[i]==0)
                currentX[i]-=speed;
            if(currentX[i] <= 0)
                directionX[i]=1;
            if(currentX[i]+20 >= width)
                directionX[i]=0;

            if(directionY[i]==1)
                currentY[i]+=speed;
            if(directionY[i]==0)
                currentY[i]-=speed;
            if(currentY[i] <= 0)
                directionY[i]=1;
            if(currentY[i]+20 >= height)
                directionY[i]=0;
        }

        repaint();

        tick_end_time = System.currentTimeMillis();
        tick_duration = tick_end_time - start;
        sleep_duration = MAX_MS_PER_FRAME - tick_duration;

        if (sleep_duration < MIN_SLEEP_TIME)
        {
            sleep_duration = MIN_SLEEP_TIME;
        }

        fps = 1000 / (sleep_duration + tick_duration);

        try{
            Thread.sleep(sleep_duration);
        } catch(InterruptedException e) {}
    }
}
```
Now onto explaining the function. First this function will continually loop. This is what gives our game moving objects.
The next line is:
start = System.currentTimeMillis();
This line is part of our frame rate calculations. It will set the start time to the system time at the start of the frame.
The next section is a for loop containing the code for calculating the objects position and where to move it to next. The X and Y cords are similar so I will only explain the X cord:
These first two if statements are used for calculating the new position in relation to its current X value:
if(directionX[i]==1)
currentX[i]+=speed;
This says if the current object is moving right then add 'speed' to the current position of the object. Basically if the object is at X:0 and the speed is 10, then the new X position will be 10 (0 + 10).
if(directionX[i]==0)
currentX[i]-=speed;
This section does the opposite to the previous if statement (moves to left).
These last two detect if the object has hit the side of the applet and if so change the direction:
if(currentX[i] <= 0)
directionX[i]=1;
If the current X position is less than or equal to zero then change the direction so it now moves to the right.
if(currentX[i]+20 >= width)
directionX[i]=0;
Again this one does something similar, but it detects if the object has hit the right side of the applet. This one has a small variation in the check though. You will notice it has:
currentX[i]+20
The current X value of the object is at cord 0 of the object (its left most side). This means that X will only be greater than or equal to the width after all of the right hand side has gone out of the applet. So we must add 20, which is the width of the object. Feel free to remove the +20 or change it to larger and smaller values to observe the effects it has.
That concludes the collision detection section.
Next you will see:
repaint();
Basically all this line does is call the painting functions, paint() and update(). I think it does some internal work before going to those functions, though.
You are probably thinking this is all a bit much right now, but don’t worry, only 2 more sections to go.
tick_end_time = System.currentTimeMillis();
tick_duration = tick_end_time - start;
sleep_duration = MAX_MS_PER_FRAME - tick_duration;
if (sleep_duration < MIN_SLEEP_TIME)
{
sleep_duration = MIN_SLEEP_TIME;
}
fps = 1000 / (sleep_duration + tick_duration);
This code is pretty simple and fairly self-explanatory, so I won't go into details. Basically it works out how long it has taken to compute the current frame. If the system is lagging it will reduce 'sleep_duration', and if it is running too fast it will increase 'sleep_duration'. For example, with MAX_FPS set to 20 each frame has a budget of 50 ms (1000 / 20); if a frame took 12 ms to compute, the thread sleeps for the remaining 38 ms. The sleep itself is done in this final section:
try{
Thread.sleep(sleep_duration);
} catch(InterruptedException e) {}
This will simply pause the thread for the value in 'sleep_duration'. This is all done to make sure the game runs at its best on all systems.
If you want you can display the current frame rate by placing this line of code in your paint(Graphics g) function:
```
g.drawString("FPS: "+fps,1,400);
```
Also be aware that the calculation doesn't always give the exact frame rate. It is simply there to regulate the speed of the app.
Well if you compile all that and run it you should get a green circle bouncing around your screen.
You will also notice that it flickers a lot, especially if you increase the frame rate. This is because you can actually see the computer drawing the objects to the screen. In the next tutorial I will explain how to do frame buffering, which will make your objects move smoothly, and more.
Thanks for reading,
Feel free to send any comments.
# Java Game Tutorial - Part 2
Welcome back.
In this tutorial I will show you how to implement buffering into your app. First here are some details on how buffering works.
The current app you have flickers. This is because you can actually see the computer drawing the images to the screen. If you use buffering the computer draws it to one area and then only once it has finished drawing the objects it displays it on the screen.
There are different ways of doing buffering, but for now I will stick with the easiest so you can get an idea of how it all works.
If you have trouble understanding that example this may help. I like to think of buffering like someone drawing on a piece of paper. If you are watching someone draw on a piece of paper you can see every movement the person makes. But if say the person has 2 pieces of paper it could make things better. The person gives you one to look at. While you are looking at that piece of paper the person goes on and draws on the blank one. Once the person has finished drawing that one, the person will switch pages with you, and start drawing again. I know it's a lame way of putting it but it's simple.
Now that all the background is out of the way we will now get to modifying the code from the first tutorial.
First you will need to import:
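The import line itself is not shown above; since `BufferedImage` lives in the `java.awt.image` package, a minimal version of the import is presumably along these lines:
```java
import java.awt.image.*;
```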
We also have two new variables to add:
```java
BufferedImage bufferdImg;
Graphics2D bufferdImgSurface;
```
Scroll down until you find the function `init()` and add the following code:
```java
bufferdImg = (BufferedImage)createImage(width, height);
bufferdImgSurface = bufferdImg.createGraphics();
```
These two lines of code will set up the area to be drawn to.
The last step is to modify the `update(Graphics g)` function. The code is as follows:
```java
public void update(Graphics g)
{
Graphics2D g2 = (Graphics2D)g;
// Set the background color.
bufferdImgSurface.setBackground(Color.black);
// Clear the applet.
bufferdImgSurface.clearRect(0, 0, width, height);
bufferdImgSurface.setColor(Color.green);
// (X pos, Y pos, Width, Height)
bufferdImgSurface.fillOval(currentX[0], currentY[0], 20,20);
g2.drawImage(bufferdImg, 0, 0, this);
}
```
As you see we have changed it from drawing to 'g2' to 'bufferdImgSurface', and only at the very end drawing the whole frame to the screen:
```java
g2.drawImage(bufferdImg, 0, 0, this);
```
Now it's ready to go. You should now have a reduction in the flickering. As I said it's not the best way to do it but it works and it's easy so it is fine for now.
This next section is to show you how collision detection will be working in the game. You will notice that there is already some collision detection with the circle bouncing around the screen, but we will now expand on this. Please note most of this code will not be needed in our game so you may want to make a copy of your current file. Also you can skip this section if you wish but I recommend at least reading through it.
Again we will start in the variables section. Locate the integer variable called MAX. The current value of this is 1 (one circle). We want to have 2 circles bouncing around the screen so we will change MAX to 2.
Next we need to add two new variables:
```java
boolean collided=false;
float dist;
```
'collided' is only true if the distance between the two points is less than the specified amount.
'dist' is the distance between the two points.
In the 'init()' function add:
```java
currentX[1]=0;
currentY[1]=100;
directionX[1]=0;
directionY[1]=1;
```
This code is just the same as previous code so it shouldn't need explaining.
This next section of code should go in the 'run()' function just after the two circles have been moved:
```java
dist = (int)(Math.sqrt(Math.pow((currentX[0]+20)-(currentX[1]+20),2) +
             Math.pow((currentY[0]+20)-(currentY[1]+20),2)));
if(dist < 20)
collided = true;
else
collided = false;
```
The first line of code is the distance formula:
$$\text{dist} = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2}$$
This formula just calculates the distance between two points when given the X and Y cords.
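For example, the distance between the points (0, 0) and (3, 4) works out as:

$$\sqrt{(3-0)^2 + (4-0)^2} = \sqrt{25} = 5$$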
The next section is just an if statement that says if the distance between the two points is less than 20 then they must be touching so set 'collided' to true.
Just a note. You may notice that in the distance formula I have the current position +20. This is because I am adding the diameter of the circle or you would only get the absolute X/Y cord.
The last thing to add is to the 'update(Graphics g)' function:
```java
bufferdImgSurface.fillOval(currentX[1], currentY[1], 20, 20);

if(collided==true)
    bufferdImgSurface.drawString("Collided", 10, 10);
```
Add those lines just before:
```java
g2.drawImage(bufferdImg, 0, 0, this);
```
Compile and run.
You should notice that the two circles both bounce around the screen. When the two circles are touching the word "Collided" is displayed in the top left hand corner.
This is one of the simplest methods of collision detection and the method we will be using to detect a collision between the bullets, player and the enemy(s).
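If you end up writing this check in several places, one option is to pull it into a small helper method. This is just a sketch of my own; the method name `circlesCollide` is not something used later in the tutorial:
```java
// Returns true when two circles of the given diameter overlap.
// (x1, y1) and (x2, y2) are the top-left corners of each circle's
// bounding box, which is how the tutorial stores its positions.
static boolean circlesCollide(int x1, int y1, int x2, int y2, int diameter)
{
    double dx = (x1 + diameter / 2.0) - (x2 + diameter / 2.0);
    double dy = (y1 + diameter / 2.0) - (y2 + diameter / 2.0);
    return Math.sqrt(dx * dx + dy * dy) < diameter;
}
```
Calling `circlesCollide(currentX[0], currentY[0], currentX[1], currentY[1], 20)` gives essentially the same result as the inline check above.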
It's been a long time, but all the basics are now out of the way and it's time for us to start working on the actual game.
This will require many modifications and additions to the code. Just to give you an idea of the size of the game, it's around 7 pages.
In this game we will be using the mouse for input, so we must do a few things to set it up. First you must import the following package:
```java
import java.awt.event.*;
```
You must also add to the class line:
```java
public class Game extends Applet implements Runnable, MouseMotionListener, MouseListener
```
That is all to the mouse section for now. We will be dealing with the mouse listener more throughout the code, but for now we will move onto the variables.
First you can go and delete these variables as they are no longer needed:
```java
int directionX[] = new int[MAX];
int directionY[] = new int[MAX];
```
There are a lot of new variables to add, so I am just going to give you the whole list that we will be using, to make your life and mine easier. Some of these you will already have, some you won't. The comments next to them should explain them pretty well:
```java
BufferedImage bufferdImg;
Graphics2D bufferdImgSurface;
Thread gameThread;
int width=400, height=400, MAX=50, speed=10;
int currentX[] = new int[MAX];
int currentY[] = new int[MAX];
int step=0,              // Number of movements left/right
    direction=1,         // Current left/right direction (0=left, 1=right)
    shipX=width/2-10,    // Current player X position
    shipY=height-45,     // Current player Y position
    mbx=-10,             // The mouse position after a mouse click; the
    mby=-10,             // player's bullet start position is set to this.
    randomShoot=0,       // Used to work out which enemy is shooting
    health=50,           // The player's health
    BNUM=10,             // Number of bullets
    playing=0;           // Is the game playing (0=Playing, 1=Paused, 2=Game Over, 3=Win)
int bX[] = new int[BNUM];   // Bullet X pos.
int bY[] = new int[BNUM];   // Bullet Y pos.
int ebX[] = new int[BNUM];  // Enemy bullet X pos.
int ebY[] = new int[BNUM];  // Enemy bullet Y pos.
long start=0,            // Frame start time
     tick_end_time,      // End frame time
     tick_duration,      // Time taken to display the frame
     sleep_duration;     // How long to sleep for
static final int MIN_SLEEP_TIME = 10,               // Min time to sleep for
                 MAX_FPS = 20,                      // Max frame rate
                 MAX_MS_PER_FRAME = 1000 / MAX_FPS; // MS per frame
float fps=0,             // Current frame rate
      dist;              // Distance between 2 points
```
The first function in our code is 'start()'. This has no changes, so let's move to the next one.
Next is 'init()'. As you remember, this function sets our initial values. It has a few additions, as follows.
This section of code is for drawing a grid of circles 10 by 5.
Set up local integer variables for keeping track of what we have drawn.
```java
int row=10,   // Current Y position
    col=10,   // Current X position
    count=0;  // How many circles have been drawn
```
We will set the first circle to the initial values of 'row' and 'col' so we have a starting point to work from:
```java
currentX[0]=col;
currentY[0]=row;
```
This section actually sets the coordinates for each circle:
```java
for(int i=0; i < 50; i++) {
    count++;
    currentX[i]=col;
    col+=25;
    currentY[i]=row;
    if(count==10){
        row+=25;
        col=10;
        count=0;
    }
}
```
This works by looping through each circle position. This in effect draws 10 circles with the Y value of 10. After it has looped through 10 times count will = 10. It will then add 25 to the 'row' value and draw another 10 circles with the Y value of 35. Each loop the X position is also moved across 25 points. It will keep doing this until 50 circles have been given values.
The following two lines of code are used to start the mouse listener “listening” on the applet:
```java
addMouseMotionListener(this);
addMouseListener(this);
```
'MouseMotionListener' is used for picking up the motion of the mouse. Things such as the X,Y cords and if the mouse is on the applet or not.
'MouseListener' is used for detecting mouse clicks.
The last section in the 'init()' function is just simply to give all the bullets a position off of the screen so they are hidden and ready to be fired.
```java
for(int i=0; i < BNUM; i++){
    bX[i]=-10;
    bY[i]=-10;
    ebX[i]=0;
    ebY[i]=height+10;
}
```
The next function is 'run()'. So many changes have been made to this function that you can basically delete the old one, and I will go through the new code section by section.
```java
while(true){                             // Starts the game loop
    start = System.currentTimeMillis();  // Sets the current time
    if(playing==0){                      // Are we playing or is the game over?
```
In the next section we move the aliens left and right. The code first moves them to the right by adding 1 to step each frame until step is greater than 15. When this occurs it sets step back to 0 and flips the direction, which means they then move to the left. Each time the direction changes, the aliens also move down one row.
```java
step++;
for(int i=0; i < MAX; i++) {
    if(step > 15) {
        if(direction==1) {
            direction=0;
        } else {
            direction=1;
        }
        step=0;
        // Move every alien down one row when the direction changes.
        for(int j=0; j < MAX; j++)
            currentY[j]+=speed;
    }
    if(direction==1)
        currentX[i]+=speed;
    else
        currentX[i]-=speed;
}
```
This next for loop is used to tell if the user has fired a bullet. If they have, and there is a free bullet slot (only 10 bullets can be in flight at once), it sets that bullet to the position the shot was fired from. Any bullets that are visible on the screen are then moved up.
```java
for(int i=0; i < BNUM; i++) {
    if(bY[i] <= 0) {
        bX[i]=mbx;
        bY[i]=mby;
        mbx=-10;
        mby=-10;
    }
    bY[i]-=speed;
}
```
Also related to the bullets is this for loop that detects any collision between the player's bullets and the aliens. This section works by looping through each alien and then each bullet. If the distance between the two is less than 20 then a collision has occurred. The bullet and the alien will then be hidden.
```java
for(int i=0; i < MAX; i++) {
    for(int j=0; j < BNUM; j++) {
        if(!(bY[j]<=0)){
            dist = (int)(Math.sqrt(Math.pow((currentX[i]+10)-bX[j],2) +
                                   Math.pow((currentY[i]+10)-bY[j],2)));
            if(dist <= 20){
                bY[j]=-50;
                currentY[i]=-500;
            }
        }
    }
}
```
The next section is used for shooting the alien bullets. It works much the same as the previous shooting section. However, this one will randomly pick an alien to shoot from.
```java
for(int k=0; k < MAX; k++){
    randomShoot=(int)(Math.random()*MAX);
    if(currentY[randomShoot] >= 0){
        for(int i=0; i < BNUM; i++){
            if(ebY[i] >= height) {
                ebX[i]=currentX[randomShoot];
                ebY[i]=currentY[randomShoot];
                break;
            }
        }
    }
}
```
This is the collision detection section between the alien bullets and the player's ship. Again it is similar to the previous section.
```java
for(int j=0; j < BNUM; j++) {
    if(!(ebY[j]>=height)){
        dist = (int)(Math.sqrt(Math.pow((shipX+10)-ebX[j],2) +
                               Math.pow((shipY+10)-ebY[j],2)));
        if(dist <= 20){
            ebY[j]=height+10;
            health-=10;
        }
    }
}
```
We now need to move all the alien bullets down the screen. I may not have mentioned this before, but notice that the bullet position 'ebY[]' is moved to its current position plus 'speed'. Everything that needs to move, except for the ship, is moved by the value 'speed'. I did this so you can change the speed of the game if you wish.
```java
for(int i=0; i < BNUM; i++){
    if(ebY[i] < height) {
        ebY[i]+=speed;
    }
}
```
This is simple enough. If the player has no health left then set 'playing' to 2, which means it's "Game Over".
```java
if(health <=0)
    playing=2;
```
This is the last section for the game loop. This will detect if all of the aliens have been destroyed, or if the aliens have invaded. If all aliens have been destroyed then set 'playing' to 3 which means the player wins.
```java
int count=0;
for(int j=0; j < MAX; j++){
    if(currentY[j]<0)
        count++;
    if(currentY[j]>=340)
        playing=2;
}
if(count==MAX)
    playing=3;

} else { }   // End of the if(playing==0) block; nothing happens while paused or over.

repaint();   // Redraw the screen
```
As explained this section calculates the frame rate and how long to sleep for. Please refer to the first tutorial for an explanation.
```java
tick_end_time = System.currentTimeMillis();
tick_duration = tick_end_time - start;
sleep_duration = MAX_MS_PER_FRAME - tick_duration;

if (sleep_duration < MIN_SLEEP_TIME)
{
    sleep_duration = MIN_SLEEP_TIME;
}

fps = 1000 / (sleep_duration + tick_duration);

try{
    Thread.sleep(sleep_duration);
} catch(InterruptedException e) {}
}   // Closes the while(true) game loop
}   // Closes run()
```
That's the end of our 'run()' function. The next section for us to look at is the drawing section. It's all a lot to take in but don't worry we are on the home stretch, only one more section after this.
As I mentioned these functions are for drawing to the screen. I will go through most of the code again as I don't want to miss anything out:
```java
public void paint(Graphics g)
{
    update(g);
}

public void update(Graphics g)
{
    Graphics2D g2 = (Graphics2D)g;
    // Set the background color.
    bufferdImgSurface.setBackground(Color.black);
    // Clear the applet.
    bufferdImgSurface.clearRect(0, 0, width, height);
    bufferdImgSurface.setColor(Color.green);
    // Draw the aliens. (X pos, Y pos, Width, Height)
    for(int i=0; i < MAX; i++)
        bufferdImgSurface.fillOval(currentX[i], currentY[i], 20, 20);
    // Draw the red ship (a square)
    bufferdImgSurface.setColor(Color.red);
    bufferdImgSurface.fillRect(shipX, shipY, 20, 20);
    for(int j=0; j < BNUM; j++)
    {
        bufferdImgSurface.setColor(Color.yellow);
        bufferdImgSurface.fillOval(bX[j], bY[j], 5, 5);
        // ... update() continues below
```
This for loop draws all the bullets on the screen; it continues below with the enemy bullets. I made it easy by having 10 bullets for both the aliens and the player, but if you change one of those numbers be aware that it will cause problems in places such as this: you would need to split the bullet drawing into two for loops.
```java
        bufferdImgSurface.setColor(Color.blue);
        bufferdImgSurface.fillOval(ebX[j], ebY[j], 5, 10);
    }

    // Draw a bottom line to our window
    bufferdImgSurface.setColor(Color.red);
    bufferdImgSurface.drawString("_________________________________________________________", 0, 375);
```
These if statements display the game status; for example, if the player loses it will display "****Game Over****".
```java
    if(playing==1)
        bufferdImgSurface.drawString("PAUSED", width/2-10, 390);
    else if(playing==2)
        bufferdImgSurface.drawString("****Game Over****", width/2-10, 390);
    else if(playing==3)
        bufferdImgSurface.drawString("****You Win!****", width/2-10, 390);
```
A simple way of displaying a health bar is to loop while the loop value is less than the value in health. On every loop draw a '|'. Set the X position to the current i value multiplied by 2 to give some spacing:
```java
    for(int i=0; i < health; i++)
        bufferdImgSurface.drawString(" |", (2*i), 390);

    // Draw the buffered image to the screen.
    g2.drawImage(bufferdImg, 0, 0, this);
}
```
That's the last of our graphics section and now we are onto our last section! This section is used for detecting mouse clicks and detecting the mouse position.
Move the ship to the current mouse X position:
```java
public void mouseMoved(MouseEvent e)   { shipX=e.getX()-5; }
public void mouseDragged(MouseEvent e) { shipX=e.getX()-5; }
```
The user clicked a button so start firing a bullet from the current mouse X position and the current ship Y position:
```java
public void mouseClicked(MouseEvent e) {
    mbx=e.getX();
    mby=shipY;
}

public void mousePressed(MouseEvent e) {
    mbx=e.getX();
    mby=shipY;
}
```
The mouse has entered the applet area so set 'playing' to 0 (start the game)
public void mouseEntered(MouseEvent e) { playing=0; }
The mouse has exited the applet area so set 'playing' to 1 (pause the game)
public void mouseExited(MouseEvent e) { playing=1; }
We don't use this function but you still must include it in your code or you will have errors:
public void mouseReleased(MouseEvent e) {}
You should now be able to compile the code and play the game. It's basic, I know, but it should help with your understanding of applets, threads and event handling.
I hope you like my tutorial. If you find any errors please email me, also if you have any comments please email me.
Also check out my website for more tutorials and code samples www.jcroucher.com
This code and tutorial is copyright 2002 John Croucher.
Clarification Ellipses, HPSG and dependent record types
Robin Cooper
Göteborg University
and
Jonathan Ginzburg
King’s College, London
Overview
- Sketch of a phenomenon – clarification ellipsis
- An HPSG approach
- Translating the HPSG approach to a record-based approach
- The big picture – future research directions
Clarification Ellipsis
A: Did Bo finagle a raise? B: (i) Bo?/ (ii) finagle?
**Clausal (focus) reading**: Are you asking if BO (of all people) finagled a raise/Bo FINAGLED a raise (of all actions)
**Constituent (identification) reading**: Who is Bo?/What does it mean to finagle?
1. **Clausal (focus) reading**: (yes/no) question used to confirm the content of a particular subutterance in the context of the whole utterance.
2. **Constituent (identification) reading**: (wh) question used to find out intended content of a subutterance.
Clarification Ellipsis: dialogue systems
- Sys: Would you like to make that trip via Malvern?
User: Malvern?
- Appropriate responses might be:
System: Malvern – M-A-L-V-E-R-N
\textit{constituent/identification}
System: Going via Malvern is the quickest route
\textit{clausal/focus}
System: Yes, Malvern
\textit{either reading}
The system should definitely NOT say
So, you would like to make that trip via Malvern instead of Malvern?
Identification is part of the general interpretation process
A: Did Bo kowtow?
- A’s question: whether the property she has referred to with her utterance of *kowtow* holds of the person she has referred to with the name *Bo*.
- B’s task: find values for these references; finding values is, with caveats, a necessary condition for B to ground A’s utterance, thereby signalling that its content has been integrated in B’s IS.
- Constraint on the representation of utterance types: such a representation must involve a function from or λ-abstract over a set of certain parameters (the *contextual parameters*) to contents. (See work on context dependence from Montague et seq.)
What do you do if identification fails?
- What if B cannot or is at least uncertain as to how he should instantiate in his IS a contextual parameter \( i \) ?
1. Perform a partial update of the existing context with the successfully processed components of the utterance, possibly existentially quantifying over unknown elements.
2. Pose a clarification question that involves reference to the sub-utterance \( u_i \) from which \( i \) emanates.
- Interpretations of previous utterances can be coerced to clarification questions.
Coercion Operations: first pass
- CE gives us some indication concerning both the input and required output of these operations.
- **parameter identification**: output a question paraphrasable as *what is the intended reference of sub-utterance uᵢ?*; partially updated context: repetition of the segmental phonology of uᵢ using rising intonation enables that question to be expressed.
- **parameter focussing**: partially updated context in which the issue under discussion is a question that arises by instantiating all contextual parameters except for i and abstracting over i; In such a context, one can confirm that i gets the value B suspects it has by uttering with rising intonation any apparently co-referential phrase whose syntactic category is identical to u₁’s.
Requisite Modification to HPSG
Did Bo leave?
```
[ root-cl
PHON did bo leave
CAT V [+fin]
C-INDICES {1,2,3, i,j}
[ ask-rel
ASKER i
ASKED j
]
CONT
MSG-ARG
question
PARAMS {}
prop|SOA
leave-rel
AGT 1
TIME 2
]
CTXT|BCKGRD
{utt-time(3), precede(2,3), named(bo)(1)}
]
```
Strategic Modelling Assumptions
- Context consists of distinct but coupled information states: each information state contains a (participant relative) record of public interactions.
\[
\begin{bmatrix}
\text{FACTS} & \text{set of facts} \\
\text{LATEST-MOVE} & \text{(illocutionary) fact} \\
\text{QUD} & \text{p.o. set of questions}
\end{bmatrix}
\]
- QUD represents the issues currently under discussion; locus of control for interactive coherence.
- FACTS represents conversationally presupposed information.
- LATEST-MOVE represents the (content of the) most recent conversational move.
Coercion operations: parameter identification
(1)
\[
\begin{align*}
\text{root-cl} & \\
\text{CTXT-INDICES} & \{ \ldots \# \ldots \} \\
\text{CONSTITS} & \{ \ldots \llbracket \text{CONT} \# \rrbracket \ldots \} \\
\ldots & \\
\Rightarrow & \\
\text{CONTMSG-ARG} & \llbracket \text{question} \rrbracket \\
\text{SAL-UTT} & 2 \\
\text{PARAMS} & \llbracket \llbracket \text{INDEX} \# \rrbracket \rrbracket \\
\text{PROP} & 3 \\
\text{SOA} & \\
\text{CONT} & 3 \\
\text{SIGN} & 2 \\
\text{content-rel} & \\
\text{cont} & 4 \\
\end{align*}
\]
Parameter identification update – example
a. Who do you mean BO?
b. WHO? (= who is Bo)
c. Bo? (= who is Bo)
(2)
```
CONT| MSG-ARG
PROP [question]
PHON bo
CAT NP
CONT|INDEX
CTXT|BCKGRD { named(Bo) }
question
PARAMS { [INDEX] }
PROP [content-rel]
SOA [SIGN]
CONT
```
Coercion operations: parameter focussing
(3)
\[
\begin{align*}
\text{root-cl} & \quad \{ \ldots \} \\
\text{C-INDICES} & \quad \{ \ldots \} \\
\text{CONSTITS} & \quad \{ \ldots \} \\
\text{CONT} & \quad \{ \ldots \} \\
\Rightarrow & \\
\text{CONT|MSG-ARG} & \quad \{ \text{question} \} \\
\text{SAL-UTT} & \quad \{ \text{question} \} \\
\text{MAX-QUD} & \quad \{ \text{question} \} \\
\end{align*}
\]
- Previous utterance
- Repeated utterance in current utterance
- Game-board update
Parameter focussing update – example
a. Did WHO leave?
b. WHO?
c. BO? (= Are you asking if BO left?)
Representing HPSG signs in terms of (something like) dependent record types
Families of dependent record types – functions from records to record types
\( \lambda r : T_1(T_2) \) – a function from records of type \( T_1 \) to the type \( T_2 \) (dependent on \( r \))
“Utterance skeleton”, “meaning”, “HPSG sign”
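A schematic example of such a family (our own illustration, not one of the slides' examples; the labels x and c are arbitrary):

\[
\lambda r : \left[\, \mathrm{x} : \mathit{Ind} \,\right]\ \left(\, \left[\, \mathrm{c} : \mathrm{run}(r.\mathrm{x}) \,\right] \,\right)
\]

Applied to a record whose x field is a particular individual, this yields the type of records containing a proof that that individual runs.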
Representing utterances
\[ u_1 : 0 \text{ Did }_1 \text{ Bo }_2 \text{ leave }_3 \]
\[ u_{1,0-1} \text{ is to represent the utterance of } \text{did} \text{ in } u_1 \]
**Abbreviation**
\[
\left[ f_{u_{i,n-m}} : T \right]
\]
is to be an abbreviation for
\[
\left[
\begin{array}{c}
f_{u_{i,n-m}} : T \\
pf-f_{u_{i,n-m}} : f(u_{i,n-m}, f_{u_{i,n-m}})
\end{array}
\right]
\]
e.g., \[
\left[ \text{utt-time}_{u_{1,0-3}} : Time \right] \text{ (Time the type of time intervals)}
\]
abbreviates
\[
\left[
\begin{array}{c}
\text{utt-time}_{u_{1,0-3}} : Time \\
pf-\text{utt-time}_{u_{1,0-3}} : \text{utt-time}(u_{1,0-3}, \text{utt-time}_{u_{1,0-3}})
\end{array}
\right]
\]
$u_1 : 0$ Did 1 Bo 2 leave 3
$\lambda r : [\ldots ]( \begin{bmatrix} \text{msg}_{u_1,0-3} : ?\text{leave} & \text{ev-time}_{u_1,0-3} \\ \text{cont}_{u_1,0-3} : \text{ask} & \text{sp}_{u_1,0-3} & \text{hearer}_{u_1,0-3} & \text{msg}_{u_1,0-3} \end{bmatrix} )$
More properly:
$\lambda r : [\ldots ]( \begin{bmatrix} \text{msg}_{u_1,0-3} : ?\text{leave} & \text{ev-time}_{u_1,0-3} \\ \text{cont}_{u_1,0-3} : \text{ask} & \text{sp}_{u_1,0-3} & \text{hearer}_{u_1,0-3} & \text{msg}_{u_1,0-3} \end{bmatrix} )$
but I will suppress all the extra $r$’s as there is no risk of confusion.
filling in the dots ...
\[
\begin{align*}
\lambda r : & \begin{cases}
\text{phon}_{u_1,0-1} & : /dId/ \\
\text{phon}_{u_1,1-2} & : /bu/ \\
\text{phon}_{u_1,2-3} & : /liv/ \\
\text{phon}_{u_1,0-3} & : /\text{dI}d\text{bulív}/ \\
\text{utt-time}_{u_1,0-3} & : \text{\textit{Time}} \\
\text{ev-time}_{u_1,0-3} & : \text{\textit{Time}} \\
\text{tense}_{u_1,0-3} & : \text{ev-time}_{u_1,0-3} < \text{utt-time}_{u_1,0-3} \\
\text{ref}_{u_1,1-2} & : \text{\textit{Ind}} \\
\text{res}_{u_1,1-2} & : \text{\textit{named}}(\text{ref}_{u_1,1-2}, "\text{Bo}") \\
\text{sp}_{u_1,0-3} & : \text{\textit{Ind}} \\
\text{hearer}_{u_1,0-3} & : \text{\textit{Ind}} \\
\text{cat}_{u_1,0-3} & : [\text{V, +fin}] \\
\end{cases} \\
\left( \begin{array}{c}
\text{msg}_{u_1,0-3} \\
\text{cont}_{u_1,0-3}
\end{array} \right) \\
\left( \begin{array}{c}
?\text{leave}(\text{ref}_{u_1,1-2}, \text{ev-time}_{u_1,0-3}) \\
\text{ask}(\text{sp}_{u_1,0-3}, \text{hearer}_{u_1,0-3}, \text{msg}_{u_1,0-3})
\end{array} \right)
\end{align*}
\]
Suppose your context is defective - you don’t have a referent for Bo.
\[ \lambda r : \]
\[
\begin{align*}
\text{phon}_{u_1,0-1} & : /dId/ \\
\text{phon}_{u_1,1-2} & : /bu/ \\
\text{phon}_{u_1,2-3} & : /liv/ \\
\text{phon}_{u_1,0-3} & : /dIdbulív/ \\
\text{utt-time}_{u_1,0-3} & : Time \\
\text{ev-time}_{u_1,0-3} & : Time \\
\text{tense}_{u_1,0-3} & : \text{ev-time}_{u_1,0-3} < \text{utt-time}_{u_1,0-3} \\
\text{ref}_{u_1,1-2} & : Ind \\
\text{res}_{u_1,1-2} & : \text{named(}\text{ref}_{u_1,1-2}, \text{“Bo”}) \\
\text{sp}_{u_1,0-3} & : Ind \\
\text{hearer}_{u_1,0-3} & : Ind \\
\text{cat}_{u_1,0-3} & : [V, +\text{fin}] \\
\end{align*}
\]
\[
\begin{bmatrix}
\text{msg}_{u_1,0-3} & : \text{?leave} (\text{ref}_{u_1,1-2}, \text{ev-time}_{u_1,0-3}) \\
\text{cont}_{u_1,0-3} & : \text{ask}(\text{sp}_{u_1,0-3}, \text{hearer}_{u_1,0-3}, \text{msg}_{u_1,0-3})
\end{bmatrix}
\]
Coercion 1 – Lowering
Existential quantification of deficient parameters
She’s asking whether somebody named Bo left
$$\lambda r : \left[ \begin{array}{l}
\text{phon}_{u_1,0-1} : /dId/ \\
\text{phon}_{u_1,1-2} : /bu/ \\
\text{phon}_{u_1,2-3} : /liv/ \\
\text{phon}_{u_1,0-3} : /dIdbulív/ \\
\text{utt-time}_{u_1,0-3} : Time \\
\text{ev-time}_{u_1,0-3} : Time \\
\text{tense}_{u_1,0-3} : \text{ev-time}_{u_1,0-3} < \text{utt-time}_{u_1,0-3} \\
\text{sp}_{u_1,0-3} : \text{Ind} \\
\text{hearer}_{u_1,0-3} : \text{Ind} \\
\text{cat}_{u_1,0-3} : [V, +\text{fin}] \\
\end{array} \right]$$
$$\left[ \begin{array}{l}
\text{ref}_{u_1,1-2} : \text{Ind} \\
\text{res}_{u_1,1-2} : \text{named(}\text{ref}_{u_1,1-2}, \text{“Bo”}\text{)} \\
\text{msg}_{u_1,0-3} : ?\text{leave(}\text{ref}_{u_1,1-2}, \text{ev-time}_{u_1,0-3}\text{)} \\
\text{cont}_{u_1,0-3} : \text{ask(}\text{sp}_{u_1,0-3}, \text{hearer}_{u_1,0-3}, \text{msg}_{u_1,0-3}\text{)} \\
\end{array} \right]$$
Coercion 2 - parameter identification
Ask a question for the value of the parameter
\[ u_2 : 0 \text{ Bo?}_1 \]
Who is referred to by your utterance of “Bo”?
\[
\lambda r : \begin{bmatrix}
\text{phon}_{u_1,0-1} : /d\text{Id}/ \\
\text{phon}_{u_1,1-2} : /bu/ \\
\text{phon}_{u_1,2-3} : /liv/ \\
\text{phon}_{u_1,0-3} : /d\text{Idbulív}/ \\
\text{utt-time}_{u_1,0-3} : Time \\
\text{ev-time}_{u_1,0-3} : Time \\
\text{tense}_{u_1,0-3} : \text{ev-time}_{u_1,0-3} < \text{utt-time}_{u_1,0-3} \\
\text{sp}_{u_1,0-3} : \text{Ind} \\
\text{hearer}_{u_1,0-3} : \text{Ind} \\
\text{cat}_{u_1,0-3} : [V, +\text{fin}] \\
\text{phon}_{u_2,0-1} : /bu \ L-H/ \\
\text{utt-time}_{u_2,0-1} : Time \\
\text{sp}_{u_2,0-1} : \text{Ind} \\
\text{hearer}_{u_2,0-1} : \text{Ind}
\end{bmatrix}
\]
\[
\begin{bmatrix}
\text{msg}_{u_2,0-1} : ? \lambda r' : \begin{cases}
\text{ref}_{u_1,1-2} : \text{Ind} \\
\text{res}_{u_1,1-2} : \text{named}(\text{ref}_{u_1,1-2}, \text{Bo}) \\
(\text{ref}(u_{1,1-2}, \text{ref}_{u_1,1-2}))
\end{cases}
\end{bmatrix}
\]
N.B. This last case splits up the abbreviatory convention concerning ref and pf-ref. The type of pf-ref has been used as the body of the question instead.
Coercion 3 – parameter focussing
Are you asking whether Bo left? (The relevant issue is *who left?*)
\[
\lambda r : \begin{bmatrix}
\text{phon}_{u_1,0-1} & : /dI\acute{d}/ \\
\text{phon}_{u_1,1-2} & : /bu/ \\
\text{phon}_{u_1,2-3} & : /liv/ \\
\text{phon}_{u_1,0-3} & : /dIdbuliv/ \\
\text{utt-time}_{u_1,0-3} & : \text{Time} \\
\text{ev-time}_{u_1,0-3} & : \text{Time} \\
\text{tense}_{u_1,0-3} & : \text{ev-time}_{u_1,0-3} < \text{utt-time}_{u_1,0-3} \\
\text{ref}_{u_1,1-2} & : \text{Ind} \\
\text{res}_{u_1,1-2} & : \text{named(ref}_{u_1,1-2}, \text{“Bo”)} \\
\text{sp}_{u_1,0-3} & : \text{Ind} \\
\text{hearer}_{u_1,0-3} & : \text{Ind} \\
\text{cat}_{u_1,0-3} & : [V, +\text{fin}] \\
\text{phon}_{u_2,0-1} & : /bu L-H/ \\
\text{utt-time}_{u_2,0-1} & : \text{Time} \\
\text{sp}_{u_2,0-1} & : \text{Ind} \\
\text{hearer}_{u_2,0-1} & : \text{Ind} \\
\end{bmatrix}
\]
\[
\begin{align*}
\text{max-QUD}_{u_2,0-1} & : ? \lambda x : \text{Ind} \ (\text{ask} (\text{sp}_{u_1,0-3}, \text{hearer}_{u_1,0-3}, ?\text{leave} (x, \text{ev-time}_{u_1,0-3}))) \\
\text{cont}_{u_2,0-1} & : ? \text{ask} (\text{sp}_{u_1,0-3}, \text{hearer}_{u_1,0-3}, ?\text{leave} (\text{ref}_{u_1,1-2}, \text{ev-time}_{u_1,0-3}))
\end{align*}
\]
Future work
- Translate fragment in Ginzburg and Sag’s book to record based grammar
- Look at general relationship HPSG-RBG
- Incorporate recent work by Thierry Coquand et al. – equalities in record types
- Relate to GF – can/should GF be extended to include this kind of grammar? Are there other ways of achieving similar effects?
- RBG and GoDiS/ibis – is this the way to go to incorporate “real” semantics into our dialogue management? Should RBG provide a field in an information state or should the whole information state be a dependent record type?
- Relationship RBG and DRT/dynamic semantics
- Relationship RBG and “flat” (“bits and pieces”) semantics (minimal recursion semantics, semantic charts)
- Computational issues: logic vs functional programming or both (prolog, haskell, oz)
The hope
Get HPSG, dynamic and flat semantics and dialogue management into a single powerful computationally tractable formalism
Towards a Common Understanding of Business Process Instance Data
Nima Moghadam and Hye-young Paik
School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, 2052 Australia
{nima, hpaik}@cse.unsw.edu.au
Keywords: Business Process Management, Process Instances Data, Data Models, Interoperability, BPMS Architectures
Abstract: Business process management has grown into a mature discipline supported by a large number of commercial and open source products, collectively referred to as Business Process Management (BPM) systems. BPM systems store the process instance information in a physical storage known as Process Instance Repository. In an organisation several BPMS products can co-exist and work alongside each other. Each one of these BPM tools has its own definition of process instances, creating a heterogeneous environment. This reduces interoperability between business process management systems and increases the effort involved in analysing the data. In this paper, we propose a common model for business process instances, named Business Process Instance Model (BPIM), which provides a holistic view of business process instances generated from multiple systems. BPIM consists of visual notations and their metadata schema. It captures three dimensions of process instances: process execution paths, instance data provenance and meta-data. BPIM aims to provide an abstract layer between the process instance repository and BPM engines, leading to common understanding of business process instances.
1 Introduction
A business process is a collection of related activities performed together to fulfill a goal in an organisation (Aguilar-Saven, 2004). A main function of a BPM system is to turn a business process model into an executable program so that the process described in the model is enacted to assist business operations.
A process instance is a concrete running instance of such a program containing (i) a subset of the activities appearing in the model that spawned the instance and (ii) materialised data (e.g., Customer Name, Order Number). For example, given a process model describing a car insurance claim process, a BPM system would enact concrete instances of the model, each instance representing an actual claim being processed and the details of the data involved.
Although business process instances could be short-lived, many process instances are in fact long running, in that they could take hours and days from start to finish. This is because a typical life-cycle of a business process instance could spend most of its life in wait mode (e.g., waiting for a reply from a previous request). When a running process instance reaches the point that it needs to wait, the BPM system maps the instance information directly to physical storage artefacts such as relational tables or XML database and stores it. BPM systems also use a physical storage to store other information such as process instance execution logs. Organisations can use this information to analyse and improve their business processes (Grigori et al., 2004).
In modern enterprise environments, multiple BPM systems and applications co-exist and work alongside each other. In this paper, we examine issues of business process instance management in such an environment. Figure 1 depicts a scenario where a single application (i.e., single process instance) is supported by two sub-processes, each implemented with different BPM solutions. Each BPM system has its own representation of process instances and a proprietary process instance repository, leading to a heterogeneous environment.
The lack of common understanding about business process instances amongst BPM systems could prompt the following problems:
- Having to analyse multiple/heterogeneous sources (e.g., Log Files, Data Tables) to extract complete process instance information.
- Not having enough information to fully describe a process instance. Some BPM systems do not store important information about a process instance (e.g., which user or application started the instance, or snapshots of data during the execution), which makes it impossible to understand the process instance fully.
- Tight-coupling of a process instance model to a physical storage model.
We believe the challenges in creating a process instance model to induce this common understanding are twofold. First, such a model should contain all necessary information that a process execution engine needs to enact, suspend and resume a process instance effectively. Second, the model should be able to aid in the creation of an architectural framework that allows decoupling of business process instance management from an individual BPM system. In this paper, we make the following contributions:
- We propose Business Process Instance Model (BPIM) which provides a holistic view of a business process instance by considering process execution paths, data provenance and relevant metadata.
- We illustrate, through an example scenario, how BPIM decouples the process instance definition from BPM systems and enables different BPM systems and applications to share the same process instance repository and re-use the existing process instance information.
This paper is organized as follows. Section 2 discusses the problems in detail through a motivating scenario. The related work is discussed in Section 3. Sections 4–6 introduce the BPIM model, its components and an application with the motivating scenario. Finally, we discuss the advantages of using BPIM and future work in Section 7.
2 Motivating Example and Problem Background
In this section, we examine the problem area in detail through a motivating scenario, an E-Toll processing application. The application is divided into two sub-processes: Get Customer Account sub-process implemented by a third party solution using jBPM1, and Customer Payment sub-process implemented using an in-house solution with Riftsaw BPM2.
According to the model (Figure 1), Get Customer Account sub-process retrieves a customer account and passes it to Customer Payment sub-process which calculates the final fare to be paid – considering discounts that may apply, and processes the payment.
When required (e.g., during a long wait), an instance of the customer journey process would be stored in its respective BPM engine (i.e., jBPM as part of the Get Customer Account sub-process, and Riftsaw as part of the Customer Payment sub-process). Note that despite the fact that both of these products use an RDBMS database, the underlying data models used to represent and store the process instance information are very different. BPM products use different data structures (e.g., Data Table, RDF, XML) to model the business process instance (Choi et al., 2007; Grigorova and Kamenarov, 2012; Ma et al., 2007).
Based on this setting, let us explore the following scenarios to highlight the issues in the current BPM systems.
1. Data Sharing: A Customer Payment process instance depends on the ‘Customer Account’ and ‘Journey Details’ entities which are generated by the Customer Journey process. However, directly accessing and sharing the data is difficult because jBPM and Riftsaw are using different schemas.
2. Failure and Error Diagnosis: After spending some time waiting in pending status, a Customer Journey instance fails to complete. To investigate the root cause of the failure, the operation team needs to perform the complicated task of going through the log files of the Get Customer Account process in jBPM as well as interrogating the relevant database records in Riftsaw.
3. Migrating Data: The stakeholders of E-Toll application request a new business report showing discount entitlements and payment details for each customer journey completed. The development team needs to modify the business processes and the Web services involved to store the new information in E-Toll application database. However, migrating the information for the existing process instances is hard due to the differences between the process instance models in the two BPM systems.
4. Rollback/Re-start: Due to a bug in the ‘Get Discount Entitlements’ activity, the fare amount for some customers was reduced to zero. To rectify this problem the operation team wants to remove the miscalculated discount entitlements from the process instances and restart the process from ‘Get Discount Entitlements’. There is functionality in jBPM to restart the process instance execution from a specific activity, but it does not roll back the changes which might have happened to the data.
---
1 jBPM, www.jbpm.org
2 RiftSaw Open Source BPEL, riftsaw.jboss.org
5. **Data Inconsistency**: The database system in E-Toll application goes down for a while, but the BPM systems continue to run. This leads to data inconsistency between E-Toll and BPM system databases.
6. **Changing process instance storage technology**: In the fast evolving technology environment, operational decisions are sometimes made that force a change of implementation technology. One such decision could concern the physical storage: either changing to a different vendor, or to a different technology altogether (e.g., from RDBMS to NoSQL). Because the BPM systems are tightly coupled to the underlying storage mechanism, it is impossible to replace it.
To mitigate these types of problems, we propose a common model for representing business process instances, which BPM systems may adopt and use. The model, BPIM (Business Process Instance Model), defines a framework for process instance information; it defines three separate data aspects that can collectively build a common view on process instances. In doing so, BPIM aims to make it easier to obtain a holistic view of the process instance and help build an abstract model that could de-couple the BPMS execution engine from the physical storage.
### 3 Related Work
Most of the academic research work so far have focused on business process modelling and process model repositories (Yan et al., 2012; Choi et al., 2007; Grigorova and Kamenarov, 2012). In-depth discussions about models for business process instances have been largely neglected. We discuss the most relevant streams of academic work below.
**Interoperability**: The issue of interoperability between process instances in a distributed environment is discussed in Zaplata et al. (Zaplata et al., 2010). They have identified BPEL and XPDL as the most popular execution languages and conducted a comprehensive study on the elements in these languages to develop a model for process instances that is flexible enough to be used for both languages. Using this model, a BPM execution engine (a source environment) can transform its native process instance to an interoperable instance and send the information to another BPMS execution engine (a target environment). However, most of the elements in the model focus on migration aspects and do not provide a holistic view of a process instance.
**Artefact Oriented BPM**: Recently, a data-oriented business process view has emerged. The approach allows decoupling of the process instance data from its execution engine. Sun et al. (Sun et al., 2014; Sun et al., 2012) propose the ‘Self-Guided Artefact’ as a holistic view of a process instance. Each self-guided artefact (sg-artefact) contains process instance data and its process model. A workflow system can understand the sg-artefact and execute the process instance. A data-oriented view of the process through artefacts is certainly relevant to our topic. However, we see major differences in that (i) a sg-artefact incorporates a process model; as we mentioned before, BPM systems can use different process modelling languages, and coupling the process instance to a specific modelling language makes it less interoperable; and (ii) a sg-artefact does not cater for instance-specific information such as the Execution Path and Meta-Data.
**Process Mining**: Finally, process mining is a possible method for building a holistic view of process instances in a heterogeneous and distributed environment. The ProM process mining framework (Van Der Aalst et al., 2007) uses an XML format to define a workflow log model. This model contains information about the business process, the process instance and related data. This approach is useful when we are dealing with business processes that have no formal process model to begin with. We see process mining as a bottom-up approach that has to be customized for each system. In many BPM systems, we already have a model from which instances are generated (as described in the Customer Journey process scenario). Instead of logging the activities and trying to analyse them later (a bottom-up approach), we propose to store the instance-related information in a format that any BPM system can understand.
4 Solution Overview
In this section, we present an overview of our solution. As mentioned before, a bottom-up approach to the problem we described would involve logging the activities and related data during the execution of process instances and aggregating and analysing information from different sources (e.g., files, database records), building and maintaining complex mapping information between systems, and so on (Grigori et al., 2004).
For the cases where process models are available (and many BPM systems do require a model to be described), we could take the model-first approach (top-down approach) where process instance data generation is guided by a common understanding on the constitution of process instances.
In the rest of the paper, we will explain the details of our multi-view business process instance model named Business Process Instance Model (BPIM). It aims to be a model for interoperable process instance data and presents a holistic view of a business process instance by integrating different ‘views’ of a process instance. We have identified three views (i.e., dimensions) in this model, which are:
1. Process Instance Execution Path: It describes information about the activities which have been executed during the process instance enactment.
2. Process Instance Data and Provenance: A process instance contains some related data (e.g., order no, customer details). Each activity in the Execution Path can modify this information. This dimension is focused on the data flow and in combination with the Execution Path dimension, it keeps snapshots of an instance during any stage of execution.
3. Process Instance Meta Data: It provides extra information about the elements in the Execution Path and Process Instance Data dimensions. For example, it records when the process instance was created, started and finished, or which user performed a manual activity in the Execution Path.
BPIM consists of visual notations and the BPIM Meta Model. The meta model describes the BPIM elements formally in UML. BPIM contains all the process instance information in a format that should be understandable by all BPM products. We argue that having a standardised understanding of business process instance information increases the interoperability between different BPM products and enables more streamlined business intelligence tools to be built over different BPM systems. Later in the paper, we briefly discuss the impact of our design on current BPMS architectures.
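As a minimal illustration of how the three views might be held together in an implementation, the following Java sketch shows a hypothetical container class; the class and field names are our own assumptions for illustration and are not part of the BPIM specification.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical container tying the three BPIM views of one process
 * instance together. All names are illustrative only.
 */
public class BpimProcessInstance {

    /** View 1: ordered activities that were actually executed. */
    private final List<String> executionPath = new ArrayList<>();

    /** View 2: snapshot id -> serialized data snapshot (the snapshot pool). */
    private final Map<String, String> dataSnapshotPool = new HashMap<>();

    /** View 3: instance metadata such as id, model id, creation time, state. */
    private final Map<String, String> metadata = new HashMap<>();

    public void appendActivity(String activityName) {
        executionPath.add(activityName);
    }

    public void putSnapshot(String snapshotId, String snapshotPayload) {
        dataSnapshotPool.put(snapshotId, snapshotPayload);
    }

    public void putMetadata(String key, String value) {
        metadata.put(key, value);
    }

    public List<String> getExecutionPath() { return executionPath; }
    public Map<String, String> getDataSnapshotPool() { return dataSnapshotPool; }
    public Map<String, String> getMetadata() { return metadata; }
}
```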
5 Business Process Instance Model
In this section, we introduce the individual components of BPIM. We will first present the visual notations of the model and then the BPIM Meta Model that describes the schema of BPIM elements.
5.1 Process Instance Execution Path
The Process Instance Execution Path, or Execution Path for short, focuses on describing the exact execution path that activities took in an instance. Unlike a process model, which contains all possible actions and scenarios in a business process, a Process Instance Execution Path contains just the activities that have been performed during the process instance execution. The following items summarize the differences between these two:
1. A Process Instance Execution Path accommodates the runtime information, so some of the elements we normally see in the process model are removed or replaced by different types of elements. For example, BPMN elements such as 'sub-process' and 'pool' have been removed, because at runtime only the actions that have been executed are important, and how we logically group them does not have any effect on the execution of a process instance.
2. A process model can use both block and graph structures (Ko et al., 2009) but Process Instance Execution Path just uses a graph structure to provide a clear view of the executed action path.
3. A Process Instance Execution Path is made of generic and simple elements. There are different execution languages that are mapped from business process models (e.g., BPEL, XPDL). Using a generic and simple model makes it possible to transform different types of execution languages to our Process Instance Execution Path.
BPMN v2.0 (Object Management Group, 2011) comes with an interchange standard to formally describe process models, which makes it an interoperable process model. A BPMS runtime engine can use it to build and execute process instances. Therefore, in our work, instead of creating new notations and their semantics from scratch, we use a subset of BPMN elements. However, we have also added new elements that are relevant to the runtime information. Each element in the Execution Path has a visual representation to make it easy to track the activities during the process execution.
5.1.1 Execution Path Elements: Activities.
All activities in this model are directly or indirectly inherited from the Activity or FlowNode classes in the BPMN v2.0. Each activity in an Execution Path represents an action which has been performed during the process execution. The rest of this section explains each type of activity and its functionality.
**Start:** An Execution Path always starts with a Start activity. Unlike the process model, each Execution Path can have only one Start element. This is because the Start element represents the creation point of a process instance, and instance creation happens only once.
**End:** The End activity indicates that a process instance has reached its termination point. Each Execution Path can have only one End element.
**Automated Task:** Represents an activity which has been performed by an application.
**Manual Task:** Represents an activity which has been performed by a human.
**Wait:** Process execution might be suspended due to various reasons (e.g., waiting for external events or messages). The Wait element indicates that process instance execution is suspended.
**Call Process Instance:** Specifies that during the current process instance execution, a message or event has been sent to another process instance. This call can be to an existing process instance or it could lead to creating a new instance. It is also possible to call a process instance which is hosted by another BPM system.
**Reference Process Instance:** A process instance during the execution can receive a message or event from another instance. The source instance in the Execution Path will be represented by this reference process instance activity.

5.1.2 Execution Path Elements: Transitions.
A transition connects two activities and shows the direction of the process execution flow. A transition also has another responsibility in the Execution Path: it records how many times the execution engine has passed through it during the execution.
**Normal Transition:** Connects two activities in the Execution Path.
**Message Transition:** Connects the node which generated the message to the receiver node. If the receiver node is located in another process instance, message transition connects the source activity to ‘Call Process Instance’.
**Event Transition:** Event transition connects the node which has generated the event to the first activity in the event handler chain. The target node for event transition can exist inside the same process instance or in another instance. If target activity is in another process instance, event transition connects the source activity to ‘Call Process Instance’.
**Gateway Transition:** Connects two activities in the Execution Path. This transition indicates that a decision has been made during the process instance enactment and this path was chosen as a result of that decision.
5.2 Process Instance Data
Process instance data contains information relating to the goal the corresponding instance aims to fulfil. Here, we first examine the data structure characteristics of process instance data. Then, we introduce the Process Instance Data Snapshot, or Data Snapshot for short, Data Snapshot Graphs and Data Snapshot Pools and their visual notations.
5.2.1 Data Elements in a Process Instance
Different types of data exist in a process instance. These data types can be grouped into the following categories:
- Basic Data Types: Each BPM system comes with built in data types (e.g., byte, integer, character) for defining process instance variables (Qin and Fahringer, 2012).
- Complex Data types: Some information in a process instance has complex data structure (e.g., business entities, documents). A complex data structure is composed of basic data types as attributes and builds a new data type.
- Arrays: An array contains a collection of data with the same or different data types (e.g., basic or complex data item or another array). For example, in the Customer Payment process ‘Discount Entitlements’ is an array.
During the process instance enactment, activities in the Execution Path can introduce new data or modify the existing instance data. Each activity in the Execution Path may define input or output data items. Table 1 defines the input and output for the activities in the Execution Path. None of the transitions in the Execution Path modify the data items, so they do not have data input/output and are not listed here. Also, note that the 'End' and 'Wait' activities do not change the instance data; these activities have no data input or output.
Keeping track of the changes in the process instance data before and after the execution of an activity can be valuable. Chebotko et al. (Chebotko et al., 2010) state that data provenance management is an essential component for interpreting results, diagnosing errors and reproducing the same results in scientific workflows. Although most research has focused on data provenance in scientific workflows, recently the same concepts have been applied to industrial systems. Shamdasani et al. (Shamdasani et al., 2014) discuss the usefulness of data provenance in BPM systems and propose a workflow system that can store provenance data.
BPIM, similar to scientific workflows, stores the data provenance as Data Snapshots and Graphs and Data Snapshot Pools. Each Data Snapshot of a process instance represents the state of data items at a specific point of execution (i.e., before or after execution of an activity in the Execution Path). Data Snapshot Graphs capture the transition of Data Snapshots by the activities.
A Data Snapshot Pool is a repository of all Data Snapshots and each snapshot is identified by a unique id. Each node in a Data Snapshot Graph contains this unique id which points to the actual instance of the Data Snapshot. The Data Snapshot Pool helps the Data Snapshot Graphs to share the same Data Snapshots across different nodes in the graph.
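A minimal sketch of the pool-and-graph idea follows, under the assumption that snapshots are identified by string ids; the classes below are illustrative and not prescribed by BPIM.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative snapshot: the state of all data items at one execution point. */
record DataSnapshot(String id, Map<String, Integer> itemVersions) { }

/** Illustrative graph node: stores only snapshot ids, not the snapshots themselves. */
record SnapshotGraphNode(String activityId, String snapshotIdBefore, String snapshotIdAfter) { }

/** The pool maps snapshot ids to snapshots so that graphs can share them. */
class DataSnapshotPool {
    private final Map<String, DataSnapshot> pool = new HashMap<>();

    void add(DataSnapshot snapshot) {
        pool.put(snapshot.id(), snapshot);
    }

    DataSnapshot resolve(String snapshotId) {
        return pool.get(snapshotId);
    }
}
```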
5.2.2 Data Snapshots and Graphs
In the following, we present the details of Data Snapshots and Graphs along with their visual notations. The Data Snapshot Graph is a directed graph and it shows:
1. The state of data before and after the execution of each activity in the Execution Path
2. The flow of data items between activities during the process instance execution
3. Any errors and faults that occurred before or after the execution of an activity
Figure 4 shows the visual notations. To explain:
Table 1: Data Input/output for activities in the Execution Path
<table>
<thead>
<tr>
<th>Activity Name</th>
<th>Input</th>
<th>Output</th>
</tr>
</thead>
<tbody>
<tr>
<td>Start</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>End</td>
<td>✗</td>
<td>✗</td>
</tr>
<tr>
<td>Automated Task</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td>Manual Task</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Wait</td>
<td>✗</td>
<td>✗</td>
</tr>
<tr>
<td>Call Process Instance</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td>Reference Process Instance</td>
<td>✗</td>
<td>✓</td>
</tr>
</tbody>
</table>
Data Item: The basic and complex data types are represented by the data item notation. A data item displays a text label and a number: the text is the data item's name and the number is the data item's version.
Data Item Array: This notation represents an array of basic or complex data types. Similar to a data item, a data item array displays the array's name and version number.
Data Transition: Each activity in the Execution Path maps to a transition in the Data Snapshot Graph. When an action changes the data during the process instance execution, it creates a brand-new version of the data item and connects the older version to the new version with a directed edge in the graph.
Null Data Item: Some activities in the Execution Path do not have input or output data. The null data item is used in these cases to show that the activity does not produce any output data or does not require input data.
To create a snapshot of data, we use the following rules in the BPIM framework (a minimal sketch follows the list):
- Each data item has an identifier.
- Each data item has a version number.
- When an activity in the Execution Path modifies an existing data item, it creates a brand-new instance of that object with the same identifier and a different version.
- When an activity in the Execution Path creates a new data item, it assigns a unique identifier and version to it.
- The initial version of every data item is 1.
- A data item identifier is a unique id that can be used to retrieve the object, but there may be more than one instance of that object with different version numbers.
- The combination of item identifier and version makes an item unique.
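The Java sketch below encodes these rules directly: modifying an item produces a new object with the same identifier and an incremented version, and the (identifier, version) pair is the unique key into the pool. The class and method names are assumptions for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative immutable data item: the identifier stays fixed, the version changes. */
record DataItem(String identifier, int version, Object value) { }

class VersionedItemPool {
    /** Keyed by "identifier:version", so the combination is unique. */
    private final Map<String, DataItem> items = new HashMap<>();

    /** Rule: a newly created item starts at version 1. */
    DataItem create(String identifier, Object value) {
        DataItem item = new DataItem(identifier, 1, value);
        items.put(key(item), item);
        return item;
    }

    /** Rule: modification creates a brand-new instance with the next version. */
    DataItem modify(DataItem current, Object newValue) {
        DataItem next = new DataItem(current.identifier(), current.version() + 1, newValue);
        items.put(key(next), next);
        return next;
    }

    /** Retrieve a specific version of an item by its identifier. */
    DataItem get(String identifier, int version) {
        return items.get(identifier + ":" + version);
    }

    private String key(DataItem item) {
        return item.identifier() + ":" + item.version();
    }
}
```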
5.3 Process Instance Metadata
BPIM also represents information about the life-cycle of a process instance (i.e., creation, enactment and termination). Some of this information is merely informational (e.g., creation date and time) and some of it is needed by the execution engine (e.g., process instance state) to enact the process instance.
We describe the Process Instance Metadata as a relation with the following tuple \{id, name, modelId, creationDateTime, endDateTime, creator, server, state\}, where modelId is the process model Id, creator is the name of the application or user that created the process instance, server is the host which created the instance, and state is the current state of the process instance.3
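For illustration only, this relation could be mapped onto a plain Java record mirroring the tuple above; the field types are assumptions, not part of the BPIM definition.

```java
import java.time.LocalDateTime;

/** Illustrative mapping of the Process Instance Metadata tuple to a record. */
record ProcessInstanceMetadata(
        String id,
        String name,
        String modelId,                  // id of the process model the instance was created from
        LocalDateTime creationDateTime,
        LocalDateTime endDateTime,
        String creator,                  // application or user that created the instance
        String server,                   // host that created the instance
        String state) { }                // current state of the process instance
```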
5.4 BPIM Meta Model
Along with the visual notations, BPIM defines a meta model using UML to formally describe the elements of the model as well as the schema for the elements. That is, the BPIM meta model is a model which describes all the elements in the Process Instance Execution Path and Data Snapshots and Pools.
5.4.1 Execution Path Meta Model
Figure 5 provides the meta model for the Execution Path elements. In the following, we will closely look at the schema information for the elements in the Execution Path.
Activities: All the activities in the BPIM Execution Path share some common attributes. We define them as a relation with the tuple \{id, name, startDate, endDate, performer, server, state, mappingCorrelationId\}, where performer is the name of the person/application that executed the activity, server is the host which executed the activity, and mappingCorrelationId is the correlationId used to identify the target element in the execution language during the mapping process. The following tuples describe the additional attributes for each activity type in the Execution Path:
- Automated Task: \{serviceName, serviceURL, serviceGroup, applicationName, applicationId\}, where serviceName is the name of the service which BPM system calls. A service refers to any object which can process the instance data and provide a response back (e.g., Java Object, Web
3From here on, we will skip descriptions of the attributes whenever the names themselves are descriptive enough.
service), applicationName and applicationId are the name and Id of the application which is hosting the service,
- **Manual Task**: \{userId, userName, role, comments, description, organisation, department\}, where userId, userName, role are the details of the user who performs the task, extra comments made and task descriptions are captured in comments and descriptions respectively.
- **Wait**: \{duration, expiryDateTime, interrupted\}, where duration is the period of time which process execution was suspended (ExpiryDateTime should be empty), expiryDateTime specifies the date and time which process instance execution can resume (Duration should be empty), interrupted specifies if Wait was interrupted
- **Call Process Instance**: \{targetInstanceId, targetActivityId, targetServer\}, where the details of the target process instance and activity are stored
- **Reference Process Instance**: \{sourceInstanceId, sourceActivityId, sourceServer\}, where the details of the source process instance and activity are stored
**Transitions**: All the transitions in the Execution Path have some common attributes. We define them as a relation with the following tuple \{id, name, from, to, mappingCorrelationId, traverseCounter\}, where mappingCorrelationId is the correlationId used to identify the target element in the execution language during the mapping process, and traverseCounter records how many times the execution engine has passed through this transition.
In the following, the tuples describe the extra attributes for each transition type in the Execution Path; a minimal sketch of the common activity and transition schema follows the list:
- **Event Transition**: \{eventId, eventType, eventName\}, where eventType refers to the type of the event (e.g., Message, Timer)
- **Message Transition**: \{messageId, messageName\}, where the message Id and Name are message details
- **Gateway Transition**: \{gatewayId, gatewayType, gatewayName\}, where the gateway Id, Type and Name are gateway details
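One possible realisation of these relations is a pair of abstract base classes holding the common attributes, with each concrete type adding its own fields. This is a hedged sketch of how the meta model might be implemented, not part of the BPIM definition itself.

```java
import java.time.LocalDateTime;

/** Common attributes shared by every activity in the Execution Path. */
abstract class ExecutionPathActivity {
    String id;
    String name;
    LocalDateTime startDate;
    LocalDateTime endDate;
    String performer;            // person or application that executed the activity
    String server;               // host that executed the activity
    String state;
    String mappingCorrelationId; // links back to the element in the execution language
}

/** Example of a concrete activity type adding its own attributes. */
class AutomatedTask extends ExecutionPathActivity {
    String serviceName;
    String serviceUrl;
    String serviceGroup;
    String applicationName;
    String applicationId;
}

/** Common attributes shared by every transition in the Execution Path. */
abstract class ExecutionPathTransition {
    String id;
    String name;
    String from;                 // id of the source activity
    String to;                   // id of the target activity
    String mappingCorrelationId;
    int traverseCounter;         // how many times the engine passed through this transition
}

/** Example of a concrete transition type adding its own attributes. */
class GatewayTransition extends ExecutionPathTransition {
    String gatewayId;
    String gatewayType;
    String gatewayName;
}
```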
### 5.4.2 Process Instance Data Meta Model
The process instance data meta model describes the structure and schema of Data Snapshots and Graphs. Figure 6 provides the meta model for the elements.
As discussed in Section 5.2.1, the process instance data elements can have basic or complex data types. Depending on the complexity of the data type, it can have different metadata (e.g., a document can have author and size attribute, a Customer entity can have name and address). In this section, we only list the common attributes for these data types.
The following tuples describe the attributes for each element in the Snapshots and Graphs:
- **Data Item**: \{id, dataItemObject, version, creationDateTime, type\}, where dataItemObject is a reference to a data object.
- **Data Item Array**: \{id, dataItemArrayObjects, version, creationDateTime, size\}, where
dataItemArrayObjects is a reference to an array of data objects, size is the number of items in this array.
- Data Transition: \{id, activityId, dataInput, dataOutput\}, where activityId refers to an activity in the Execution Path.
- Data Input: \{id, dataElementIds\}, where dataElementIds specifies the data inputs for a data transition.
- Data Output: \{id, dataElementIds\}, where dataElementIds specifies the data outputs for a data transition.
6 Application of BPIM
In this section, we first present how BPIM is applied to the customer journey process scenario and highlight some of the salient points regarding the problem discussions in Section 2. Then, we discuss the possible implication and implementation options of the model in the current BPMS architectures.
6.1 Customer Journey Process through BPIM
We use the customer journey process presented in Section 2 and create a process Execution Path, Data Snapshots and Data Snapshot Pools. In doing so, we assume that: (i) the BPM systems involved have adopted the BPIM framework and implemented it, and (ii) they have added new functionality to support calling a process instance hosted by the other tool. We also assume that the BPM systems share the same process instance repository.
In the following, we illustrate each BPIM component in the context of a customer journey process instance.
6.1.1 The Execution Paths
Figure 7 shows an Execution Path of a customer journey process instance, made up of two Execution Paths from the sub processes: one from Get Customer Account process instance in jBPM, the other from Customer Payment process instance in Riftsaw.
The Get Customer Account Execution Path displays all the steps taken to retrieve a customer account. After loading the customer account, jBPM execution engine sends a message to Riftsaw to create a new Customer Payment process instance and terminates. The 'Call Process Instance' activity is a link between these two instances. The Customer Payment Execution Path also has a corresponding 'Reference Process Instance' activity which points to the Get Customer Account process instance.
All the transitions in the Execution Path are marked with a number. This number shows how many times the execution engine has traversed that transition. In this example, the 'Apply Discount' node has two input transitions, which are marked with 1 and 2. From this, we know that three discount entitlements were applied for this journey. The example also shows that the Customer Payment Execution Path has no 'End' activity for this process instance. This means the process is not finished yet and is waiting to try to call the payment service again.
6.1.2 The Data Snapshots
Similar to the Execution Paths, we have two Data Snapshots from Get Customer Account and Customer Payment. Figure 8 displays these two side by side.
As mentioned in Section 5.2.2, none of the transitions in the Execution Path have any effect on the data, so they do not appear in the Data Snapshots. The number beside the description of each data object is the version number. Having a version number helps to distinguish multiple versions of the same object. For example, the 'Get Discount Entitlements' activity uses the 'Customer Account' entity to fetch the available discounts for this customer. The result is an array of discount entitlements. There are three discount entitlements for this customer and each one is applied individually to the fair amount. As a result, we end up with four versions of the 'Fair Amount' entity. In order to simplify the notation, if an activity (e.g., a for-each loop) produces multiple versions of the same data item, we display only the latest version of that data item. All the intermediary versions of the data item exist in the Data Snapshot Pool.
Figure 9 illustrates the Data Snapshot Pool for the Customer Payment process instance.
Figure 9: Customer Payment Process Instance Data Snapshot Pool
From these illustrations, let us examine how BPIM can help solve the problems mentioned in Section 2.
1. **Data Sharing:** Both BPM systems use the same process instance repository and understand BPIM. After the Get Customer Account process instance finishes, it sends a signal to Customer Payment with the unique instance id. Customer Payment uses the id to locate the process instance in the repository and continues the process.
2. **Failure and Error Diagnosis:** The operation team can use the generated Execution Path and Data Snapshots to quickly identify issues that may have caused the error. The data presented in BPIM is already aggregated and streamlined per instance.
3. **Migrating Data:** Since BPIM organises and stores all the relevant data to the process instances, it is possible to develop a query language to interrogate the process instance records and extract any relevant information (e.g., customer details, discount entitlements and payment) to be mapped to the new requirements.
4. **Rollback/Re-start:** BPIM makes it possible to rollback the changes and restore the process instance to a specific point during the execution, as all the changes to the data and the activities are recorded in the Execution Path, Data Snapshot and Pool.
5. **Data Inconsistency:** By using BPIM, the data inconsistency problem no longer exists. BPIM consolidates all the process instance information in one place and there is no need to maintain multiple, isolated stores.
6. **Changing process instance storage technology:** BPIM changes the way BPM systems interact with the physical storage. It provides an abstraction layer between the BPM system and the process instance repository, allowing for the physical repository mechanism to be swapped in and out with little effort.
### 6.2 BPM System Architecture with BPIM
In this section, we look at the impact of adopting an interoperable process instance model on existing BPM system architectures. BPIM acts as an abstract data layer between the process runtime engine and its physical storage. This abstract layer decouples the runtime engine from the physical storage and makes the runtime engine completely independent of the constraints of its storage. This way, the runtime engine can focus only on the execution of process instances and does not need to know about the physical storage's data structure or the commands to insert, delete or update a process instance. Another effect of this model is that BPM systems do not need to rely on a particular query language (e.g., SQL) to analyse the process instance data. We envisage that in the future BPIM, as a complete and mature solution, would provide an implementation-agnostic query and analytic language that interacts with the BPIM model.
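A minimal sketch of such an abstraction layer follows, assuming a repository interface that the runtime engine programs against while concrete adapters (relational, NoSQL, and so on) implement it; all names are hypothetical.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical storage-agnostic repository the runtime engine would use.
 * Concrete implementations can be swapped without touching the engine.
 */
interface BpimInstanceRepository {
    void save(String instanceId, String serializedBpimInstance);
    Optional<String> load(String instanceId);
    void delete(String instanceId);
}

/** Example adapter backed by an in-memory map, useful for tests. */
class InMemoryBpimInstanceRepository implements BpimInstanceRepository {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    @Override
    public void save(String instanceId, String serializedBpimInstance) {
        store.put(instanceId, serializedBpimInstance);
    }

    @Override
    public Optional<String> load(String instanceId) {
        return Optional.ofNullable(store.get(instanceId));
    }

    @Override
    public void delete(String instanceId) {
        store.remove(instanceId);
    }
}
```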
Figure 10 shows the BPM systems architecture after using proposed process instance model.

7 Conclusion and Future Work
In this paper, we proposed an interoperable model which provides a holistic view of process instances. The model is designed to capture process execution paths, instance data provenance and process context metadata. The model may be adopted by BPM systems as an abstraction layer between the execution engine and the physical storage. This way, all BPM systems can share their process instances with each other.
Currently, we are working to provide a full mapping between the elements in BPMN and BPEL to/from BPIM elements and to realise a full transformation algorithm. A prototype will be developed to show how all these components work together and help build a holistic view of process instance information.
REFERENCES
---
Chapter 8
Web enabled client-server DEDIP
1. Introduction
2. Model
   2.1 Navigation
   2.2 Application Configuration
   2.3 Application building
   2.4 Application execution and monitoring
   2.5 Error Handling
   2.6 Session management
   2.7 DEDIP system management
3. Utilities
   3.1 File transfer
   3.2 Application Results storage
4. Overview of analysis and design
5. Case study
6. DEDIP usage for web application development
7. Conclusion
1. Introduction
This research project aimed at studying image processing software optimization to fulfil its processing requirements. The objective of this research work was not confined to a proof-of-concept implementation of new ideas but also aimed at providing the necessary tools. Hence, a full-fledged Development Environment for Distributed Image Processing (DEDIP) was not only conceptualized but also operationalized [VI]. It is discussed in detail in chapter 7.
"Ease of use" was the main theme behind this application-oriented research work. Hence, the effort was not limited to software operationalization. The survey concluded that the tool has proved its usefulness for IRS-1C. However, the tool needs to fulfil the following requirements to make it more useful and easier to use:
- The model supports the VAX/VMS and Unix operating systems. It should be made truly system independent.
- The users are required to carry out a few tedious and error-prone tasks. The aim of this research work was to provide a tool that is easy to use. Hence, the model needs to be augmented to make it more usable.
Users need to edit the application configuration file to provide the process interdependency information. They have to strictly follow the predefined format; they need to study the format and adhere to it. Furthermore, the DEDIP supports transfer of the required files from one node to another as per application need, and the user needs to provide the list of such files in a predefined format. The user also needs to insert the DTHS or DTSH programs into his application configuration at the proper place, as seen in figure 7.1. This can be easily visualized in graphical mode, but users found it difficult to edit in the predefined format. It would be better to make this interface easier through a visual interface. We adopted the GUI usage for parallel compilers from [38] and extended it for configuring parallel and distributed applications under DEDIP.
- Users need to build their executables on all the target nodes and keep them at predefined locations. Hence, users have to carry out this tedious work whenever they modify their applications. It was therefore decided that DEDIP should also support automatic application building in case the user provides the required source code and make-file. PVM [39] also supports such a facility; in contrast to PVM, we support parallel compilation.
- The DEDIP depends entirely on the host system in its master-slave architecture. Hence, a host system failure is a bottleneck. The DEDIP provided the facility to continue with the previous session with minimum loss of processing power. However, it needs the host system to be up continuously, and it has to wait for failure diagnosis and recovery action.
Internet technology has proved its usefulness not only in the information area but also in workflow automation. This research project decided to study whether Internet technology is useful for distributed image processing. It was found that DEDIP can provide more flexibility for user interaction by using Internet technology.
A study was also conducted for usage of Java in making the DEDIP system independent.
It was decided to make the model more generic covering wider scope of usage.
A new architecture for DEDIP was worked out, exploring object-oriented modeling techniques in the web domain. It is a three-tier architecture instead of a master-slave model. The browser-based GUI provides a roaming profile to the application designer, operations manager and operators. It addresses the above requirements for providing ease of use. The augmented design addresses all the important redundancy issues, making the model fault tolerant. Although the main aim was to provide an environment for image processing applications, the design and architecture is truly generic for use by other applications of a similar kind.
This chapter discusses the new architecture of the DEDIP. Section 2 briefly describes the model along with new facilities.
2. Model
The web-based DEDIP model is a three-tier architecture: DEDIP GUI, DEDIP server and DEDIP agents, as shown in figure-8.1. The tasks of the GUI, server and agent are the same as those of OPRINT, HostManager and SlaveManager in chapter 7. The DEDIP server has the additional responsibility of supplying data as required by the GUI. The task of the GUI is restricted to user interaction.
The DEDIP GUI is the web enabled graphical user interface making the entire user-interaction truly system independent. It supports various forms for application configuration, application building, application operation initiation, application progress monitoring, and session controlling. The user initiates the interaction by visiting a predefined site using a standard browser. The standard web server loads the required GUI on the web browser.
Figure-8.1: Client-Server model of Web Dedip
At the back-end it has the DEDIP server running on the web site. The DEDIP GUI submits tasks to the DEDIP server. The DEDIP server initiates the execution of the tasks as per the configuration information. It requests the remote agents to schedule the processes and monitors the entire session's progress. The DEDIP server maintains complete information about all the applications configured on the web site. The DEDIP server exchanges information with the DEDIP backup server, making the model fault tolerant.
The task of the DEDIP scheduler-agent is very simple (the same as SlaveManager). It accepts requests from the DEDIP server, executes them and provides the status information when completed. It has process building (compilation), execution and monitoring capabilities. It can schedule multiple processes in parallel. It does not control the synchronization among the parallel processes; instead it depends on the DEDIP server for this job. It treats each process as a single independent entity.
The DEDIP not only caters to the requirements of application designer, but also addresses all the requirements of the operations manager as well as operators. The application configuration and building is a privileged task, carried out either by the application designer or operations manager. During the regular operations, the operator can initiate any required application, monitor progress, cope with error handling, and terminate the application, if necessary.
Object-oriented modeling (implemented in Java) is used for the design of the augmented DEDIP [DWPIP-3]. The application is modeled as an object, while each process is modeled as an embedded object. The object-interlinking capability is used to maintain the interdependency information for an application. Object serialization is used for storing the information, including dynamic information, and the same mechanism is used for communication among the DEDIP GUI, server and agents.
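As a hedged sketch of this design, the application could be modelled as a serializable object that embeds its processes and their interdependencies, so the same object can be persisted and exchanged between the GUI, server and agents; the class names below are assumptions, not the actual DEDIP classes.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

/** Illustrative process object embedded inside an application. */
class DedipProcess implements Serializable {
    String name;
    String targetNode;                                   // node on which the process runs
    final List<String> dependsOn = new ArrayList<>();    // names of predecessor processes
}

/** Illustrative application object holding its processes and their links. */
class DedipApplication implements Serializable {
    String name;
    final List<DedipProcess> processes = new ArrayList<>();
}
```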
2.1 Navigation
A simple blank Java applet is loaded on the browser by the web server. The applet in turn makes a connection with the DEDIP server. It gets the list of applications that are configured in DEDIP and displays the list in a tree view. Windows Explorer is used as a metaphor in developing the navigation GUI due to its popularity and ease of use (see figure-8.2). The user can configure a new application, build a configured application or start a ready application using the navigation GUI.
2.2 Application Configuration
The application designer first decides the configuration of his application. It depends on the distributed resource requirements, parallel processing requirements, input/output of each process, etc. The DEDIP supports a convenient GUI for this, as shown in figure-8.2. The designer first provides the overview information about the application using the GUI shown in Figure-8.2A. Then he configures the process interdependency chart using the GUI shown in figure-8.2B. The line joining two processes shows their interdependency in top-down mode. This GUI is quite user friendly, giving a complete visualization of the process interdependency; it is as simple as drawing the chart on paper. Furthermore, the user can easily visualize the insertion point of DTSH and DTHS as per the file transfer requirement.
Each button in the figure 8.2B represents a process of the application. The user gives the detailed information about the process as shown in figure-8.2C.
The list of files to be transferred can be given using the GUI in figure-8.2D.
When the GUI submits the application configuration to the DEDIP server, the server stores all the information on the web site in a predefined format. The user can easily modify the configuration, as and when required, using the same GUI. The application designer can grant the operations manager the right to modify the application configuration, if required. A directory structure is designed for storing all the information; the DEDIP creates and modifies this structure during application configuration. The directory structure can be visualized from figures 8.2A and 8.2B.
Figure 8.2A: Basic Application Information Form
Figure 8.2B: Application Configuration Information Form
Figure 8.2C: Process Information Form
Figure 8.2D: Data dependency Information Form
2.3 Application building
An application consists of many processes. All these processes are required to be compiled on the target node. The DEDIP automates all such compilation. The configuration information has all the required details about each process. The DEDIP directory structure is designed (and automatically created during the application configuration) to store the source-code & make-file(s) on the server.
The DEDIP server copies the source code and make-file required to build a process to a predefined temporary area on the target node. It then requests the DEDIP agent on that node to build the process using the make-file. It carries out this task for each process given in the configuration. The DEDIP agent preserves the executable in the designated directory. The DEDIP agent has the capability to create the required directory structure. The detailed build status is stored in the configuration file on the server. The DEDIP server allows application execution only when all the processes have been built successfully on the target nodes.
The application designer can build the processes externally on all systems in case he is not willing to provide the code. The GUI provides the necessary support for such an external readiness indicator.
2.4 Application execution and monitoring
The operator can start execution of any application from any machine on the net using a standard browser. The DEDIP GUI displays the configured applications to the operator for selection. When the operator submits the request to the DEDIP server, it reads the application configuration information from the configuration file. The DEDIP server initiates the execution of the first process in the interdependency chart. Normally, most applications have a single starting process; if an application has multiple starting processes, it initiates execution of all such independent processes. It informs the DEDIP agent(s) on the target node to start the execution of the process. A DEDIP agent can also be installed on the server, to use the server as a processing node. The DEDIP agent sends the status information back to the DEDIP server when the process is completed. The DEDIP server finds the dependent processes on the successful completion of a process and initiates the execution of each such process.
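A minimal sketch of this dependency-driven scheduling follows, assuming the configuration gives each process the list of processes it depends on; the names and structure are illustrative, not the actual DEDIP implementation.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative scheduler core: start processes whose predecessors have all completed. */
class DedipSchedulerSketch {

    /** processName -> names of processes it depends on (empty list = starting process). */
    private final Map<String, List<String>> dependencies;
    private final Set<String> completed = new HashSet<>();
    private final Set<String> started = new HashSet<>();

    DedipSchedulerSketch(Map<String, List<String>> dependencies) {
        this.dependencies = dependencies;
    }

    /** Kick off the application: processes with no dependencies start immediately. */
    void start() {
        onProcessCompleted(null);
    }

    /** Called whenever an agent reports a process as successfully completed. */
    void onProcessCompleted(String finishedProcess) {
        if (finishedProcess != null) {
            completed.add(finishedProcess);
        }
        for (Map.Entry<String, List<String>> entry : dependencies.entrySet()) {
            String process = entry.getKey();
            if (!started.contains(process) && completed.containsAll(entry.getValue())) {
                started.add(process);
                dispatchToAgent(process);   // ask the DEDIP agent on the target node to run it
            }
        }
    }

    private void dispatchToAgent(String process) {
        System.out.println("Requesting agent to execute: " + process);
    }
}
```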
Figure-8.3 shows the GUI for session and application progress information.
Figure 8.3A: Session Progress Information
2.5 Error Handling
In case of abnormal completion, the DEDIP Server displays the error message with error code to the operator. Each application designer has to provide the error codes along with the corresponding meaningful message text. DEDIP maintains this information in the configuration file.
The operator can restart the process after taking the necessary actions. In addition, the operator has the options to restart the application or to abort the application.
DEDIP maintains complete information about the process termination. This enables the operator to carry out error handling during the next logon as well. Often the problem in this mode is the disk space occupied by the intermediate files; DEDIP sends a message to the operations manager in such a case.
2.6 Session management
Each time an operator logs in, DEDIP scheduler starts/restarts a session for him. Each session has a unique session identification number. It maintains all the information about the session on the server. The operator has multiple options to log out. He can close the session, terminate the session, suspend/resume the session, or submit the session for progress in background before logging out.
He can close the session only after normal completion of all the requests submitted by him. He can terminate the session immediately in case of emergency. The DEDIP kills all the processes of all the requests submitted by the operator irrespective of the status. The background processing is very effective in the case of non-interactive applications. The DEDIP gives the detailed status to the operator at the next logon.
2.7 DEDIP system management
The DEDIP system consists of a DEDIP server and DEDIP agents. The server makes a connection with all the required agents at a predefined time interval. Hence, it is able to detect the crash of any computer. It displays a message on the operators' console as well as sends a message to the operations manager.
The DEDIP server is the most important process in the entire system. Its failure, for example due to a system crash, can cause a severe problem. The DEDIP design supports the configuration of a backup server. The operations manager can install the DEDIP server on any machine as a backup of the main server. The DEDIP server updates the backup server on each important event. The backup server can take over the complete responsibility at any time in case the main DEDIP server fails. The DEDIP agents help in such a take-over. A DEDIP agent normally makes three attempts (at a specified time interval) to pass the process termination status to the DEDIP server. In case it fails after three attempts, it contacts the DEDIP backup server. The backup server validates the main server failure and informs the operations manager. The operations manager can make the backup server the main server. At the same time, he can start another backup server too, if installed.
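The take-over behaviour described above can be sketched as a simple retry-then-failover routine on the agent side; the method names and the retry interval are assumptions, and the actual DEDIP code may differ.

```java
/** Illustrative agent-side delivery of a process termination status. */
class StatusReporter {

    private static final int MAX_ATTEMPTS = 3;
    private static final long RETRY_INTERVAL_MS = 5_000; // assumed interval

    private final Server mainServer;
    private final Server backupServer;

    StatusReporter(Server mainServer, Server backupServer) {
        this.mainServer = mainServer;
        this.backupServer = backupServer;
    }

    void reportTermination(String processName, int exitStatus) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (mainServer.send(processName, exitStatus)) {
                return;                     // main server received the status
            }
            Thread.sleep(RETRY_INTERVAL_MS);
        }
        // After three failed attempts, contact the backup server, which then
        // validates the main server failure and informs the operations manager.
        backupServer.send(processName, exitStatus);
    }

    /** Minimal server stub so the sketch is self-contained. */
    interface Server {
        boolean send(String processName, int exitStatus);
    }
}
```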
The operations manager can toggle the main and backup servers irrespective of any failure condition. This gives him full control over the entire system. The operations manager also has a browser-based GUI to perform these tasks.
The servers exchange information only in case of external events such as a process terminating, a new process starting, an operator initiating an application, or a new session starting. The frequency of such events is very low. Furthermore, the volume of the information is negligible. Hence, the communication overhead for maintaining the backup server is very low.
3. Utilities
The DEDIP supports and uses the following important utilities:
3.1 File transfer
Image processing applications require a large volume of data transfer across the distributed processors. The DEDIP system supports an automatic, assured data transfer mechanism. It makes three attempts, failing which it asks for operator help. A general-purpose data transfer process is developed for this. The process is automatically inserted in the configuration when the IP designer inserts the I/O dependency information (figure-8.2D) between two processes.
The DEDIP provides a callable library in Java, which can be useful to application designers for self-controlled data transfer. The library interfaces with standard FTP servers for the actual data transfer [DWPIP-3].
3.2 Application Results storage
The DEDIP directory structure contains a predefined location for the output of each application. The operations manager can configure the DEDIP root path based on disk space availability. The users are requested to create their output in the predefined location. The user can access his results from any system on the net using a standard browser. The web server grants the user access rights only to his own application area.
4. Overview of analysis and design
The most advanced version of object-oriented technology, the UML, is used for the analysis and design of the final model. The detailed analysis and design is given in [DWPIP-3]. Sun worked out the delegate-model concept from Netscape's model-view-controller concept; it helps to isolate the basic model from the user interaction. The new DEDIP follows the same approach. The DEDIP has various model classes as well as GUI classes.
Appendix-4 contains major UML diagrams along with the brief description about major classes.
Object serialization gave the DEDIP designers the flexibility to generate the dynamic protocol for interaction between the various GUIs, the DEDIP server, the backup server and the agents. Object persistence helps in storing important information, for example the application configuration.
5. Case study
The main aim of the DEDIP was to provide a tool that is easy to use for developing distributed parallel image processing applications. A rich GUI is supported at all places for application configuration, execution and monitoring. This can be best visualized from the screen shots given in figures 8.2 and 8.3. Windows NT and IIS 4 were used as the web server for testing on the 10 Mbps Intranet. The front-end GUI was tested on the two most popular browsers, IE and Netscape, supporting the Java 2 plugin.
DEDIP functionality and efficiency was tested using Microsoft NT as host and IRIS workstations as slaves.
The DEDIP was tested for three cases using simulated executables by three operators in ten runs. The simulated processes were generated to resemble actual image processing interaction/processing. The process dependency chart is given in figure 8.4. The elapsed time requirement and the processing node are shown in brackets. The process DTHS indicates a data transfer required from host to slave, whereas DTSH indicates the reverse. 'T' indicates a tape unit requirement by the process. 'W' and 'W2' indicate that the process is scheduled on workstation 1 and workstation 2 respectively. The time (in minutes) required by each process is shown in brackets.
Case 1: Single package requiring sequential scheduling is shown in figure 8.4A depicting the simplest case.
Case 2: Single package requiring parallel scheduling is shown in figure 8.4B.
Case 3: Parallel execution of two packages, each package requiring sequential scheduling, is shown in figure 8.4C.
Figure 8.4.A: Single package with sequential execution
Figure 8.4.B: Single package with parallel processing requirement
Package 1:
P1 (3,7) → P2 (3) → P3 (4) → DTHS (4) → P4 (2,7) → P5 (5,7) → DTHS (4) → P6 (3) → P7 (2,7)
Package 2:
Q1 (3,7) → Q2 (4) → Q3 (5) → Q4 (6) → DTHS (4) → Q5 (10,7) → DTHS (4) → Q6 (2)
Figure 8.4.C: Two packages executed in parallel.
Table 1: Results for the case studies (time in minutes)
<table>
<thead>
<tr>
<th>Case</th>
<th>Theoretical</th>
<th>Web DEDIP</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>30.0</td>
<td>32.0</td>
</tr>
<tr>
<td>2</td>
<td>23.5</td>
<td>25.0</td>
</tr>
<tr>
<td>3</td>
<td>42.0</td>
<td>46.0</td>
</tr>
</tbody>
</table>
The efficiency results are almost the same as those achieved in the earlier version, i.e., 90-95%. The excess time in the case of Web DEDIP is mainly due to two reasons: (1) action communication delay and (2) DEDIP server overheads. The action communication delay was measured for various actions by repeated exercises and was found to be approximately 10 to 40 seconds for this type of action. The remainder is DEDIP server overhead.
6. DEDIP usage for web application development
The DEDIP is conceptualized and operationalized for distributed image processing applications. However, its design is generic, so it can be used for other classes of applications too. It was therefore decided to study its usability in areas other than image processing.
Recently, a few scientists were engaged in developing web-based applications for the SAC Intranet [68]. They had to automate various activities like hierarchical progress reporting and compilation, meeting management, project task management, personal task management, document authentication, resource booking, complaint management, job workflow, and remote system configuration detection. These applications needed server-side execution for database connectivity, dynamic web page creation, and interfacing with the mailing server. Such a web-based application is quite complex in comparison with a dynamic web site: it is a full-fledged application giving a user-friendly GUI on a standard browser with the required functionality. Web-based application development needs client-server modeling. The DEDIP server was found to be useful for making this development easier and was customized to support their server-side execution requirements. These web-based applications did not require the DEDIP GUI; instead they needed a direct interface with the DEDIP server. The DEDIP has its own library for communication among the DEDIP server, DEDIP agents and DEDIP GUI. The application designers were asked to use a class named "RequestObject". They need to call only one function, "RequestToServer(Object)", to interface with the server. Furthermore, they need to implement an interface called "Execute On Server". The object passed as an argument needs to implement the server-side functional components. RequestObject passes the object to the DEDIP server residing at the web server. The DEDIP server returns the object back to RequestObject after executing the required component. The return object may contain status as well as the data generated by the functional components. This interface is very easy to use; hence, the application designers could adopt it within an hour.
Another option for them was to use Java servlets or a CGI interface with Java applets. This option would require working out a communication protocol, servlet-to-servlet interfaces and tedious coding for data communication for each application. Furthermore, it would restrict modifiability due to the complexity involved in development: the protocol and network communication may require changes every time a new functionality is added to an application. The DEDIP made their development very easy by making the functional components independent of the network communication.
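Based on the description above, developer-facing usage might look roughly like the following sketch; the exact signatures of the DEDIP library are not given in the text, so the interface shape, return types and example component are assumptions for illustration only.

```java
import java.io.Serializable;

/**
 * Assumed shape of the DEDIP contract described above: the functional
 * component implements an "execute on server" interface and is handed to
 * RequestObject, which ships it to the DEDIP server and returns the result.
 */
interface ExecuteOnServer extends Serializable {
    Object executeOnServer();                 // server-side functional component
}

/** Hypothetical example component: runs on the server, returns data to the GUI. */
class FetchProjectStatus implements ExecuteOnServer {
    private final String projectId;

    FetchProjectStatus(String projectId) {
        this.projectId = projectId;
    }

    @Override
    public Object executeOnServer() {
        // Here the real component would query the database on the server side.
        return "status-of-" + projectId;
    }
}

/** Stand-in for the DEDIP RequestObject class (signature assumed). */
class RequestObjectSketch {
    Object requestToServer(ExecuteOnServer component) {
        // The real implementation serializes the component, sends it to the
        // DEDIP server, and returns the object that comes back.
        return component.executeOnServer();   // local stand-in for illustration
    }
}
```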
The DEDIP customization required only one day of effort. This proved the generic nature of DEDIP.
7. Conclusion
The DEDIP provides a useful facility to the designer to develop distributed image processing applications in a user-friendly environment. The browser-based GUI enables him to use the system functionality from anywhere. The graphical user interface makes it easy to visualize and configure the application. Furthermore, DEDIP addresses all the critical elements of the operation. The optional backup server support makes the entire system robust.
The results obtained from the simulated test cases for the augmented DEDIP match those of the earlier version; the communication delay over the network is the only additional delay. The earlier version of the model was used by 15 scientists for the development and operationalization of 10 applications. The same is likely to be replaced by the new web-enabled client-server based DEDIP.
Although the augmented DEDIP has been focused on the requirements of image processing applications, the design and architecture is truly general so that it may be used for other applications also.
---
How Good Requirements Gathering Leads to a Successful Planning and Reporting Implementation
Mustansir Saifuddin
Learning Points
- Observe techniques to capture and communicate requirements to the project team
- Learn the importance of business ownership on the planning and reporting projects
- Learn to identify the key components that are required to keep the project on track
Agenda
- Introduction
- Definition Of Requirements Gathering
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- Development Approach
- Useful Suggestions Related To Requirements Gathering
Introduction
- Illustrate the requirements gathering process for a Planning and reporting project
- Show how the requirements gathering process fits into the project
- Provide guidance on how to improve the process
- Demonstrate how good requirements can help avoid scope creep and minimize configuration changes
- Identify techniques to capture and communicate requirements to the project team
Agenda
- Introduction
- **Definition Of Requirements Gathering**
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- Development Approach
- Useful Suggestions Related To Requirements Gathering
Typical Project Steps
- Typical project phases used for an implementation are shown below:
- Project preparation
- Business blueprint/requirements
- Realization
- Final preparation
- Go-live and support
- From an ASAP methodology, the project starts from:
- Business blueprint/requirements phase
- The ASAP methodology mimics the “Waterfall” development methodology where:
- All the requirements are completed before development starts
- Depending on the complexity of the project the time difference between requirements definition and actual go live will vary
Business Blueprint/Requirements - Definition
- The **business requirement process** allows the project team members to gather pertinent information related to the planning/reporting system from the end users.
- Based on the scope of the project and business objectives, the project team translates business requirements into project requirements.
- The project requirement should be kept under version control.
- It should capture the various planning processes and functions used to create/change a business plan.
Definition Of Requirements Gathering - Next Steps
- Once the requirements gathering is completed and disseminated amongst the project personnel:
- The AS IS business planning process is mapped to the SAP’s planning solution
- Next the data model design in SAP BW is started, to support the planning processes; this includes the definitions of
- Info objects – master data characteristics and key figures
- Info providers – transactional/real-time
- With the introduction of SAP BPC (Business Planning and Consolidation) tool the above step may vary
- Design of planning screens/reports and functions
Note
- All of the above steps should be captured as part of the requirements analysis document
Business Planning and Consolidation (BPC) Terminology
- In SAP NetWeaver BI, an application is approximately equivalent to a cube.
- An application set is approximately equivalent to an SAP NetWeaver InfoArea.
- Applications correspond to SAP NetWeaver InfoCubes.
- Applications may share dimensions with other applications within the same application set, or have dimensions that are unique.
<table>
<thead>
<tr>
<th>Dimensions in application set</th>
<th>Finance application</th>
<th>Sales application</th>
</tr>
</thead>
<tbody>
<tr>
<td>Account</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>Entity</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>Category</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>Time</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>Merchandise</td>
<td>x</td>
<td>x</td>
</tr>
</tbody>
</table>
The Need For Business Requirements Gathering
- Why should we spend adequate time and effort during the Requirements Gathering Phase?
- Any successful project requires a good foundation
- Requirements Gathering process is that foundation
- Poor foundation can result in incomplete solution
- Well gathered and documented requirements helps the development process to produce better results
- Also help promote the new planning solution to the rest of the company
The Need For Business Requirements Gathering (cont.)
- In the case of incomplete requirements, the following will happen:
- Unhappy end users since the delivered solution does not meet their needs
- Incomplete solution
- Scope creep - project scope becomes a moving target
- Additional costs in resources and change management to meet the requirements
- Project does not deliver on its business objectives
- In the end you have a solution in place “Technically Speaking” but is that what the user community asked for?
The Need For Business Requirements Gathering (cont.)
- The issue of incomplete requirements can be amplified if you are using an “Offshore” model to develop the planning solution.
- Some examples are given below:
- Communication issues due to time zones
- Cultural differences
- Transfer of business understanding
- Organizational differences
- High attrition rates
- Cost overruns
Payback For A Good Requirements Gathering
- Just as a solid foundation ensures a building's integrity, good requirements set the tone for the remainder of the project phases. Some examples include:
- Validates organizational expectations needed to obtain project acceptance and success
- Project team and end users get a clear picture of the scope of the planning/reporting project and avoids any miscommunication
- Helps define the resource requirements to develop the planning solution both from the business side and IT
Agenda
- Introduction
- Definition Of Requirements Gathering
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- Development Approach
- Useful Suggestions Related To Requirements Gathering
First Thing To Do
- Make sure that a project charter is in place
- Define project scope statement
- A good scope statement defines the limits of the project and provides focus on what to deliver
- It should also identify items that are out of scope for your project
- In case of planning a good example can be:
- Current scope is to implement “Annual Budgeting Solution”
- Quarterly or Monthly Forecast is out of scope
- Using the above approach highlights the items that are in scope
- In the future if there are any changes done to the scope make sure that it is communicated to the project team
Once the scope is finalized and the team members are identified, the actual requirements gathering process starts. Depending on the organization, different formats can be used to gather the requirements. Following are some of the commonly used methods:
- Focus group meetings with selective end users from each business unit
- Use a questionnaire approach where all the stakeholders are required to provide their input
- Individual meetings with the Key users
Any or all of the above methods can be utilized to get meaningful requirements.
The Actual Requirements Gathering Process
- Once all the requirements are received make sure that they are communicated back to the end users
- The above step is important since it clears any miscommunication or misunderstanding between the project team and end users
- This also gives an opportunity to the end users to review their requirements and make any adjustments
- The next step is very critical for the project team as well as the end users
- This step is the actual “SIGN OFF” from the users and sets the stage for the configuration work to start
The Actual Requirements Gathering Process
**Note:** Don’t forget to collect the Reporting requirements as they are part of the scope of a planning system
- There can be significant impact on the data model in BW if the reporting requirements are not kept in mind
- Following are some examples of the kind of reporting that will be required:
- Reports to analyze the plan data
- Actual to Plan variance reports
- Drill down reports to see details behind the actuals data
### Requirements Gathering Processes
- Take a closer look at some of the commonly used methods for requirements gathering, such as:
- Focus group meetings with selective end users from each business unit
- Use a questionnaire approach where all the stakeholders are required to provide their input
- Individual meetings with the Key users
Meetings With Selective End Users from Business Units
- The key is to make sure that there is good cross section of user representation from each business unit
- Business Analysts
- Decision Makers
- Leading requirements questions have been reviewed to ensure all subjects will be covered
- Prep the project team in advance and assign a role to each project team member
- Appropriate meeting location is selected with the right equipment available, such as: projector, white board etc.
- There should be a meeting facilitator assigned
Meetings With Selective End Users from Business Units
- Meeting agenda is distributed to the attendees in advance
- Line items assigned to the individuals from the agenda
- A person responsible for taking meeting minutes
- Action items are assigned during the meeting with a timeline
- Prompt follow up on the action items before the next meeting
- Meeting times/locations are communicated well in advance to avoid any scheduling conflicts
Questionnaire Approach
- This approach requires a lot of upfront work from the project team to compile the right set of questionnaires.
- Mostly this kind of requirements gathering is geared towards folks who are in remote locations and are not readily available for onsite meetings or conference calls.
- This is the most challenging method for getting the right answers in a timely manner, since the users may not understand or appreciate the urgency of their responses.
- Allow yourself to have a follow-up meeting with the individuals who are using this method of communication to close any open gaps.
Individual Meetings With Key Stakeholders
- Make sure to have all the questions ready before this meeting
- Keep the focus of the meeting on the requirements gathering process
- Avoid getting into the shortcomings of the current tool and don’t get into a discussion of the new tool
- The goal of this kind of meetings is to get the maximum information out of the users
- This will help design a better solution with the new planning tool
- Don’t try to design the new solution while listening to the end users requirements
**Business Involvement - Experience**
- Ensure at least one member of the project team comes from the business and is intimately familiar with the planning process being developed
- An experienced user knows when to seek outside help for the project team to make the best configuration decision
- This type of user also understands the power structure associated with the planning process
- They can dramatically reduce configuration decision cycle time
- Other benefits:
- When the project is in production, they can mentor other power users on how to use the software
- Can be a powerful change agent in the business community after the project go-live
Recommendations
- Issues will come up that cannot be resolved or decided on during a meeting because of time or lack of expertise
- Maintain an issue log during this part of the project
- Follow up on all issues before the end of the blueprint phase
- Reach a decision on those identified issues to ensure all requirements have been defined
- If you do not obtain closure on these issues, design and configuration work will need to be modified later in the project, at a greater cost
- Other implications could also be: new function is deemed scope creep, resource impacts (financial and human), etc.
Agenda
- Introduction
- Definition Of Requirements Gathering
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- Development Approach
- Useful Suggestions Related To Requirements Gathering
How To Handle The Different Planning/Reporting Requirements
- Depending on the scope of the project the team has to decide on how to accomplish the tasks
- Analyze entire planning process
- e.g., Financial Planning – strategic, five-year budget, annual plan and monthly/quarterly forecasts
- e.g., Logistics Planning – supply chain, plant production, warehousing, etc.
- Or, analyze only the portion of the planning process that is in scope for the project
How To Handle The Different Planning/Reporting Requirements
- When communicating and gathering planning and reporting requirements the following three methods can be utilized to keep the end-user community and project team informed
- Process Flow Analysis
- Requirement Analysis Document
- Prototyping Philosophy
Process Flows Analysis
- Can be useful when you are analyzing multiple planning processes
- Process Flow diagrams show relationships and dependencies between planning processes
- Should be backed up by a detailed document that describes the future state process and functions for the project development staff
- If a data point in one flow influences another planning process, show this on the diagram
- e.g., exchange rate assumptions are generally global across all plans
Heads Up!
Planning Process Flow
- The following figure shows the flow between the different planning processes
Planning Process Flow-Chart
<table>
<thead>
<tr>
<th>August</th>
<th>September</th>
<th>Oct Week 1</th>
<th>Oct Week 2</th>
<th>Oct Week 3</th>
<th>Oct Week 4</th>
<th>November</th>
</tr>
</thead>
<tbody>
<tr>
<td>Assumptions</td>
<td>Sales targets</td>
<td>Depreciation estimate</td>
<td>Corp alloc</td>
<td>- Corporate allocations</td>
<td>- Insurance & Tax</td>
<td>Prepare consolidated P&L budget</td>
</tr>
<tr>
<td>Expense Planning</td>
<td>Benefits Information</td>
<td>Expense budget due</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Capital and P&L budget</td>
<td>Capital budget</td>
<td></td>
<td></td>
<td>Sales Plan Complete</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>P&L budget due</td>
</tr>
</tbody>
</table>
Process Flows Analysis
- A process flow diagram could also be used to standardize the planning process between business units or across the organization.
- For this analysis, the process flows detail the same planning process, i.e., strategic, budget, forecast, etc.
- Enables analysis to identify where differences occur across the enterprise for what is effectively the same planning process.
Requirement Analysis Document
- Requirement Analysis Document should describe the process and functions of the “To Be” planning process
- Areas that should be documented are:
- Global planning processes and functions
- Specific planning processes (cost, sales, capital plans, etc.)
- Unique planning functions
- SAP BW Impacts
- Other areas that impact plan process
- Deployment options
- Plan approval process
Global Planning Processes and Functions
- Currency translation process for plans
- Determine how the plan will be recalculated if the foreign exchange rate assumption changes during the plan development process
- Security/authorization
- Security around who should be able to view and change the plan data is essential to a solid planning solution
- Consider security required around the various stages of the planning process
- Also consider security and authorization for reporting; remember that it will be both plan and actual values in the reports
Things to Consider for Planning Data Model
- Data model issues
- Poor data model design results in poor performance
- Recommendations:
- Leverage a reporting information model along with a planning information model
- Consider pros and cons of data model design based on the planning requirements
- Use the expertise of a BI data modeler to validate the data model
- Consider ongoing monitoring and continuous improvements whenever possible
Things to Consider for Planning Data Model (cont.)
- When working with the new Business Planning and Consolidation (BPC) tool, the following process applies to optimize the data model and the performance of the application (InfoProvider)
- Light optimize
- Closes the open request
- Compresses the data in cube
- Indexes and updates the database statistics on the cube
- Full optimize
- Does the same steps as light optimize
- Also checks the NetWeaver BI data model and
- If needed will run more detailed steps to optimize the entire data model (may take long runtimes)
Things to Consider for Planning Data Model (cont.)
- Data granularity – impacts volume and performance
- Analyze the following areas for planning:
- Time requirement: daily, weekly, monthly or quarterly
- Account: individual account level versus item versus group
- Part number: product or product group level
- Versioning strategy
- Monthly forecast versus quarterly forecast
- How many iterations of the plan/forecast data needs to be stored (what if analysis?)
- Dealing with actuals – copying actuals into forecast version versus referencing actuals for forecasting purposes
- Keep versions at a manageable level as it will increase volume and have negative impact on performance
Defining the Planning Cube (cont.)
- Planning data model
- Always keep future enhancements and, in some cases, integration with other applications (such as BCS, CRM, and APO) in mind
- For financial statement planning, decide on characteristics versus key figure data model
- Characteristics-based model is more flexible
- Limitations on key figure-based data model
- Caution! Key figure model should be used with care or avoided
What kind of planning is desired in the new planning system?
- Financial planning
- Sales planning
- Operational planning
- Asset depreciation planning, etc.
What source systems feed the actuals data to supplement the planning process?
- SAP R/3
- Non-R/3 (legacy systems, etc.)
- Other Data Warehouse
Scenario 1: SAP R/3 is Only Source of Actuals
- If SAP R/3 is the source of actuals to facilitate the planning process, then maintain master data in SAP R/3:
- For example:
- Accounts, company codes, profit centers, customers, etc.
- Currency translation rates for actuals and plan data
- Maintain the translation rates for both plan and actuals in SAP R/3 and extract them into BI on a scheduled basis
- The above approach avoids dual maintenance
**Scenario 2: Both SAP R/3 and Non-R/3 Supply Actuals**
- This scenario can relate to any planning application
- Sales, Financial, Operations planning, etc.
- For non-R/3 actuals, maintain the master data in BW directly
- This may require additional mapping/lookup tables
- For example:
- Account numbers from the non-R/3 system do not match the SAP R/3 accounts
- One approach can be to use a mapping table in BW to map the non-R/3 data to the R/3 data (a minimal sketch of such a lookup follows this list)
- This helps create a common financial planning application
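As an illustration of the mapping-table idea above, the sketch below stands in for a BW lookup table with a plain Python dictionary. The account numbers and the function name `map_account` are invented for this example; a real implementation would live in a BW transformation or lookup routine.

```python
# Illustrative sketch only: translating non-R/3 account numbers to their SAP R/3
# equivalents before loading into the common financial planning application.
# Table contents and account numbers below are made up for illustration.

NON_R3_TO_R3_ACCOUNT = {
    "LEGACY-4000": "0000400000",   # hypothetical legacy revenue account -> R/3 account
    "LEGACY-5100": "0000510000",   # hypothetical legacy expense account -> R/3 account
}

def map_account(non_r3_account: str) -> str:
    """Return the R/3 account for a non-R/3 account, or raise if no mapping exists."""
    try:
        return NON_R3_TO_R3_ACCOUNT[non_r3_account]
    except KeyError:
        # Unmapped accounts should be reported back to the business for resolution
        raise KeyError(f"No R/3 mapping maintained for account {non_r3_account}")

if __name__ == "__main__":
    records = [("LEGACY-4000", 1200.0), ("LEGACY-5100", -300.0)]
    harmonized = [(map_account(acct), amount) for acct, amount in records]
    print(harmonized)
```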
Scenario 3: Future Master Data
- This scenario can impact both the short term (1-2 year) or long term (5-10 year) plan
- Examples include:
- Capital projects
- Assets
- Customers, etc.
- One approach is to maintain the future master data in BW and have proper controls in place to replicate it in the source system (such as SAP R/3) when it becomes a reality
Documenting the Various Planning Processes
- Document the specific procedure associated with each of the planning processes that are being implemented. For example: sales, expense, capital, headcount plan etc.
- This should include calculations and processes; for example, headcount calculations that account for FTE vs. part-time staff, gross margin calculations on the sales plan, etc.
- Other items to account for includes:
- How to make global level changes to the sales plan if market conditions change at the end of planning cycle
- What will be the lowest level of detail entered for each planning application? For example, product vs. product group level in the sales plan
- What manipulations or calculations are done on the data?
The Unique Planning Assumptions/Calculations
- Top down distribution of the new sales targets
- Corporate allocations to the various business units
- Global assumptions to drive the different planning applications, such as:
- Benefits overhead across the organization
- Interest rates
- Currency translation rates
- Recalculation at a global level when market conditions change
Prototyping Philosophy
- Use pilot projects to show how an application can look and operate
- A pilot should focus on one common planning process that a majority of end users will use and/or understand
- Can be useful to train the project team
- Provides targeted end users with something tangible to better understand what their requirements can lead to
- It can also be used to build excitement amongst the community
Agenda
- Introduction
- Definition Of Requirements Gathering
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- **Development Approach**
- Useful Suggestions Related To Requirements Gathering
Development Approach - Agile
- In case of a fully integrated financial statement planning project, you can approach the development process in two different ways:
- **Agile approach** where the requirements gathering are only done for a subset of the planning application of the total planning vision
- Allows the users to see the complete solution in a much shorter time frame
- It also allows the project team to work with the business users to realign the requirements for the next development piece thus giving them more flexibility
Development Approach – Waterfall
- The ASAP methodology constitutes a Waterfall approach where complete requirements are defined for the entire financial statement planning process before development is started.
- It constitutes a more structured approach where each of the different planning applications are built at the same time.
- For example:
- Sales planning, Expense/Profit planning, Capital planning
- Balance Sheet planning, Investment planning etc.
- All of the above pieces are then connected together to create one seamless financial statement planning solution.
Agenda
- Introduction
- Definition Of Requirements Gathering
- The Actual Requirements Gathering Process
- How To Handle The Different Planning/Reporting Requirements
- Development Approach
- **Useful Suggestions Related To Requirements Gathering**
Useful Suggestions Related To Requirements Gathering
- Make sure that the Project Sponsor as well as the Project Manager do the due diligence as far as the requirements/blueprinting phase is concerned.
- Ensure that enough time is dedicated to the requirements gathering phase.
- This lays the foundation for the Realization phase and sets the tone for the remainder of the project.
- Provide enough details in the business requirements document; this helps focus and align the project at different milestones; good requirements reduce configuration risk as the project team understands what they need to do before their fingers hit the keyboard.
Useful Suggestions Related To Requirements Gathering
- Manage the user expectations by
- Using an iterative development/playback approach to keep the user community engaged during the build phase
- Playback session also minimizes any necessary later rework since the users can see the application as it is being built
- For your next planning project use the agile approach:
- It allows for shorter development cycles, more closely linked to the requirements process
- Keeps the end users more engaged in the design and testing process
- Keeps the development resources to a minimum but increases the overall time to develop the complete solution
- When using offshore model make sure that both parties (business sponsor and offshore development team) have a clear understanding of the requirements
Questions
Thank you for participating.
Please remember to complete and return your evaluation form following this session.
For ongoing education on this area of focus, visit the Year-Round Community page at www.asug.com/yrc
Program Phase and Runtime Distribution-Aware Online DVFS for Combined Vdd/Vbb Scaling
Jungsoo Kim*, Sungjoo Yoo†, and Chong-Min Kyung*
*Dept. of EECS at KAIST, [email protected], [email protected]
†Dept. of EE at POSTECH, [email protected]
Abstract—Complex software programs are mostly characterized by phase behavior and runtime distributions. Due to the dynamism of the two characteristics, it is not efficient to make workload predictions during design-time. In our work, we present a novel online DVFS method that exploits both phase behavior and runtime distribution during runtime in combined Vdd/Vbb scaling. The presented method performs a bi-modal analysis of runtime distribution, and then a runtime distribution-aware workload prediction based on the analysis. In order to minimize the runtime overhead of the sophisticated workload prediction method, it performs table lookups to the pre-characterized data during runtime without compromising the quality of energy reduction. It also offers a new concept of program phase suitable for DVFS. Experiments show the effectiveness of the presented method in the case of H.264 decoder with two sets of long-term scenarios consisting of total 4655 frames. It offers 6.6% ∼ 33.5% reduction in energy consumption compared with existing offline and online solutions.
I. INTRODUCTION
Dynamic voltage and frequency scaling (DVFS) is one of the most effective low power design methods. Due to the increasing leakage power consumption, DVFS now controls both supply voltage (Vdd) and body bias (Vbb) dynamically to minimize the total power consumption [1]. DVFS sets the performance level of CPU to the ratio of predicted remaining workload to the given deadline. Thus, the accuracy of remaining workload prediction (in short, workload prediction) plays a crucial role in obtaining minimum energy consumption.
The workload prediction is to predict future workload mostly based on recent history. For instance, the average of recent workload can be a workload prediction. However, in reality, due to the complex behavior of software program (e.g., data dependent iteration counts of nested loops) and architectural factors (e.g., cache miss, DDR memory page miss, etc.), such a naive prediction may not work efficiently. Fig. 1 (a) illustrates a profile of per-frame workload in H.264 decoder (an excerpt from the movie “Lord of the Rings”). The X-axis and the left-hand side Y-axis represent frame index and per-frame workload, respectively. The figure shows that the profile has two different scales of behavior in both macroscopic and microscopic ways. First, it has a macroscopic time-varying behavior, i.e., phase behavior. There are time durations whose workload characteristics (e.g., mean, standard deviation, max value, etc.) are distinctly different from other time durations. We call a time duration with a distinct workload characteristic a phase (we will give a formal definition later in this paper). Fig. 1 (a) shows 10 phases (see the right-hand side Y-axis for phase indexes). Note that the phase index does not correspond to the required performance level of the corresponding phase in this example. The example of Fig. 1 also shows the microscopic behavior, the runtime distribution. Fig. 1 (b) gives the runtime distributions of three representative phases to illustrate that there can be a wide runtime distribution even within a phase, and the phase itself is characterized by the runtime distribution.
Our observation on the runtime characteristics of software programs suggests that, as shown in Fig. 1, the program workload has two characteristics: phase behavior and (multi-modal) runtime distribution (even within a phase). Especially, a phase can have a multi-modal runtime distribution with more than one salient peaks as phases 6 and 7 in Fig. 1 (b) show. Considering the worst-case execution time has a significant impact on the efficiency of DVFS for real-time systems, such a multi-modal distribution needs to be carefully analyzed and exploited in order to obtain an accurate prediction of remaining workload.
In our work, we aim at an online DVFS method that exploits both phase behavior and multi-modal runtime distribution in
\(^1\)Note that the single-mode distribution can be considered to belong to the multi-modal distribution.
order to make accurate workload predictions for dynamic Vdd/Vbb scaling. We address the online DVFS problem in two ways: intra-phase workload prediction and phase detection. The intra-phase workload prediction is to predict workloads based on the runtime distribution of the current phase. The phase detection is to identify to which phase the current instant belongs. To the best of the authors’ knowledge, our work is the first approach of online DVFS for real-time systems which exploits both phase behavior and runtime distribution in combined Vdd/Vbb scaling.
This paper is organized as follows. Section II reviews existing work. Section III presents an overall flow. Section IV explains a multi-modal (i.e., bi-modal) analysis of runtime distribution. Section V gives the details of workload prediction. Section VI presents the phase detection method. Section VII reports experimental results, followed by the conclusion in Section VIII.
II. RELATED WORK
There have been a lot of research works on the workload prediction for online DVFS, e.g., (weighted) average of $N$ recent workloads [2]. Recently, a control theory-based prediction method is presented [3]. The above studies are effective in the case of simple workload characteristics without phase behavior or wide runtime distributions.
Phase detection has been one of the hot research issues since it allows for new opportunities of performance optimization, e.g., dynamic adaptations of the cache architecture [4] [5]. Phase detection is also applied to DVFS in [6] [7]. In these works, the per-phase runtime characteristic is modeled with a vector of execution cycles of basic blocks. A new phase is detected when two vectors are significantly different, e.g., when there is a large Hamming distance between the two vectors. The key problem here is to identify a subset of basic blocks that represent phase behavior. Exploring all the combinations of basic blocks will be prohibitively expensive in the case of current and future complex software applications with a large number of basic blocks. In this paper, we present a practical method of phase detection, suitable for the DVFS purpose, which is based on vectors of predicted workloads for coarse-grain code sections, as explained in Section VI. In addition, compared with existing phase-based DVFS methods, the presented method exploits the runtime distribution within a phase to better predict the remaining workload.
Runtime distribution has been actively exploited mostly in offline DVFS methods. Control flow-dependent time slack is exploited by predicting the remaining workload on a path basis in most of intra-task DVFS methods: the worst-case execution path [8] [9], average-case execution path [10], and virtual execution path [11]. Recently, analytical approaches have been presented to address all the sources of runtime distribution: data dependency (e.g., number of loop iterations), and architecture (e.g., cache misses) as well as control flow [12] [13].
Existing offline runtime distribution-aware DVFS methods, if applied to online DVFS, would suffer from two limitations.
Algorithm 1: Overall flow
First, they lack in utilizing phase behavior. In these methods, a single runtime distribution is obtained by running all the test benches over possibly numerous phases. Thus, phase-specific workload information is lost, which may lead to inefficiency in lowering energy consumption for software programs with noticeable phase behavior. Second, if they are applied to online DVFS without modifications, they will incur prohibitively high runtime overhead due to its computation complexity (e.g., up to 2.5 times of entire program runtime in solving differential equations numerically [13] as explained in Section VII). In order to overcome these limitations, we present a low overhead online version of originally offline runtime distribution-aware DVFS method.
III. OVERALL FLOW FOR ONLINE DVFS
Algorithm 1 shows the overall flow of the proposed method. Our work focuses on intra-task DVFS where the performance level is set at each performance setting point (PSP) inserted into the software code by designers or automatically.
We perform workload prediction and phase detection periodically (e.g., at a granularity of a PHASE_UNIT-cycle period, line 1 of Algorithm 1). Note that a phase can consist of multiple consecutive periods (e.g., each period with PHASE_UNIT cycles). In every period, PSPs are traversed from the end of the program ($N_{leaf}$) to the beginning of the program ($N_{root}$) for workload prediction (lines 2 ~ 5). At each PSP, we perform a bi-modal analysis of the runtime distribution (line 3) and predict the remaining workload (line 4). We approximate a multi-modal distribution with a bi-modal one since, in most cases, the number of modes is less than or equal to two; thus, the approximation does not incur a significant inefficiency in energy reduction, as Section VII shows. The phase detection check is then performed (line 6) utilizing the predicted workloads. Phase detection is to identify to which phase the current period belongs. A new phase is detected when there is a large difference (in terms of the Hamming distance of PSP vectors, explained in Section VI) between the predicted workload of the current phase and that of the current period.
Fig. 2 illustrates the workload prediction based on the bi-modal analysis. Fig. 2 (a) shows two program regions, $n_i$ and $n_{i+1}$ (a program region is a code section starting with a PSP and finishing with another PSP). Fig. 2 (b) shows the PDF (probability distribution function) of runtime distribution for each program region and the key steps of workload prediction for the program region $n_i$ in this case.
Given the runtime distribution of a phase, in order to predict the energy optimal remaining workload for combined
Vdd/Vbb scaling, we apply the solution presented in [13] together with the bi-modal analysis. However, a direct execution of that solution during runtime would incur prohibitively high runtime and energy overhead. For online use, we therefore obtain a lightweight yet accurate solution with low runtime overhead by exploiting pre-characterized data, as shown in Section VII.

At each PSP, the performance level is set to the ratio of predicted workload to the remaining time-to-deadline or to a level that satisfies the given deadline constraint depending on the result of real-time constraint check as in [13]. Note that the performance setting implies Vdd/Vbb setting since there is a one-to-one correspondence between a performance level and a pair of Vdd/Vbb settings that give the minimum energy consumption [1].
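As a rough illustration of this performance-setting step (not the paper's implementation), the sketch below snaps the ratio of predicted remaining workload to time-to-deadline onto a discrete frequency level and its pre-characterized (Vdd, Vbb) pair. The frequency grid mirrors the 11 levels used later in the experiments; the voltage pairs and the omission of the real-time constraint check of [13] are simplifying assumptions.

```python
# Illustrative sketch of the performance setting at a PSP (not the paper's code).
# Frequency = predicted remaining workload / time-to-deadline, snapped up to the next
# available level; each level maps to a pre-characterized (Vdd, Vbb) pair.
# The (Vdd, Vbb) values below are invented, and the real-time constraint check of [13]
# is omitted for brevity.

FREQ_LEVELS_GHZ = [1.0 + 0.5 * i for i in range(11)]      # 11 levels, 1.0 ... 6.0 GHz
VDD_VBB_BY_FREQ = {f: (0.7 + 0.06 * i, -0.4 + 0.04 * i)   # assumed (Vdd, Vbb) pairs
                   for i, f in enumerate(FREQ_LEVELS_GHZ)}

def set_performance(w_pred_cycles: float, time_to_deadline_s: float):
    """Return (frequency in GHz, (Vdd, Vbb)) for the current PSP."""
    f_target_ghz = w_pred_cycles / time_to_deadline_s / 1e9
    f_set = next((f for f in FREQ_LEVELS_GHZ if f >= f_target_ghz), FREQ_LEVELS_GHZ[-1])
    return f_set, VDD_VBB_BY_FREQ[f_set]

if __name__ == "__main__":
    # e.g., 3e8 predicted remaining cycles and 100 ms to the deadline -> 3.0 GHz
    print(set_performance(w_pred_cycles=3.0e8, time_to_deadline_s=0.1))
```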
**IV. BI-MODAL ANALYSIS**
We calculate the effective workload of program region \( n_i \), i.e., \( x_i^{\text{eff}} \), in three steps: mode decomposition, workload prediction for each mode, and mode recomposition to obtain the effective workload of the program region.
**A. Mode Decomposition**
Mode decomposition is to decompose the original runtime distribution into two modes, i.e., two separate distributions, each of which has a salient peak. Algorithm 2 explains how to decompose the original runtime distribution into two modes. As Algorithm 2 shows, the original PDF is decomposed into two sub-PDFs, \( PDF_0 \) and \( PDF_1 \), at a saddle point where the probability is the minimum. In the case that there is only one mode (lines 2 ~ 4 in Algorithm 2), the original PDF is returned.
**B. Modes Recomposition**
In the step of workload prediction (Section V), the effective workload for each of the two modes \( x_i^{\text{eff}(0)} \) and \( x_i^{\text{eff}(1)} \) is obtained. Then, the effective workload of program region \( n_i \) is calculated as a weighted sum of the two values as follows.
\[
x_i^{\text{eff}} = \beta\, x_i^{\text{eff}(0)} + (1 - \beta)\, x_i^{\text{eff}(1)} \tag{1}
\]
Parameter \( \beta \) determines the relative importance of two modes. In our work, we calculate the parameter by a table lookup of pre-characterized data with the runtime distributions of program regions as the input of table lookup. In the following, we explain how to build the lookup table for the accurate parameter calculation.
**Algorithm 2: Mode decomposition**
1: find the two non-contiguous points \((x_0, p_0)\) and \((x_1, p_1)\) whose probability values \( p_0 \) and \( p_1 \) are the two highest probabilities in the original PDF
2: if there are no two such non-contiguous points then
3: the entire PDF is considered to be a single mode
4: return the original PDF
5: else
6: find the saddle point \((x_s, p_s)\) between the two points, which gives the minimum probability
7: if there are more than one saddle point then
8: the median is selected as the saddle point
9: end if
10: \( PDF_0 = \{(x, p) | x < x_s, (x, p) \in PDF\} \)
11: \( PDF_1 = \{(x, p) | x > x_s, (x, p) \in PDF\} \)
12: return \( PDF_0 \) and \( PDF_1 \)
13: end if
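A minimal Python transcription of Algorithm 2 is sketched below, assuming the PDF is a list of (execution cycles, probability) bins sorted by cycles and that "non-contiguous" means the two peak bins are not adjacent; both are interpretation choices, not details given in the paper.

```python
# Sketch of Algorithm 2 (mode decomposition) over a binned PDF, i.e., a list of
# (execution_cycles, probability) pairs sorted by execution cycles.

def decompose_modes(pdf):
    """Split a binned PDF into [PDF0, PDF1] at the saddle point, or return [pdf] if unimodal."""
    if len(pdf) < 3:
        return [pdf]
    # line 1: the two non-contiguous (assumed: non-adjacent) bins with the highest probabilities
    order = sorted(range(len(pdf)), key=lambda i: pdf[i][1], reverse=True)
    peak0 = order[0]
    peak1 = next((i for i in order[1:] if abs(i - peak0) > 1), None)
    if peak1 is None:
        return [pdf]                      # lines 2-4: a single mode
    lo, hi = sorted((peak0, peak1))
    # lines 6-9: saddle = minimum-probability bin between the peaks (median index on ties)
    between = list(range(lo + 1, hi))
    p_min = min(pdf[i][1] for i in between)
    candidates = [i for i in between if pdf[i][1] == p_min]
    saddle = candidates[len(candidates) // 2]
    x_s = pdf[saddle][0]
    # lines 10-12: split the original PDF at the saddle execution cycle
    pdf0 = [(x, p) for x, p in pdf if x < x_s]
    pdf1 = [(x, p) for x, p in pdf if x > x_s]
    return [pdf0, pdf1]

if __name__ == "__main__":
    bins = [(90, 0.25), (100, 0.30), (110, 0.05), (140, 0.15), (150, 0.25)]
    for mode in decompose_modes(bins):
        print(mode)
```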
The two modes will have different impacts on the final predicted remaining workload depending on three factors as follows.
- **Factor 1**: Ratio between the relative probabilities of the two modes in the original distribution
- **Factor 2**: Ratio between the execution cycles of the two modes
- **Factor 3**: Ratio between the execution cycle of program region \( n_i \) and the remaining workload after the program region \( n_i \)
For instance (Factor 1), if mode 0 has a significant portion, i.e., a high probability, parameter \( \beta \) will have a high value approaching '1'. As another example (Factor 2), if mode 1 has much higher execution cycles than mode 0, i.e., if the WCEC (worst-case execution cycle) is much higher than the AEC (average execution cycle), then mode 1 has more impact than mode 0, since DVFS tends to set a high frequency (i.e., a high effective execution cycle) in that case.
In such a case, parameter \( \beta \) becomes a small value to reduce the effect of mode 0 and to increase that of mode 1.
We prepare a pre-characterized table for parameter \( \beta \), \( LUT_\beta \), with the above three factors as the index. To be specific, Factor 1 is represented by the cumulative probability of mode 0, \( P_0 \) (since \( P_0 + P_1 = 1 \)). Factor 2 is represented by the ratio of \( x_i^{\text{eff}(1)} \) to \( x_i^{\text{eff}(0)} \), since each of them represents the execution cycle information of its mode. Factor 3 is represented by the ratio \( x_i^{\text{eff}(1)}/w_{i+1}^{\text{eff}} \) (given the ratio \( x_i^{\text{eff}(1)}/x_i^{\text{eff}(0)} \) of Factor 2, the ratios \( (x_i^{\text{eff}(0)} + x_i^{\text{eff}(1)})/w_{i+1}^{\text{eff}} \) and \( x_i^{\text{eff}(1)}/w_{i+1}^{\text{eff}} \) represent the same information). As \( x_i^{\text{eff}(1)}/x_i^{\text{eff}(0)} \) increases (e.g., WCEC ≫ AEC) or \( x_i^{\text{eff}(1)}/w_{i+1}^{\text{eff}} \) increases, parameter \( \beta \) decreases, since mode 1 comes to have more impact on the energy optimal remaining workload than mode 0. Regarding \( x_i^{\text{eff}(1)}/w_{i+1}^{\text{eff}} \), as a simple case, if \( w_{i+1}^{\text{eff}} \) approaches 0, then the program region \( n_i \) dominates the remaining workload. Thus, the energy optimal remaining workload of program region \( n_i \) will approach the worst-case execution cycle of program region \( n_i \); that is, mode 1 (the higher portion of the PDF) dominates the remaining workload, which requires parameter \( \beta \) to decrease (and \( 1-\beta \) to increase) in Eqn. (1).
As a summary, when the PDFs of program regions are available during runtime via performance monitoring, the four variables \( P_0 \), \( x_i^{\text{eff}(1)} \), \( x_i^{\text{eff}(0)} \), and \( w_{i+1}^{\text{eff}} \) are calculated as explained in Section V. Then, the table \( LUT_\beta \) is looked up for the parameter \( \beta \), which is used in the calculation of Eqn. (1).
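To make this lookup concrete, the sketch below combines the two per-mode effective workloads as in Eqn. (1), with \( \beta \) read from a tiny stand-in for \( LUT_\beta \) indexed by the three factors. The grid values and the table contents are invented for illustration; a real table would be characterized offline.

```python
# Sketch of mode recomposition (Eqn. (1)) with a made-up stand-in for LUT_beta.
import bisect

P0_GRID      = [0.25, 0.50, 0.75]          # Factor 1: cumulative probability of mode 0
RATIO10_GRID = [1.0, 2.0, 4.0]             # Factor 2: x_eff(1) / x_eff(0)
RATIO1W_GRID = [0.1, 1.0, 10.0]            # Factor 3: x_eff(1) / w_eff_next
LUT_BETA = {(i, j, k): max(0.05, min(0.95, 0.9 * P0_GRID[i] / (1 + 0.1 * j + 0.1 * k)))
            for i in range(3) for j in range(3) for k in range(3)}   # invented contents

def nearest_index(grid, value):
    """Index of the grid point closest to `value` (grid is sorted ascending)."""
    pos = bisect.bisect_left(grid, value)
    if pos == 0:
        return 0
    if pos == len(grid):
        return len(grid) - 1
    return pos if grid[pos] - value < value - grid[pos - 1] else pos - 1

def recompose(x_eff0, x_eff1, p0, w_eff_next):
    beta = LUT_BETA[(nearest_index(P0_GRID, p0),
                     nearest_index(RATIO10_GRID, x_eff1 / x_eff0),
                     nearest_index(RATIO1W_GRID, x_eff1 / max(w_eff_next, 1e-9)))]
    return beta * x_eff0 + (1 - beta) * x_eff1          # Eqn. (1)

if __name__ == "__main__":
    print(recompose(x_eff0=100.0, x_eff1=220.0, p0=0.7, w_eff_next=500.0))
```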
V. PREDICTING REMAINING WORKLOAD FOR A DECOMPOSED MODE
In this section, we explain how to calculate the effective workload, i.e., \( x_i^\text{eff} \), assuming a single (decomposed) mode. Our approach is based on an analytical formulation utilizing an analytical energy function. Combined Vdd/Vbb scaling does not have the quadratic relationship between energy consumption per cycle and frequency that the Vdd-only scaling has.
Thus, we approximate the energy consumption by fitting the golden energy model (measurement data or estimation result) as follows.
\[
E_{\text{cycle}} = a f^{\,b} + c \tag{2}
\]
where \( E_{\text{cycle}} \) is the energy consumption per cycle, \( f \) is the operating frequency, and \( a \), \( b \), and \( c \) are fitting parameters.
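As an illustration of how such a fit could be obtained, the sketch below grid-searches the exponent \( b \) and solves \( a \) and \( c \) by linear least squares over synthetic (frequency, energy-per-cycle) samples. The data and the fitting method here are assumptions; the paper fits against its golden energy model (measurement or estimation data).

```python
# Sketch: fitting the per-cycle energy model E_cycle(f) = a * f**b + c of Eqn. (2).
# The sample data below is synthetic.
import numpy as np

def fit_energy_model(freqs, e_cycle, b_grid=np.linspace(1.5, 3.5, 201)):
    """Grid-search the exponent b; solve a, c by linear least squares for each b."""
    best = None
    for b in b_grid:
        A = np.column_stack([freqs ** b, np.ones_like(freqs)])
        (a, c), _residual, *_ = np.linalg.lstsq(A, e_cycle, rcond=None)
        err = np.sum((A @ np.array([a, c]) - e_cycle) ** 2)
        if best is None or err < best[0]:
            best = (err, a, b, c)
    return best[1:]      # (a, b, c)

if __name__ == "__main__":
    f = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])            # GHz
    e = 0.12 * f ** 2.4 + 0.05 + np.random.default_rng(0).normal(0, 0.002, f.size)
    print(fit_energy_model(f, e))                            # roughly (0.12, 2.4, 0.05)
```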
A. Effective Workload of Program Region
Fig. 3 illustrates the PDFs of two program regions, \( n_i \) and \( n_{i+1} \) as in Fig. 2 (a). For simplicity, we assume a unit function for PDFs. We will generalize the case (i.e., utilize a general form for PDFs) later in this section. Given the PDFs (PDF1 and PDF2 in Fig. 3), the average energy consumption of two program regions, i.e., \( \overline{E}(w_i) \), and the energy optimal remaining workload of program region \( n_i \), i.e., \( w_i \), are calculated using the energy model in Eqn. (2) as follows.
\[
E(x_i, x_{i+1}, w_i) = \left(a f_i^{\,b} + c\right) x_i + \left(a f_{i+1}^{\,b} + c\right) x_{i+1}
\]
\[
\overline{E}(w_i) = \int\!\!\int E(x_i, x_{i+1}, w_i)\, p_i\, p_{i+1}\, dx_i\, dx_{i+1}
\]
\[
\frac{\partial \overline{E}(w_i)}{\partial w_i} = 0 \;\Rightarrow\;
w_i = \overline{x}_i + \left(w_{i+1}^{\,b}\, \overline{x}_{i+1}\right)^{\frac{1}{b+1}} = \overline{x}_i + w_{i+1}^{\text{eff}} \tag{3}
\]
where \( f_i \) and \( f_{i+1} \) are the frequencies set at the two PSPs, i.e., the ratio of the predicted remaining workload to the remaining time-to-deadline at each PSP.
As shown in Eqn. (3), the predicted remaining workload of program region \( n_i \), \( w_i \), consists of the workload of the program region itself, \( \overline{x}_i \), and a second term, \( \left(w_{i+1}^{\,b}\,\overline{x}_{i+1}\right)^{1/(b+1)} \). The second term represents the portion of the remaining workload after program region \( n_i \); we call it the effective remaining workload of \( n_{i+1} \), \( w_{i+1}^{\text{eff}} \). Fig. 3 illustrates that \( w_{i+1}^{\text{eff}} \) represents the PDFs of the remaining program regions after \( n_i \).
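The backward-recursive structure suggested by Eqn. (3) can be sketched as follows. Treating the last region's prediction as its mean cycles and reusing the Eqn. (3) form across a longer cascade are simplifying assumptions made only for this sketch; the paper defers the general cascaded/branching case to [13].

```python
# Sketch of a backward recursion over cascaded program regions, following the shape of
# Eqn. (3)/(4): w_i = mean cycles of n_i + effective remaining workload of everything after it.
# Assumptions: w for the last region equals its mean cycles; b is the fitted exponent of Eqn. (2).

def predicted_remaining_workloads(mean_cycles, b=2.4):
    """mean_cycles[i] is the mean execution cycles of program region n_i."""
    w = [0.0] * len(mean_cycles)
    w_next_eff = 0.0                                       # no work remains after the last region
    for i in reversed(range(len(mean_cycles))):
        w[i] = mean_cycles[i] + w_next_eff                 # w_i = x_i_mean + w_{i+1}_eff
        w_next_eff = (w[i] ** b * mean_cycles[i]) ** (1.0 / (b + 1.0))   # effective workload of n_i
    return w

if __name__ == "__main__":
    print(predicted_remaining_workloads([2.0e6, 1.5e6, 3.0e6]))
```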
Assume the general case where program region \( n_i \) also has a wide PDF. In this case, we need to apply the numerical solution in [13] to obtain the energy optimal remaining workload. However, if such an analysis is performed during runtime, it will cause prohibitively large runtime overhead. Thus, being inspired by Eqn. (3), we model the solution \( w_i \) as follows.
\[
w_i = x_i^{\text{eff}} + w_{i+1}^{\text{eff}} \tag{4}
\]
where \( x_i^{eff} \) is the effective workload of program region \( n_i \). Note that \( w_{i+1}^{eff} \) is obtained as Eqn. (3) shows. In the case that the software program has cascaded program regions and conditional branches, we calculate the effective remaining workload of program region in a similar manner to [13].
We calculate \( x_i^{eff} \) (for each of two modes in Section IV) by exploiting the pre-characterization of solutions. To do that, we represent \( x_i^{eff} \) as follows.
\[
x_i^{\text{eff}} = \mu_i \cdot (1 + \lambda) \tag{5}
\]
Then, we prepare a lookup table, \( LUT_\lambda \) for the residue \( \lambda \) during design-time and perform table lookups during runtime to obtain \( \lambda \). We derived the indexes of \( LUT_\lambda \) as follows: \( \sigma_i/\mu_i, \gamma_i, w_{i+1}^{eff}/\mu_i \), where \( \mu_i, \sigma_i \), and \( \gamma_i \) represent the mean, standard deviation, and skewness of PDF, respectively. The rationale of choosing the three indexes is as follows. First, the residue \( \lambda \) depends on \( \mu_i, w_{i+1}^{eff}, \) and PDF as Appendix explains. The PDF of a mode is modeled as a skewed normal distribution since the decomposed mode usually does not have a nice normal distribution though there is mostly one salient peak per mode. Thus, there can be some level of skewness \( (\gamma_i) \) in the PDF of decomposed mode. The dependence of residue \( \lambda \) on \( \mu_i, w_{i+1}^{eff}, \) and the skewed normal approximation of PDF \( (\sigma_i, \gamma_i) \) gives the above three indexes of \( LUT_\lambda \).
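A small sketch of Eqn. (5) and its table indexes follows: the moments (mean, standard deviation, skewness) are computed from a binned PDF, the residue \( \lambda \) is read from a stubbed-out stand-in for \( LUT_\lambda \), and the mode's effective workload is \( \mu_i(1+\lambda) \). The stub returning 0 is a placeholder; real contents come from design-time characterization.

```python
# Sketch of Eqn. (5) with the three normalized indexes sigma/mu, gamma, and w_eff_next/mu.

def pdf_moments(pdf):
    """Mean, standard deviation and skewness of a binned PDF [(cycles, probability), ...]."""
    mass = sum(p for _, p in pdf)
    mu = sum(x * p for x, p in pdf) / mass
    var = sum(p * (x - mu) ** 2 for x, p in pdf) / mass
    sigma = var ** 0.5
    gamma = (sum(p * (x - mu) ** 3 for x, p in pdf) / mass) / sigma ** 3 if sigma else 0.0
    return mu, sigma, gamma

def lookup_lambda(cv, gamma, w_ratio):
    # Stub standing in for LUT_lambda; real contents are pre-characterized offline.
    return 0.0

def effective_workload(pdf, w_eff_next):
    mu, sigma, gamma = pdf_moments(pdf)
    lam = lookup_lambda(sigma / mu, gamma, w_eff_next / mu)
    return mu * (1.0 + lam)                                # Eqn. (5)

if __name__ == "__main__":
    mode = [(90, 0.2), (100, 0.5), (130, 0.3)]             # one decomposed mode (toy data)
    print(effective_workload(mode, w_eff_next=400.0))
```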
VI. PHASE DETECTION
A phase needs to be characterized by a salient difference in program behavior, especially, in terms of execution cycles. Conventionally, the phase is represented by a vector of execution cycles of basic blocks [4] [5]. As mentioned in Section II, a direct application of the existing phase definition may not be effective in DVFS. In our work, we first define a new vector, called PSP vector, which consists of predicted remaining workloads of program regions. Then, we detect a new phase when the Hamming distance between the representative PSP vector of current phase and that of current period becomes greater than a threshold (set to 10% in our experiments). The rationale is that the predicted remaining workload of a program region represents the entire runtime distributions of remaining program regions. Thus, it can be a representative of future behavior.
The representative PSP vector of a phase is calculated as the median vector of all the PSP vectors of periods belonging to the phase. After the phase detection, in order to utilize the phase-level repetitive behavior, we check to see if there is any previous phase similar to the newly detected one by comparing the PSP vector of the current period and the representative PSP vectors of ever existed phases. If so, we reuse the runtime distribution of the matched previous phase as that of the new phase. If there is no previous phase similar to the new one, then a new phase starts by maintaining a new set of runtime distribution information until another new phase is detected.
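A hedged sketch of this PSP-vector comparison is given below. Interpreting the "Hamming distance" as the fraction of vector entries whose relative difference exceeds the 10% threshold, and using the first vector of a phase as its representative (rather than the median of all its vectors), are simplifications made only for illustration.

```python
# Sketch: classify each period's PSP vector against previously seen phases; a new phase
# is created when no representative vector is close enough, so that a matched phase's
# runtime distributions can be reused.

def psp_distance(vec_a, vec_b, tol=0.10):
    """Fraction of PSP entries that differ by more than `tol` (relative to vec_b)."""
    differing = sum(1 for a, b in zip(vec_a, vec_b)
                    if abs(a - b) > tol * max(abs(b), 1e-9))
    return differing / len(vec_a)

def classify_period(psp_vector, phases, threshold=0.10):
    """Return (phase index, updated phase list), creating a new phase if none matches."""
    for idx, representative in enumerate(phases):
        if psp_distance(psp_vector, representative) <= threshold:
            return idx, phases
    phases = phases + [list(psp_vector)]       # start a new phase with this vector
    return len(phases) - 1, phases

if __name__ == "__main__":
    phases = []
    for vec in ([10.0, 7.0, 3.0], [10.5, 7.2, 3.1], [20.0, 15.0, 6.0]):
        idx, phases = classify_period(vec, phases)
        print("period ->", idx)
```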
VII. EXPERIMENTAL RESULTS
A. Experimental Setup
We use a real software program, H.264 decoder (QCIF, 10fps) in the experiments. In order to investigate the effects of phase and runtime distribution, we used total 4655 frames of pictures. The examples consist of two sets as follows.
- Set 1: A sequence of conventional test pictures: foreman 127 frames \( \rightarrow \) football 89 frames \( \rightarrow \) Stefan 89 frames \( \rightarrow \) akiyo 150 frames (total 455 frames).
- Set 2: Four movie clips from “Lord of the Rings” (total 4200 frames). The per-frame runtime is shown in Fig. 1 (a).
We use the processor model with combined Vdd/Vbb scaling in [13]. We use 11 frequency levels up to 6.0GHz with 0.5GHz step size\(^5\). We run cycle-accurate simulation with a commercial tool, ARM SoCDesigner in order to obtain the PDFs for all the program regions in the software program.
B. Results
We compared the energy consumption of five methods: two offline methods and three online ones. As the offline methods, we use an average execution-cycle based method (AEC) and a runtime distribution-aware method (DIST), both from [13]. For the online methods, we use a control theory-based method (CON) [3], phase-aware average cycle-based one (P-AEC), and the presented one (OURS). Regarding the control theory-based method, we used the coefficients reported in [3] as the initial ones and made a further exploration of coefficients to obtain the best results. The phase-aware average execution cycle-based method, which we also present in this paper, is to exploit only the phase behavior while predicting the remaining workload based on the average remaining execution cycle obtained from the history. In this case, the phase detection is performed based on the Hamming distance in the vectors of average execution cycles of program regions.
Tables I (a) and (b) show the energy consumption data for Sets 1 and 2, respectively.
\(^5\)We set the maximum frequency at 6GHz due to the tight deadline constraint of H.264 decoder, 10fps. We will be able to use lower maximum frequency if the deadline constraint is relaxed. The presented method still works in both cases.
The presented method gives 13.9% ∼ 33.9% further improvements over P-AEC. The improvements are obtained from two differences: (1) workload prediction based on runtime distribution (OURS) or average (P-AEC), and (2) phase detection based on runtime distribution (OURS) or average (P-AEC).
Runtime Overhead
We measured the runtime overhead of the proposed online workload prediction method when running H.264 decoder with "Lord of the Rings" (4200 frames) on ARM926 processor. We set PHASE_UNIT and the number of program regions as 15 frames and eight, respectively. Under the condition, the runtime overhead ranges from 913,761 ∼ 1,003,178 clock cycles, which corresponds to 0.021% of the total execution cycles. Compared with the runtime overhead of the presented online method, that of the design-time solution [13], if the design-time solution is applied during runtime without modification, amounts to about 12 billion clock cycles under the same condition as above. It is 12,000 times bigger than that of the presented online method. Such a high overhead is unacceptable since the runtime overhead alone takes 2.5 times longer runtime than that of H.264 decoder run.
Memory Overhead of LUTs
The presented method requires two types of LUTs: \( LUT_{\beta} \) and \( LUT_{\lambda} \). The LUTs require memory space, and the memory overhead largely depends on the number of steps (scales) in the indexes of the tables. As the numbers of steps increase, more accurate workload prediction is achieved at a higher memory overhead. In our implementation, the total memory overhead of the LUTs is 20KB, obtained by adjusting the step sizes and by compressing the contents of the LUTs while exploiting the value locality in the tables.
VIII. Conclusion
In this paper, we presented a novel online DVFS method that utilizes both phase behavior and runtime distribution to give accurate workload predictions and thereby lower energy consumption under combined Vdd/Vbb scaling. It performs a bi-modal analysis to practically account for the multi-modal characteristics of the runtime distribution. The runtime distribution-aware workload prediction exploits pre-characterized data in order to minimize the runtime overhead of the online method. For phase detection in DVFS, a new concept of phase based on the runtime distribution of program regions is presented. Experimental results show that the presented method offers 6.6% ∼ 33.9% further energy savings compared with existing offline and online methods.
REFERENCES
APPENDIX
If we substitute \( w_i \) with Eqn. (4) in Eqn. (3) and if we assume that the PDF of \( n_i \) is represented by \( M \) bins, i.e., \( M \) pairs of execution cycle and probability \( \langle x_i(k), p_i(k) \rangle \), we obtain the following equation.
\[
\overline{x_i} + (w_i^{eff})^{b+1} \cdot \sum_{k=1}^{M} \frac{-x_i(k)\,p_i(k)}{(x_i^{eff} + w_i^{eff} - x_i(k))^{b+1}} = 0
\]
As shown in the above equation, \( x_i^{eff} \) depends on \( \overline{x_i} \), \( w_i^{eff} \), and the PDF of \( n_i \), i.e., the pairs \( \langle x_i(k), p_i(k) \rangle \).
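This equation has no closed-form solution in general, so \( x_i^{eff} \) must be found numerically. The Python sketch below is only an illustration under our assumptions: that \( x_i^{eff} \) is the unknown, that the exponent on \( w_i^{eff} \) is \( b+1 \) (matching the denominator), and that the PDF is given as \( (x_i(k), p_i(k)) \) bins; the function and parameter names are ours, not the paper's.

```python
# Sketch: solve the appendix equation for x_eff by bisection, given the mean
# x_bar, the effective slack w_eff, the exponent b, and the PDF bins (x_k, p_k).
def solve_x_eff(x_bar, w_eff, b, bins, tol=1e-9):
    def g(x_eff):
        s = sum(x_k * p_k / (x_eff + w_eff - x_k) ** (b + 1) for x_k, p_k in bins)
        return x_bar - w_eff ** (b + 1) * s
    lo = max(x_k for x_k, _ in bins) - w_eff + 1e-9   # keep all denominators positive
    hi = lo + 1.0
    while g(hi) < 0:          # grow the bracket until the sign changes
        hi *= 2
    while hi - lo > tol:      # bisection: g is increasing in x_eff on this interval
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# toy PDF with two bins, purely illustrative values
print(solve_x_eff(x_bar=100.0, w_eff=50.0, b=2, bins=[(80.0, 0.6), (120.0, 0.4)]))
```

Bisection is applicable because the left-hand side increases monotonically in \( x_i^{eff} \) once every denominator is positive.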
COMP251: Greedy algorithms
Jérôme Waldispühl & Giulia Alberini
School of Computer Science
McGill University
Based on (Cormen et al., 2002)
Based on slides from D. Plaisted (UNC) & (Goodrich & Tamassia, 2009)
Overview
- Algorithm design technique to solve optimization problems.
- Problems exhibit optimal substructure.
- Idea (the greedy choice):
- When we have a choice to make, make the one that looks best right now.
- Make a locally optimal choice in hope of getting a globally optimal solution.
Outline
• Definition of the activity selection problem
• Greedy choice & optimal sub-structure
• Greedy algorithm for the activity selection problem
• Text compression & Huffman encoding
Greedy Strategy
The choice that seems best at the moment is the one we go with.
– Prove that when there is a choice to make, one of the optimal choices is the greedy choice. Therefore, it is always safe to make the greedy choice.
– Show that all but one of the sub-problems resulting from the greedy choice are empty.
**Activity-selection Problem**
- **Input:** Set $S$ of $n$ activities, $a_1, a_2, \ldots, a_n$.
- $s_i = \text{start time of activity } i$.
- $f_i = \text{finish time of activity } i$.
- **Output:** Subset $A$ of maximum number of compatible activities.
- 2 activities are compatible, if their intervals do not overlap.
Example:

Activity-selection Problem
<table>
<thead>
<tr>
<th>i</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
</tr>
</thead>
<tbody>
<tr>
<td>$s_i$</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>8</td>
</tr>
<tr>
<td>$f_i$</td>
<td>2</td>
<td>3</td>
<td>5</td>
<td>6</td>
<td>9</td>
<td>9</td>
<td>10</td>
</tr>
</tbody>
</table>
Activities sorted by finishing time.
Optimal compatible set: \( \{ a_1, a_3, a_5 \} \)
Optimal Substructure
• Assume activities are sorted by finishing times.
• Suppose an optimal solution includes activity $a_k$. This solution is obtained from:
– An optimal selection of $a_1, \ldots, a_{k-1}$ activities compatible with one another, and that finish before $a_k$ starts.
– An optimal solution of $a_{k+1}, \ldots, a_n$ activities compatible with one another, and that start after $a_k$ finishes.
Optimal Substructure
• Let $S_{ij} =$ subset of activities in $S$ that start after $a_i$ finishes and finish before $a_j$ starts.
$$S_{ij} = \{ a_k \in S : f_i \leq s_k < f_k \leq s_j \}$$
• $A_{ij} =$ optimal solution to $S_{ij}$
• $A_{ij} = A_{ik} \cup \{ a_k \} \cup A_{kj}$
Recursive Solution
• Subproblem: Selecting the maximum number of mutually compatible activities from $S_{ij}$.
• Let $c[i, j] =$ size of maximum-size subset of mutually compatible activities in $S_{ij}$.
Recursive solution:
$$c[i, j] = \begin{cases}
0 & \text{if } S_{ij} = \emptyset \\
\displaystyle\max_{a_k \in S_{ij}} \{ c[i, k] + c[k, j] + 1 \} & \text{if } S_{ij} \neq \emptyset
\end{cases}$$
Note: We do not know (yet) which $k$ to use for the optimal solution.
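To make the cost of this recursion concrete, here is a small Python sketch (ours, not from the slides) that evaluates $c[i, j]$ directly with memoization, using sentinel activities $a_0$ and $a_{n+1}$:

```python
from functools import lru_cache

def max_compatible(starts, finishes):
    """Evaluate the recursion c[i, j] with memoization, using sentinel
    activities a_0 (finish time 0) and a_{n+1} (start time +infinity)."""
    n = len(starts)
    s = [0] + list(starts) + [float("inf")]
    f = [0] + list(finishes) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0
        for k in range(i + 1, j):                # candidate a_k strictly between i and j
            if f[i] <= s[k] and f[k] <= s[j]:    # a_k belongs to S_ij
                best = max(best, c(i, k) + c(k, j) + 1)
        return best

    return c(0, n + 1)

# activities from the slides, already sorted by finish time
print(max_compatible([0, 1, 2, 4, 5, 6, 8], [2, 3, 5, 6, 9, 9, 10]))   # 3
```

Even memoized, this examines $\Theta(n^2)$ subproblems with $\Theta(n)$ choices each; the greedy choice developed below removes both the choice of $k$ and one of the two subproblems.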
Analysis of complexity
<table>
<thead>
<tr>
<th></th>
<th>Naïve approach</th>
</tr>
</thead>
<tbody>
<tr>
<td># subproblems in optimal solution</td>
<td>2</td>
</tr>
<tr>
<td># choices to consider</td>
<td>j-i-1</td>
</tr>
</tbody>
</table>
\[ A_{ij} = A_{ik} \cup \{ a_k \} \cup A_{kj} \]
In other words, we have a linear number of decompositions to process (i.e., the choices of \( a_k \)), and each of these choices makes two recursive calls (exponential growth).
Greedy choice
**Theorem:**
Let \( S_{ij} \neq \emptyset \), and let \( a_m \) be the activity in \( S_{ij} \) with the earliest finish time \( f_m = \min \{ f_k : a_k \in S_{ij} \} \). Then:
1. \( a_m \) is used in some maximum-size subset of mutually compatible activities of \( S_{ij} \).
2. \( S_{im} = \emptyset \), so that choosing \( a_m \) leaves \( S_{mj} \) as the only nonempty subproblem.
Greedy choice
Proof:
(1) \( a_m \) is used in some maximum-size subset of mutually compatible activities of \( S_{ij} \).
- Let \( A_{ij} \) be a maximum-size subset of mutually compatible activities in \( S_{ij} \) (i.e. an optimal solution of \( S_{ij} \)).
- Order activities in \( A_{ij} \) in monotonically increasing order of finish time, and let \( a_k \) be the first activity in \( A_{ij} \).
- If \( a_k = a_m \) \( \Rightarrow \) done.
- Otherwise, let \( A'_{ij} = A_{ij} - \{ a_k \} \cup \{ a_m \} \)
- \( A'_{ij} \) is valid because \( a_m \) finishes before \( a_k \)
- Since \( |A_{ij}| = |A'_{ij}| \) and \( A_{ij} \) maximal \( \Rightarrow A'_{ij} \) maximal too.
Greedy choice
Proof:
(2) $S_{im} = \emptyset$, so that choosing $a_m$ leaves $S_{mj}$ as the only nonempty subproblem.
If there is $a_k \in S_{im}$ then $f_i \leq s_k < f_k \leq s_m < f_m \Rightarrow f_k < f_m$ which contradicts the hypothesis that $a_m$ has the earliest finishing time.
Greedy choice
<table>
<thead>
<tr>
<th></th>
<th>Before theorem</th>
<th>After theorem</th>
</tr>
</thead>
<tbody>
<tr>
<td># subproblems in optimal solution</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td># choices to consider</td>
<td>j-i-1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>( A_{ij} = A_{ik} \cup { a_k } \cup A_{kj} )</td>
<td>( A_{ij} = { a_m } \cup A_{mj} )</td>
</tr>
</tbody>
</table>
We can now solve the problem \( S_{ij} \) top-down:
- Choose \( a_m \in S_{ij} \) with the earliest finish time (greedy choice).
- Solve \( S_{mj} \).
Activity-selection Problem
<table>
<thead>
<tr>
<th>i</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
</tr>
</thead>
<tbody>
<tr>
<td>$s_i$</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>8</td>
</tr>
<tr>
<td>$f_i$</td>
<td>2</td>
<td>3</td>
<td>5</td>
<td>6</td>
<td>9</td>
<td>9</td>
<td>10</td>
</tr>
</tbody>
</table>
Activities sorted by finishing time.
Recursive Algorithm
Recursive-Activity-Selector \((s, f, i, n)\)
1. \(m \leftarrow i+1\)
2. \(\textbf{while } m \leq n \text { and } s_m < f_i \quad \text{// Find first activity in } S_{i,n+1}\)
3. \(\textbf{do } m \leftarrow m+1\)
4. \(\textbf{if } m \leq n\)
5. \(\textbf{then return } \{a_m\} \cup \text{Recursive-Activity-Selector}(s, f, m, n)\)
6. \(\textbf{else return } \emptyset\)
Initial Call: Recursive-Activity-Selector \((s, f, 0, n+1)\)
Complexity: \(\Theta(n)\)
Note 1: We assume activities are already ordered by finishing time.
Note 2: Straightforward to convert the algorithm to an iterative one.
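As a concrete illustration of Note 2, here is a small iterative Python sketch (ours, not from the slides); it assumes the activities are already sorted by finish time.

```python
def select_activities(starts, finishes):
    """Greedy activity selection: always take the compatible activity
    that finishes earliest. Activities must be sorted by finish time."""
    selected = [0]                      # the first activity finishes earliest
    last_finish = finishes[0]
    for m in range(1, len(starts)):
        if starts[m] >= last_finish:    # compatible with the last chosen activity
            selected.append(m)
            last_finish = finishes[m]
    return selected

# activities from the slides (printed with 1-based indices)
s = [0, 1, 2, 4, 5, 6, 8]
f = [2, 3, 5, 6, 9, 9, 10]
print([i + 1 for i in select_activities(s, f)])   # [1, 3, 5]
```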
Typical Steps
- Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
- Prove that there is always an optimal solution that makes the greedy choice (greedy choice is safe).
- Show that greedy choice and optimal solution to subproblem $\Rightarrow$ optimal solution to the problem.
- Make the greedy choice and **solve top-down**.
- You may have to preprocess input to put it into greedy order (e.g. sorting activities by finish time).
Elements of Greedy Algorithms
No general way to tell if a greedy algorithm is optimal, but two key ingredients are:
• Greedy-choice Property.
– We can build a globally optimal solution by making a locally optimal (greedy) choice.
• Optimal Substructure.
Text Compression
• Given a string X, efficiently encode X into a smaller string Y (Saves memory and/or bandwidth)
A → 0; B → 10; C → 110; D → 1110
DDCB → 1110 1110 110 10 (13 bits)
A → 1110; B → 110; C → 10; D → 0
DDCB → 0 0 10 110 (7 bits)
• A good approach: **Huffman encoding**
– Compute frequency \( f(c) \) for each character \( c \).
– Encode high-frequency characters with short code words
– No code word is a prefix for another code
– Use an optimal encoding tree to determine the code words
Encoding Tree Example
- A **code** is a mapping of each character of an alphabet to a binary code-word.
- A **prefix code** is a binary code such that no code-word is the prefix of another code-word.
- An **encoding tree** represents a prefix code.
- Each external node (leaf) stores a character.
- The code word of a character is given by the path from the root to the external node storing the character (0 for a left child and 1 for a right child).
<table>
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>e</th>
</tr>
</thead>
<tbody>
<tr>
<td>00</td>
<td>010</td>
<td>011</td>
<td>10</td>
<td>11</td>
</tr>
</tbody>
</table>
Encoding Example
Initial string: $X = \text{acda}$
Encoded string: $Y = 00 \ 011 \ 10 \ 00$
Encoding Tree Optimization
• Given a text string \( X \), we want to find a prefix code for the characters of \( X \) that yields a small encoding for \( X \)
– Rare characters should have long code-words
– Frequent characters should have short code-words
• Example
– \( X = \text{abracadabra} \)
– \( T_1 \) encodes \( X \) into 29 bits
– \( T_2 \) encodes \( X \) into 24 bits
Example
$X = \text{abracadabra}$
Frequencies
<table>
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>r</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>2</td>
</tr>
</tbody>
</table>
Diagram of a binary tree representing the frequencies of each letter.
Extended Huffman Tree Example
String: a fast runner need never be afraid of the dark
<table>
<thead>
<tr>
<th>Character</th>
<th>(space)</th>
<th>a</th>
<th>b</th>
<th>d</th>
<th>e</th>
<th>f</th>
<th>h</th>
<th>i</th>
<th>k</th>
<th>n</th>
<th>o</th>
<th>r</th>
<th>s</th>
<th>t</th>
<th>u</th>
<th>v</th>
</tr>
</thead>
<tbody>
<tr>
<td>Frequency</td>
<td>9</td>
<td>5</td>
<td>1</td>
<td>3</td>
<td>7</td>
<td>3</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>4</td>
<td>1</td>
<td>5</td>
<td>1</td>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
Huffman tree
Huffman’s Algorithm
- Given a string $X$, Huffman’s algorithm constructs a prefix code that minimizes the size of the encoding of $X$.
- It runs in time $O(n + d \log d)$, where $n$ is the size of $X$ and $d$ is the number of distinct characters of $X$.
- A heap-based priority queue is used as an auxiliary structure.
**Algorithm** $\text{HuffmanEncoding}(X)$
**Input** string $X$ of size $n$
**Output** optimal encoding trie for $X$
1. $C \leftarrow \text{distinctCharacters}(X)$
2. $\text{computeFrequencies}(C, X)$
3. $Q \leftarrow$ new empty heap
4. for all $c \in C$
- $T \leftarrow$ new single-node tree storing $c$
- $Q$.insert($\text{getFrequency}(c)$, $T$)
5. while $Q$.size() > 1
- $f_1 \leftarrow Q$.minKey()
- $T_1 \leftarrow Q$.removeMin()
- $f_2 \leftarrow Q$.minKey()
- $T_2 \leftarrow Q$.removeMin()
- $T \leftarrow \text{join}(T_1, T_2)$
- $Q$.insert($f_1 + f_2$, $T$)
6. return $Q$.removeMin()
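The following Python sketch (ours, not from the slides) follows the same procedure with `heapq` as the priority queue; the tie-breaking counter and the tuple representation of internal nodes are implementation choices, not part of the pseudocode.

```python
import heapq
from collections import Counter

def huffman_tree(text):
    """Build an encoding tree: repeatedly join the two trees with the smallest
    total frequency. Leaves are characters; internal nodes are (left, right) pairs."""
    freq = Counter(text)                                   # computeFrequencies
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                                    # insert all single-node trees
    next_id = len(heap)                                    # counter breaks frequency ties
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)                    # two minimum-frequency trees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (t1, t2))) # join(T1, T2)
        next_id += 1
    return heap[0][2]

def code_words(tree, prefix=""):
    """Read off the prefix code: 0 for a left child, 1 for a right child."""
    if isinstance(tree, tuple):
        left, right = tree
        return {**code_words(left, prefix + "0"), **code_words(right, prefix + "1")}
    return {tree: prefix or "0"}                           # single-character input

codes = code_words(huffman_tree("abracadabra"))
print(codes)
print(sum(len(codes[ch]) for ch in "abracadabra"))         # 23 bits in total
```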
Contract-Based Cooperation for Ambient Intelligence: Proposing, Entering and Executing Contracts Autonomously
Fatma Başak Aydemir*
Dept of Computer Engineering
Boğaziçi University
İstanbul, Turkey
[email protected]
ABSTRACT
Ambient Intelligence (AmI) describes environments that sense and react to the humans in time to improve their living quality. Software agents are important in realizing such environments. While existing work has focused on individual agent’s reactions, more interesting applications will take place when the agents cooperate to provide composed services to humans. When cooperation is required, the environment needs mechanisms that regulate the agents’ interactions but also respect their autonomy. Accordingly, this paper develops a contract-based approach for offering composed services. At runtime, agents autonomously decide whether they want to enter contracts. Agents then act to fulfill their contracts. Ontologies are used to capture domain information. We apply this multiagent system on an intelligent kitchen domain and show how commitments can be used to realize cooperation. We study our application on realistic scenarios.
Keywords
Agents, commitments, ontologies
1. INTRODUCTION
Ambient Intelligence (AmI) indicates environments that are aware of and responsive to human presence. Besides various types of sensors and nanotechnology, software agents are one of the emerging technologies for AmI. Intelligent agents are used for a wide range of tasks, from searching for information to adaptive decision making [11]. In this respect, AmI can be realized by a multiagent system. Multiagent systems are systems where multiple intelligent agents interact [10]. These interactions are generally given a meaning using commitments, which are contracts among agents to satisfy certain properties [8]. Using contracts among agents regulates the interactions and enables cooperation among them.
In this paper, we propose an AmI system which consists of autonomous agents. The system is dynamic in various ways: resources can be added or consumed, agents may enter and leave the system or they can change the services they provide. We follow a user centered design focusing on the user’s needs and demands [7] for this system, as it is consistent with the human-centric nature of the AmI systems. One of the intelligent agents represents the user of the system and it is called User Agent (UA). Other agents cooperate with UA in order to satisfy the user’s needs. One distinguishing aspect is that predefined contracts, which are generated before agent interaction, do not exist in the system. Instead of relying on predefined contracts, relevant contracts are created in conformity with the internal states of the parties during agent interactions. The internal states of the agents are not visible to other agents and the agents decide whether or not to take part in the contracts themselves. When a contract cannot be created, it is UA's duty to establish another one that guarantees realization of the properties needed to satisfy the user.
The rest of the paper is organized as follows: Section 2 explains the advantages of the dynamically generated contracts over the statically generated ones. Section 3 describes the overall system architecture and explains the contract evolutions. Section 4 demonstrates the application of the system on an example domain. Section 5 studies the system over selected scenarios and Section 6 compares the system with the related work.
2. CONTRACTS FOR AMBIENT INTELLIGENCE
A contract between agents X and Y is represented as CC(X,Y,Q,P) and interpreted as: the debtor agent X is committed to bring about the proposition P for the creditor agent Y once the condition Q is realized. Contracts assure that the creditor obtains the promised properties and ease the process of tracing the source of possible exceptions. In some multiagent systems, the system is designed so that the roles of the agents are fixed, agent capabilities do not change, the resources to realize these capabilities are determined, and the agents' access to these resources is unlimited. In such static environments, contracts can be specified at compile time and agents can follow these contracts at run time. Since the system is not going to change at run time, there is no reason to attempt to generate the contracts at run time.
Consider a multiagent AmI system with UA and two other agents, Agent 1 and Agent 2. Assume that the following contracts are generated at design time and adopted by the agents:
1. CC(Agent 1, UA, Service 1 Request, Service 1)
2. CC(Agent 2, UA, Service 2 Request, Service 2)
That is, if UA requests Service 1, then Agent 1 will always provide that service. Similarly, if UA requests Service 2, then Agent 2 will always provide that service. These two contracts work well as long as the agents of the system, their capabilities, the resources and the user preferences do not change.
**Scarce Resources:** The scenario depicted above is far from realistic. Any change in the environment prevents the system from satisfying the user's needs. Consider the case where the resources necessary to provide Services 1 and 2 are no longer available. For example, Agent 1 may run out of Resource 1, which is fundamental to serving Service 1. So, Agent 1 fails to serve Service 1 when requested, although it is committed to serving it. This leads to an overall system failure since UA is not served a part of the service bundle. In such cases, the statically generated contracts described above are not sufficient to realize the user's preferences. Instead, the agents should decide whether or not to take part in the contracts, and they should try to generate new contracts that may help to fulfill the former ones. For the scarce Resource 1 example, Agent 1 may ask for a new contract including the following commitment: CC(Agent 1, UA, Resource 1, Service 1), which means that if UA provides Resource 1, then Agent 1 can provide Service 1. If UA accepts the newly proposed contract and provides Resource 1 to Agent 1, Agent 1 provides Service 1 to UA. Service 1 would not be provided if the latter contract had not been generated by Agent 1 dynamically.
**Dynamic Environment:** In an open environment, agents may leave the system, agents that have left may come back, or new agents may enter the system. When UA tries to serve a bundle, the states of the agents should be considered. It is not rational to wait for a service from an agent that has already left the system, even though it is committed to serve it. So, the appropriate agents should be carefully selected before agreeing on any commitment. For example, in the above scenario, Agent 1 decides to leave the system for some reason; meanwhile, a new agent, Agent 3, which offers the same services as Agent 1, enters the system. Although there is a contract agreed on with Agent 1, in order to receive Service 1, UA should make another contract with Agent 3: CC(Agent 3, UA, Service 1 Request, Service 1). If UA can dynamically create a new contract with Agent 3, it can ensure receiving Service 1.
**Dynamically Changing Services:** A multiagent system does not necessarily contain agents that have fixed services. Agents may learn new services or stop offering some of the existing ones. In such cases, making prior arrangements to serve a bundle may not work due to changes in agent services. The agents to be interacted with should be carefully selected according to the services they offer. In such systems, it may also be impossible to serve a predetermined bundle; the service bundle may need to be generated dynamically too.
3. **APPROACH**
We develop a contract-based multiagent system for ambient intelligence. The agents cooperate by creating and carrying out contracts that they dynamically generate at run time.
**Architecture:** Main components of the system are depicted in Figure 1. Agents are shown in rectangle nodes and the ontologies are shown in ellipse nodes. Line edges describe two way interaction whereas dashed edges represent access to the ontologies.
There is one UA, which interacts with all of the agents in the system. UA keeps track of the user's needs and desires and tries to provide the user her preferred set of services. Elements of this set are often served by various agents, so the other agents cooperate with UA to offer their services. UA usually starts the communication; however, other agents are also able to make contract requests. All agents decide for themselves whether or not to enter a contract.
There are two ontologies that are accessible by all of the agents in the system. An ontology is the description of the conceptualization of a domain [1]. The elements described in an ontology are individuals, which are the objects of the domain; classes, which are collections of objects; attributes, which are properties of objects; relations, which are the connections between objects; and rules defined on these elements [5].
The first ontology is the environment ontology, which describes the environment. The agent, contract, and service bundle descriptions, as well as additional spatial information about the environment, are described in the environment ontology. Although the descriptions of the agent and contract structures are depicted in this ontology, information about individuals is not kept here. The information not revealed in this ontology is part of the agent's initial state and is managed by the related agent itself. The second ontology is the domain ontology. In this ontology, detailed descriptions of the services and the other domain-dependent information are provided.
Figure 2 depicts the structure of an agent. Each agent in the system has access to the environment ontology and the domain ontology. Every agent has a local inventory where it keeps the availability information on the service resources. The inventory of an agent is consulted first to decide whether the necessary service resources are available. The information about the agent's inventory is private and is not shared with the other agents of the system. The contract manager of an agent manages the contracts of the agent: it updates the contract states and traces the fulfillment of the propositions and conditions. Each agent handles its contracts itself, so there is no common contract base in the system, just as there is none in real life.
The reasoner of the agent makes the decisions, takes actions, and handles messages. **Contract Lifecycle:**

In our system, the interaction among agents is conducted via messages and is based on contracts between two agents. Contracts are dynamic entities of the system, and their states are updated by the agents after receiving or sending certain types of messages. The states of contracts used in the system are:
- **requested:** These contracts are requested from an agent, however the reply for the request has not been received yet.
- **rejected:** These contracts are the ones that were requested and received a negative response in return. They do not have any binding effect on either of the parties.
- **conditional:** These contracts are agreed on and created by both parties. However, their conditions and propositions remain unsatisfied.
- **cancelled:** These contracts are cancelled by the debtor.
- **active:** These contracts are agreed on and created by both parties. Moreover, their conditions are satisfied by the creditor.
- **released:** These contracts are released by the creditor, so the debtor of these contracts is no longer committed to fulfill the propositions of the contracts.
- **fulfilled:** These contracts are agreed on and created by both parties. Their conditions and propositions are satisfied.
The message types used to carry these contract, their conditions and propositions are listed below:
- **request:** These messages are used to form a contract, thereby leading the contract to its requested state.
- **reject:** A reject message changes the contract state from requested to rejected.
- **confirm:** A confirm message updates the states of the requested contracts to conditional.
- **cancel:** A cancel message carries a contract that is cancelled by the debtor. The cancel message changes the state of the contract from conditional to cancelled.
- **release:** A release message carries a contract that is released by the creditor. The release message changes the state of the contract from conditional to released.
- **inform:** An inform message is used to fulfill the conditions of the conditional contracts (thereby, making the contract active) or the propositions of the active contracts (thereby, making the contract fulfilled).

Figure 3 explains the state changes of contracts. A contract is created when it is requested by an agent. If the agent that receives the request rejects the contract, its state is changed to rejected. If the contract is accepted by the other party, its state is changed to conditional. Contracts that are in the conditional state may be cancelled by the debtor agent or released by the creditor agent. If the condition of the contract is provided, its state is changed to active. When the proposition of the contract is made available by the debtor, its state is changed to fulfilled.
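The transitions above can be summarized as a small state machine. The following Python sketch is only an illustration of the lifecycle as described; the names and the table-driven style are our choices, not the paper's implementation.

```python
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()
    REJECTED = auto()
    CONDITIONAL = auto()
    CANCELLED = auto()
    RELEASED = auto()
    ACTIVE = auto()
    FULFILLED = auto()

# (current state, received message) -> next state, as described for Figure 3.
TRANSITIONS = {
    (State.REQUESTED,   "reject"):  State.REJECTED,
    (State.REQUESTED,   "confirm"): State.CONDITIONAL,
    (State.CONDITIONAL, "cancel"):  State.CANCELLED,
    (State.CONDITIONAL, "release"): State.RELEASED,
    (State.CONDITIONAL, "inform"):  State.ACTIVE,      # condition satisfied by the creditor
    (State.ACTIVE,      "inform"):  State.FULFILLED,   # proposition satisfied by the debtor
}

def next_state(state, message):
    """Apply a message to a contract; unlisted combinations leave the state unchanged."""
    return TRANSITIONS.get((state, message), state)

state = State.REQUESTED        # a request message creates the contract in this state
for msg in ("confirm", "inform", "inform"):
    state = next_state(state, msg)
print(state)                   # State.FULFILLED
```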
**Agent Lifecycle:** Workflow diagram for UA is given in Fig. 4. When UA tries to establish the contracts for a service bundle, it starts by getting the addresses of the agents that provide the services in the bundle. If it cannot find any agent for one or more services, the bundle cannot be served (Failure). If there are agents that serve the services of the bundle, UA sends them contract requests and starts waiting for the replies. Once it receives a confirmation for a contract, it checks whether it has gathered confirmations for all contracts it has requested. If there are still contracts to be confirmed, UA continues to wait for the replies. If all of the contracts are confirmed, UA provides the conditions of the contracts. UA's duty ends here, as it is the other agents' duty to provide the promised services, and exceptions are not in the scope of this work. If UA receives a rejection instead of a confirmation, it immediately searches for other agents that serve the same service. If there are no such agents, UA cannot provide the bundle to the user (Failure). If there are other agents serving the same service, UA repeats the process of requesting contracts. UA may also receive a contract request as a reply to its initial request. When an agent, including UA, receives a contract request, it must decide whether or not to create it. There are three possible reactions: 1) rejecting to create the contract, 2) creating the contract in line with the requester's desire, 3) requesting another contract that has the same proposition as the requested contract but a different set of conditions. It is assumed that agents are willing to create contracts unless they lack the necessary amount of ingredients, and that they do not receive any contract requests beyond their serving capabilities.
Algorithm 1: Request Received
Input: request: Request Message received
Output: m: Message to send
1 String id = request.getConversationID();
2 Contract c = request.getContent();
3 boolean found = false;
4 for i ← 1 to contracts.size() do
5     if contracts(i).conversationID == id then
6         similarity = Similarity(c.proposition, contracts(i).proposition);
7         if similarity > threshold then
8             m.type ← confirm
9         else
10            m.type ← reject
11        found ← true;
12        break;
13 if not found then
14    ResourceList rList = c.getProposition();
15    ResourceList missing;
16    for i ← 1 to rList.size() do
17        Resource r = rList.elementAt(i);
18        double invQ = Inventory.getResourceQuantity(r);
19        if r.getRequestedQuantity() > invQ then
20            missing.add(r);
21    if missing.size() > 0 then
22        m.type ← request;
23        c.condition ← missing;
24        m.add(c);
25        return m
26    m.type ← confirm;
27    m.add(c);
28    return m
Algorithm 1 explains the behavior of an agent other than UA when it receives a request message. The message received can start a new conversation between the agents, or it might carry on a previous one. So, an agent checks whether the message is part of a previous conversation or not (line 5). If the message is related to a previous contract, it retrieves the contract from its contract base and calculates the similarity between the conditions of the two contracts (line 6). If the similarity is above a threshold set by the agent itself (line 7), it confirms the contract and prepares a confirmation message to be sent to the requester via the confirmation manager of the agent (line 8). If the similarity is below the threshold, a rejection message is prepared instead (line 10). If the message is not related to any other conversation, the agent checks its inventory for the proposition (line 18). If the proposition is not ready in the inventory (line 19), the agent then checks the inventory for the ingredients of the proposition. If there are some missing ingredients (line 21), the agent prepares a request message asking for the missing ingredients in return for the proposition of the contract and returns this message (lines 20-25). Otherwise, the agent prepares a confirm message (lines 26, 27).
In addition to receiving a request message, an agent can also receive an inform message. If that is the case, the agent extracts the message to get its content and finds the relevant contracts through its contract manager. If it finds a contract whose condition matches the content and whose state is conditional, it updates the state to active. This means that the agent itself is now responsible for carrying out the rest of the contract by bringing about its proposition. On the other hand, if it finds a contract whose proposition matches the content and whose state is active, meaning the sender agent is fulfilling a contract, it updates its state to fulfilled.
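For illustration only, the request-handling behavior of Algorithm 1 can be sketched in a few lines of Python; contracts and the inventory are plain dictionaries here, and the similarity function and threshold are stand-ins for the ontology-based check, so none of these names come from the paper.

```python
# Illustrative only: Algorithm 1's request handling with plain dictionaries.
def handle_request(request, known_contracts, inventory, similarity, threshold=0.7):
    contract = dict(request["contract"])
    # Continuation of an earlier conversation: confirm only if the new
    # proposition is similar enough to the one already under negotiation.
    for known in known_contracts:
        if known["conversation_id"] == request["conversation_id"]:
            ok = similarity(contract["proposition"], known["proposition"]) > threshold
            return {"type": "confirm" if ok else "reject", "contract": contract}
    # New conversation: collect resources whose requested quantity exceeds stock.
    missing = {name: qty for name, qty in contract["proposition"].items()
               if qty > inventory.get(name, 0)}
    if missing:                                   # counter-request for what is missing
        contract["condition"] = missing
        return {"type": "request", "contract": contract}
    return {"type": "confirm", "contract": contract}

reply = handle_request(
    {"conversation_id": "c1", "contract": {"proposition": {"cake flour": 2}}},
    known_contracts=[], inventory={"cake flour": 1},
    similarity=lambda a, b: 1.0 if a == b else 0.0)
print(reply["type"])   # request: the agent asks the requester for the missing cake flour
```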
4. EXAMPLE DOMAIN
We apply our approach on an AmI kitchen domain. An AmI kitchen consists of various autonomous agents such as the Coffee Machine Agent (CMA), Tea Machine Agent (TMA), Fridge Agent (FA) and Mixer Agent (MA), which represent devices in a regular kitchen. Each of these agents provides different services. The agents use ingredients related to their services as resources. For example, CMA, which serves coffee, has coffee beans and water in its inventory. It may also have some coffee ready in its inventory. Similarly, TMA, which serves tea, is expected to have tea leaves and water. On the other hand, FA has some cake to serve. The UA of the system tries to serve the user a service bundle, which in this domain is a menu consisting of several beverages and dishes. Each element of a menu is usually served by a different agent of the kitchen.
The user of the system is satisfied when she gets the exact menu she prefers. Establishing contracts is a necessity in such a system for user satisfaction since the static contracts will not work for the reasons described in Section 2. Agents of the system may get broken, broken ones may be fixed or replaced, or new agents may enter the system, so the assuring power of the predefined contracts established between agents is limited. The availability of the resources is limited, so the agents do not always have access to the resources they need.
The environment ontology of this system describes the agent structure, the contract structure, and spatial information about the kitchen such as the temperature and humidity level. The domain ontology of this environment is a food ontology, in which various types of food and beverages, together with their ingredients, are described. Agents use the recipes provided in the ontology for their services. In this ontology, the ingredients and types of some of the most popular items, such as coffee and tea, are carefully classified, and a similarity factor is placed between pairs that are substitutable. The similarity factor shows how well these items can substitute each other: the higher the similarity factor, the stronger the similarity relationship between the compared items. These similarity factors are used to serve the demanded dish with a slightly different recipe when the original ingredients are not available in the inventory of the agent and UA cannot establish a contract that promises the missing ingredient. In such cases, the agent may try to prepare the dish using a substitute for the missing ingredient. Consider three types of flour classified under the Wheat Flour class: All Purpose Flour, Cake Flour, and Bread Flour. All Purpose and Cake Flour are 0.7 similar, whereas Cake Flour and Bread Flour are 0.8 similar. When a service which requires one of these types of flour is requested and the exact resource is not available, the missing resource may be substituted by one of the similar types, leading to the same service served with tolerably different resources.
The detail level of a domain ontology changes from system to system. Agents of another kitchen may use a domain ontology just for the ingredients without the similarity relationship. Another one may also include the types of silverware that should be used with a specific dish.
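As an illustration of how such similarity factors might be used, the following Python sketch picks the best available substitute; the table values come from the flour example above, while the function name, data layout, and threshold handling are our assumptions, not the paper's implementation.

```python
# Hypothetical similarity table; values taken from the flour example above.
SIMILARITY = {
    frozenset({"all purpose flour", "cake flour"}): 0.7,
    frozenset({"cake flour", "bread flour"}): 0.8,
}

def best_substitute(resource, available, threshold):
    """Return the most similar available resource if it clears the agent's threshold."""
    scored = [(SIMILARITY.get(frozenset({resource, item}), 0.0), item) for item in available]
    score, item = max(scored, default=(0.0, None))
    return item if score >= threshold else None

# MA is out of cake flour; with a tolerant threshold it accepts bread flour,
# with a stricter one (as in Scenario 3) it does not.
print(best_substitute("cake flour", ["bread flour", "all purpose flour"], 0.75))  # bread flour
print(best_substitute("cake flour", ["bread flour", "all purpose flour"], 0.9))   # None
```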
4.1 Scenario 1
For the first scenario, the user tells UA that she wants a menu consisting of two different services, coffee and cake, which should be served together. UA needs to find the agents serving the menu items, in this case CMA and FA. Then, UA needs to establish contracts for all of the items in the menu and receive the items. CMA needs some coffee beans to serve coffee, and a contract is created when UA accepts to provide coffee beans to CMA. Once all contracts are established, UA fulfills the conditions of the contracts and gets served.
4.2 Scenario 2
For the second scenario, UA again tries to serve a coffee and cake menu of the user's choice. The menu item coffee is served by CMA and the cake is served by MA. UA establishes a contract with CMA. However, MA is out of cake flour, which is essential for serving a cake. It requests some from UA; UA cannot provide it and, after consulting the domain ontology, offers bread flour, which is a replacement for the original ingredient. Once again, after all contracts are established, UA fulfills the conditions of the contracts and gets served.
4.3 Scenario 3
The third scenario begins similarly to the second one. UA tries to establish contracts for the coffee and cake menu. It establishes one with CMA. MA asks for cake flour, which is an ingredient needed to make the cake. Not being able to provide the cake flour, UA offers bread flour. However, this time MA does not find the substitute similar enough to replace the original item, so UA has to look for another agent that can provide the cake.
5.2 Execution of Scenario 2
For the scenario described in Section 4.2, the flow of communication is depicted in Fig. 7. UA sends relevant request messages to start conversation (M 1 and M 2). Mixer Agent immediately makes a request for cake flour, since it does not have the necessary amount of flour to bake the cake (M 3). Unfortunately, UA cannot provide cake flour, but it consults the domain ontology for the most similar item and it finds out that it is the bread flour and luckily, it can provide bread flour, so it makes a contract request back with bread flour as condition and cake as proposition (M 5). The substitute satisfies MA and it accepts to take part in the contract (M 6). So, UA establishes all contracts that it needs to do, since CMA has already accepted the request with M 4. UA sends inform messages to both agents, satisfying the conditions of the contracts (M 7, 8). After receiving the conditions, agents serve the propositions of their contracts (M 9, 10).
5.3 Execution of Scenario 3
The communication flow between agents for the scenario described in Section 4.3 is represented in Fig. 8. The scenario starts with UA’s sending contracts requests to service providers MA and CMA (M 1, 2). CMA sends a confirmation (M 3) whereas MA requests another contract, demanding cake flour to provide cake (M 4). Unable to provide cake flour, UA requests yet another contract, offering bread flour to get some cake from MA (M 5). MA does not find bread flour similar enough to cake flour, so it rejects the contract offered by UA (M 6). UA searches for another agent that can provide cake service, so it discovers FA. It makes a request (M 7) and receives confirmation in return (M 8).
After FA's confirmation, UA has confirmations for all the contracts of the cake and coffee bundle, so it fulfills the conditions of the contracts (M 9, 10). FA provides the service it is committed to serve (M 11); however, CMA gets broken and cannot provide the service. After a certain time, UA gives up hope on CMA and starts looking for a new agent to provide the same service.
6. DISCUSSION
The main contribution of our work is to dynamically generate and use contracts to ensure that the user's needs are satisfied in a dynamic environment. Unlike Hagras et al., we do not assume that any agent serving a person must always and immediately carry out any requested actions [6]. Instead, we develop a model for an open dynamic system where the continuity of the services is secured, even when some agents stop working or leave the system, without this being noticed by the user.
Although it is assumed that the agents are willing to cooperate under certain conditions in Section 5, the model which is represented in this paper does not have a predefined communication protocol. The existence of less or more cooperative agents in the system does not destroy the system’s ability to operate. We can also say that the agents in the system do not have designated roles, as they can change the services provided by them.
We benefit from the ontologies to achieve a high degree of interoperability; however, the contracts that are generated in the system are not kept in ontologies as in the case of Fornara and Colombetti [4]. Since the evolution of the contracts is handled by the agents, our model deliberately lacks a central monitoring system with access to the information on all of the transactions. Hence, contracts are also kept independently.
Unlike some AmI frameworks such as Amigo [9], our application does not offer a low-level interoperability structure. In the Amigo framework, agents have no option but to provide their services when their relevant methods are called by the other agents. Also, in that framework, the exact structure of the provider agent's service methods, such as the parameters and names, must be known by the demanding agent. Instead of such a framework, we provide a high-level interaction model where agents willingly provide their services or not. It is not necessary for the demanding agent to know the details of the provider agent's methods.
Future work may include the development of a policy for exceptions. The sanctions that will be applied to an agent that does not follow a contract should be set to avoid the abuse of the system. Also, the cancellation and release policies for agents should be defined, so that the agents can inform the other party when they cannot deliver the services they are committed to.
7. REFERENCES
Case-based Reasoning for Experience-based Collaborative Risk Management
Nielsen L. R. Machado, Luís A. L. Silva, Lisandra M. Fontoura
Programa de Pós-Graduação em Informática
Universidade Federal de Santa Maria – UFSM
Santa Maria, Brazil
{nielsenmachado, silva.luisalvaro, lisandramf}@gmail.com
Abstract - In a collaborative risk management scenario, project stakeholders often need natural forms of recording and reusing past risk management experiences so that they could better assess whether there are threats to the goals of new projects. The contribution of this paper is to propose an enhanced case-based reasoning (CBR) approach to support project participants to exploit such experiences which are here expressed as collaborative risk management discussion cases. The paper shows how these debates are structured through the exploitation of a dialogue game protocol for risk management. Then, it discusses how users can utilize queries based on facts and arguments so that past risk discussion cases could be retrieved from a case base. Attention is also given to case-based explanation templates, which are relevant for the understanding of key moves of argumentation in debate trees recorded in such enhanced cases retrieved. To demonstrate the practical utility of this approach, a case study involving the collaborative experience-based risk management of a software project is discussed.
Keywords: Case-Based Reasoning; Collaborative Risk Management Tool; Dialogue Game; Argumentation.
I. INTRODUCTION
Management of risks in new software projects is most effective if one can draw on concrete instances of dealing with risks in past projects. Once such experiences are collected and represented systematically in a reusable memory, project stakeholders involved in collaborative risk discussion tasks have the means of constructing a better assessment of the risks to their project’s goals, considering consequences in case these risks occur and, in general, taking actions to guarantee that the associated threats can be controlled more effectively. The overall goal is to fully exploit these past experiences in order to not only learn relevant lessons from past risk management episodes but also to avoid the repetition of past risk management mistakes in current projects.
Some degree of experience-based risk management is achieved through the exploitation of different Artificial Intelligence (AI) techniques (e.g. [1, 2]). These kinds of approaches allow one to learn from data of past software projects so as to better understand the relevance of risks in new projects. Moreover, security risk management methodologies such as Magerit [3] (in step 2: threats), for instance, state that past experiences should be exploited in the identification of threats to the assets of a project. In this context, Case-Based Reasoning (CBR) [4, 5] is an AI problem-solving paradigm which emphasizes the role of past experiences in the solution of future problems. By its nature, CBR is well adapted to the capture and reuse of factual and/or prescriptive information along with a solution for a risk management problem. Moreover, if stakeholders debate or argue about aspects of their project, CBR can easily accommodate any knowledge that these arguments express explicitly.
The initial claim of our approach for risk management is that collaborative debates can be expressed as an argumentation process [6]. Argumentation is understood as a process of dialogue in which different project participants are able to present different kinds of arguments when stating or justifying the assessment of risk management issues in a collaborative risk management discussion. In this dialectical context, we capture and organize such risk discussions in the body of enhanced cases for CBR according to a dialogue protocol, or “dialogue game” [7, 8]. This dialogue game is expressed as a set of moves of argumentation (e.g. locution acts such as propose, ask, inform, etc) that occur when multiple agents (human and/or computational agents) are engaged in a debate. As presented in our initial paper [9], a dialogue game for risk management was designed to mediate the interaction of project stakeholders when these agents develop collaborative risk management tasks. Our approach captures and makes use of a dialogue form (the dialogue, or set of locution acts) which people working on risk management etc. use naturally in their own discussions and debates among themselves, i.e. it respects the kind of knowledge representation that is informally closest to what they actually use in practice.
In this paper, we discuss a CBR extension which captures not only factual project characteristics but also the structure and types of the arguments observed in past stakeholders’ discussions about how to construct solutions (e.g. risk management plans). We also discuss how project stakeholders can obtain personalized views of the results obtained when different kinds of queries are executed. In particular, the personalization resources proposed here rely on the exploitation of a set of case-based explanation templates, in a scenario in which CBR becomes a form of explanation-based reasoning [10]. In our project, these templates capture meaningful combinations of moves of dialogue, or locution acts, that are exploited in risk management discussions recorded in enhanced
cases. Such explanation templates are relevant for project stakeholders when inspecting and, consequently, forming an understanding of key risk management steps of argumentation advanced in risk debates. We also describe how these case-based query and explanation resources can be exploited by different project stakeholders when a web-based Risk Discussion System (RD System) is used. To illustrate our approach, we describe a case study in which a risk management project is analyzed by different project participants.
This paper is organized as follows: Section II presents some background information about risk management, argumentation and CBR. Section III discusses a) the formation of enhanced cases through facts and arguments, b) the alternative query forms utilized to find past cases so as to support users when deliberating solutions for new risk management problems and c) the exploitation of explanation templates for improving the users’ understanding of risk management discussion cases. Section IV presents a case study to illustrate the utility of our approach. Section V briefly reviews the paper’s proposals and presents some future steps in our project.
II. CBR, ARGUMENTATION AND EXPLANATION IN THE CONTEXT OF COLLABORATIVE RISK MANAGEMENT
Risk management requires the development of activities for the identification of possible problems and their causes, the analysis of risk probability and impact resulting in the prioritization of more critical risks, the construction of plans for the mitigation or even elimination of prioritized risks and the execution and monitoring of risk management plans [11]. In this context, the CBR paradigm for problem-solving supports the solution of new problems through the systematic collection and representation of cases in a case base, the retrieval of similar cases to a given problem situation, the reuse of past case solutions retrieved in the solution of new problems, the adjustment of these solutions to the context of these new problems when necessary and the retention of experiences of problem-solving in the case base so that the associated software system can improve its problem-solving capabilities [4, 5].
According to Kolodner [5], a case captures a contextualized piece of knowledge representing an experience that teaches a lesson fundamental to achieving the goals of the reasoner. Intuitively, it is possible to observe that “lessons” are likely to be offered by domain users in terms of different kinds of arguments, although such arguments still do not have places in traditional frameworks for CBR. Traditionally, only the key factual characteristics of a problem situation are utilized to index the information content of cases for typical CBR applications. As in other machine learning contexts, these case properties are usually encoded as a vector of attributes and values. Then, when users want to retrieve such cases from a case base, they make use of similarity-based forms of computing the distance between current and past cases [4, 5]. Although similarity can be computationally evaluated in different forms, the weighted Euclidean distance is one of the simplest forms of distance assessment used in CBR applications (see [12] for other forms of encoding case features and computing distances between case-like entities). In a CBR application, the overall idea is to retrieve the most similar cases to a given problem situation by stating a query which represents the current problem, or target case, to be solved. That is because CBR is built on the hypothesis that “similar problems have similar solutions”, which is a hypothesis that can be exploited in different application problems.
In addition to standard factual attributes, cases can also capture arguments and argumentation processes. These kinds of augmented cases are usually found in studies of the nature of argumentation in legal applications of CBR (e.g. [13]), although frameworks involving the use of cases in assisting the generation and evaluation of arguments have been reported in other application scenarios (e.g. [14]). Alternative argumentation models [8] have also been exploited in the development of intelligent systems for supporting the solution of problems in these argumentation applications. In this context, as described in [9], an argumentation process in which a debate is developed among different agents can be organized in the body of a case through the exploitation of a dialogue game model [7, 8]. These dialogue protocols are mainly defined by a set of locution acts, which are typical moves of speech used by these agents. In addition to such a set of locution acts, these protocols are described by rules expressing how these locutions can be combined (e.g. which locutions can be used as responses to certain locutions). The interaction of two or more project participants involved in a debate can be mediated by such communication protocols.
According to [6], argumentation and explanation are interlinked activities when one is deliberating a solution for a problem. Among other reasons, they are often combined because certain users lack relevant knowledge to understand arguments being posed in a problem-solving situation, as well as because users are likely to pose arguments which express different considerations and explanations with respect to a decision. In effect, when pro and con arguments of a solution for a case-based problem are presented to users, they can review these arguments in order to form an understanding of the rationale behind certain decisions, as exploited in design problems [15], for instance. In practice, just the exploitation of cases that are similar to a current problem situation is often the most effective form of explaining a solution in different application domains. It is also relevant to observe that explanations that are constructed on the grounds of past experiences are likely to be more convincing than standard rule-based explanations, as experimental results described in [16] show. In summary, the overall idea is to collect and record users’ arguments systematically in augmented cases for CBR so that one can exploit the content of these cases to provide additional layers of explanation on top of these argumentative structures.
III. AN ENHANCED CBR APPROACH FOR COLLABORATIVE RISK MANAGEMENT
The exploitation of a dialogue game for the development of risk management tasks permits project stakeholders to discuss collaboratively the risks of a software project. In such a debate, project participants can express their opinion about causes and effects of risks, in addition to designing plans so that risks identified can be minimized. Through the exploitation of this dialogue protocol, participants in risk debates can reach a consensus while deciding which actions shall be taken in each risk situation. In this context, the CBR resources of our
RD System permit these users to examine and reuse concrete problem-solving experiences of risk management in the determination of risks in current projects.
A. How enhanced cases are formed when collaborative risk management tasks are developed
Concrete cases of collaborative risk management are recorded in a reusable risk management memory, which takes the form of a case base in the RD System. As is usual in CBR, we represent the cases of this case base through a list of factual properties. To identify which properties we needed to utilize, we started from the observation that recent software projects are characterized in agile and planned scenarios. Depending on the context in which a project is developed, each of these methodologies has its strong and weak points, which allows one to realize that contextual factors have a crucial role in the characterization of a project. Authors such as Boehm and Turner [17] describe an approach based on risks for balancing the use of agile methods and planned methods in a project. Among others, attributes such as size, criticality, dynamism, skills of the team and culture are utilized in the characterization of a project. Additional attributes can be utilized so as to distinguish risk management cases obtained when different kinds of companies are considered.
Risk assessment on the localization project
<table>
<thead>
<tr>
<th>Characteristics of the Project:</th>
</tr>
</thead>
<tbody>
<tr>
<td>Team Size: Very Small</td>
</tr>
<tr>
<td>Criticality: Money Loss</td>
</tr>
<tr>
<td>Team Distribution: Collocated</td>
</tr>
<tr>
<td>Rate of Change: Low</td>
</tr>
</tbody>
</table>
Discussion:
Start discussion: Risk assessment of a localization algorithm involving sensors - Programmer 1
Propose risk: The final product will not correspond to client expectations - Programmer 1
Ask: Are the project requirements defined properly? - Programmer 1
Inform: The requirements are okay but the time to develop this project is not enough - Technical Manager
Propose consequence: An unhappy client may try to find other people to develop this project; our competitors for instance - Technical Manager
Argument pro: If the client of this project is not happy with us, we may lose the project funds and the client - Programmer 1
Propose probability: High - Programmer 1
Propose impact: High - Programmer 1
... Propose plan: We need to record the changes of requirements more formally - Programmer 2
Summarize: Requirements that are defined correctly, recording of requirement changes, new meetings in case the client is not happy and the frequent presentation of results to the client - Programmer 1
Propose risk: The timetable of the project is not real - Programmer 1
Propose probability: Medium - Technical Manager
... Propose plan: We should discuss again the project timetable and the project requirements - Programmer 2
Figure 1. An example of a risk management discussion case
In the work of Kruchten [18], a model called Octopus is proposed to characterize projects, where the key factors are: Team Size, Team Distribution, Criticality and Rate of Change. As these contextual factors are important in the area of software processes, they are also relevant for the indexing of enhanced cases. In practice, the risks of a project depend on such contextual factors, and the efficacy of actions to prevent them is also influenced by the project context.

In addition to factual properties, we make it possible to enhance the body of each risk management case by exploiting argumentation-based characteristics. Integrating CBR and argumentation techniques, these novel case representation characteristics are grounded on the set of arguments that are collected and represented when our dialogue protocol for collaborative risk management [9] is exploited by project stakeholders. To present key locution acts of this dialogue protocol, we can examine a risk discussion case (see Fig. 1) in which different users deliberate the risks of a software project. In this argumentation model for capturing arguments in cases, the general format of a single argument is: i) an identification of a locution act from our dialogue protocol, ii) a textual risk management statement (i.e. informal statements of the type that most users in this application domain are able to naturally offer) and iii) an identification of the discussion participant that is advancing such an argument.

These kinds of argument can be advanced by any discussion participant of a risk management debate, which is started with “Propose risk” locutions. These argumentation moves are the root indexing concepts in the risk management discussion tree which is recorded in the body of an enhanced case. To gather information about such risks, a project participant can present question-like statements by using “Ask” locution acts, which can be answered via “Inform” locutions. These inform-like arguments can also be exploited in the presentation of any other contextual information for the development of the debate. Consequences related to the occurrence of risks in a project can also be examined explicitly when our dialogue protocol is used. In doing so, these consequence-like statements are advanced by discussion participants via “Propose consequence” locution acts. At any point of a debate, participants can also pose pro and con arguments in relation to different risk management issues. To promote the critical analysis of a risk management problem situation, our protocol permits participants to advance such arguments through “Argument pro” and “Argument con” locutions. Debate participants can also analyze the probability of a risk occurrence, as well as the impact that a risk is likely to have in a project. When this happens, “Propose probability” and “Propose impact” locution acts can be exploited by project participants. In practice, probability and impact estimates can be presented qualitatively or quantitatively, or both. Once the initial steps of risk identification and analysis are completed, risk management plans should be constructed for the most relevant risks, or prioritized risks. In the protocol, risk mitigation, reduction, etc. statements, for instance, can be advanced by discussion participants through “Propose plan” locutions. A participant can also summarize different aspects of a risk management discussion through “Summarize” locutions.
In summary, the set of locution acts available in our dialogue protocol for collaborative risk management expedites the debate of different risks in a project and the consequent recording of this discussion as a semi-formal argumentation model in the body of enhanced cases for use in CBR. As described here, this protocol is fully implemented in the RD System.
B. How risk discussion cases are retrieved from the case base so as to support the solution of new problems
The systematic recording of collaborative risk management experiences in a reusable memory is a fundamental issue for
an approach which promotes the exploitation of these experiences in the solution of new problems. However, it is also essential to offer users alternative forms of consulting this memory. We make this possible through similarity-based queries, which can be formed and executed by discussion participants at any moment of a collaborative risk management debate. Such similarity estimates are determined when properties of the current risk management project, or query, are compared with properties recorded in past cases. Based on the enhanced case characteristics proposed, which are i) facts expressing the context of a risk management project and ii) argumentation moves from the discussion of typical risk management issues, our CBR framework allows users to retrieve the K most similar risk discussion cases to a given query. We utilize a standard “K nearest neighbor” algorithm [4, 5] and a weighted Euclidean distance function in the comparison of cases. So far in our project, an equal weighting scheme has been used, although the RD System permits the adjustment of such weights by users. For the comparison of argumentation information, our similarity algorithm matches textual statements expressed in a query with sentences recorded in past cases. In the PostgreSQL database [19], which is integrated with the RD System, this similarity assessment permits the identification of natural language arguments matching arguments recorded in past cases. In this context, two types of query methods are relevant for the retrieval of past risk management experiences.
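For concreteness, the distance computation mentioned above can be written in the standard weighted Euclidean form below. This is a textbook formulation rather than a verbatim transcription of the RD System's code; \( q_i \) and \( c_i \) denote the (suitably normalized) values of attribute \( i \) in the query and in a stored case, and \( w_i \) are the attribute weights, all equal in the current prototype:

\[
d(Q, C) \;=\; \sqrt{\sum_{i=1}^{n} w_i \,\big(q_i - c_i\big)^2}
\]

The K nearest neighbor retrieval then returns the K cases with the smallest \( d(Q, C) \); a similarity score can be derived from the distance, e.g. \( sim(Q, C) = 1 / (1 + d(Q, C)) \).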
The first query method is standard in CBR applications. In it, a user should input the factual properties of the current project (see Fig. 2 – A). An example of such query statement can involve a project which is very large (“Project Size = Very Large”), with many participants (“Team Size = Large”) that are scattered in different locations (“Team Distribution = Geographic Distribution”). In this project, a problem is that the project participants are uncertain about what risks should be considered (i.e. identified in a relevant list for future analysis) in the project and, consequently, what sort of things could fail in the project development. To solve this problem, discussion participants utilize the project properties mentioned as query parameters. When this query is executed, experiences of collaborative risk management involving projects with similar set of properties are likely to be retrieved from the case base. It means that the most similar collaborative risk management cases retrieved will be shaped by risk proposals (i.e. risk statements advanced by means of “Propose Risk”) that can be examined and, if useful, reused in different forms in the risk management discussion of a current project.
The second query method allows project stakeholders to construct queries that are formed by both factual and argumentation characteristics. In doing so, these users can utilize the list of factual project properties, as described in our first query method, and a set of keywords/sentences along with the corresponding locution acts that were advanced in a discussion. A typical example of such query involves the discussion of a current project that aims to develop a new feature in an existing system. So, the “Business Model” attribute of this project is “System Component”. This system also has a “high rate of changes in its requirements”. Because of this problem, the company responsible for the system maintenance is often
losing money (i.e. the “Criticality” attribute of this project is “Money Loss”). In this project, a participant needs to analyze risks regarding “software requirements that are not clearly described”. To reuse past collaborative risk management experiences in the solution of this problem, such participant constructs a query involving both the “Propose risk” and “Propose plan” locution acts. Along with the locution “Propose risk”, this user inputs the sentence: “unclear specification of software requirements”, expressing the kind of risk that this participant wants to review in similar risk management experiences retrieved from the case base (see Fig. 2 – C). This user also inputs the following keyword in the query statement: “software requirements”. This keyword is described along with the “Propose plan” locution act, expressing that the participant wants to review past risk management plans involving this subject. When this query is executed, previous cases of collaborative risk management are retrieved from the case base. In particular, these cases may contain “Propose risk” and “Propose plan” moves of dialogue in which those keywords/sentences mentioned were advanced by discussion participants. As a result, information regarding risk proposals and risk management plans can be reused in different forms as, for instance, in the construction of new arguments to be advanced in the current debate.
C. How explanation templates can be utilized by users in the understanding of a case retrieved for a given query
In our project, project stakeholders are able to exploit a new form of case-based explanation with the purpose of locating and highlighting relevant argumentation information in past risk management cases retrieved for a given query. Among other reasons, these explanatory views of a query result were motivated by the fact that argumentation trees representing collaborative risk management discussions might contain a large number of speech moves, which can hinder a user’s comparison between current and past cases when analyzing a query result. Therefore, we give users access to alternative template forms in order to supplement their retrieval results and, consequently, ease the reuse of past arguments in the discussion of new risk management problems. In doing so, such a template format provides a standard for the presentation of selected risk management locution moves available in an enhanced case. This explanation is possible since the argumentative structure of enhanced cases already provides an explicit history, or a sort of narrative explanation [10], for the risk management decision-making process. What a template does, in this case, is to offer users ways of highlighting relevant argumentation steps available in these narratives (see Fig. 2 – B).
An explanation template has a name, a textual description of what the template aims to emphasize in a discussion tree (e.g. what needs to be explained, what interests a user), and how it is done through the selection of a set of locution acts belonging to the dialogue protocol used. A template can be created, recorded and adjusted by a knowledge engineer user of the RD System. In a scenario of a template utilization, for instance, the most similar case for a given query is shown to a user. However, it is presented only through the template contents of certain locution acts, e.g. as selected for its relevance to the user in a suitably personalized consultation with the system.
The template “key risk management tasks”, for instance, locates and highlights the project participants’ arguments advanced for the solution of risk identification, risk analysis and risk response planning problems, while other arguments in a debate are temporarily hidden when a retrieved case is presented to a user. To achieve the template goal, this template structure should be described in terms of locution acts: “Propose risk”, “Propose impact”, “Propose probability” and “Propose plan”, which are the key risk management locutions available in the protocol. As this example shows, the template representation relies on a pre-defined mapping between the description of the template’s goal and the specific set of locution acts that are able to present a solution for such an explanation goal (see Fig. 2 – D). However, if users desire more information about any argument in a retrieved case, independent of whatever personalization has been performed, they can scan through the entire content of the selected case, seeing the complete tree of the discussion that is recorded there. To help with the search for relevant risk management information in this tree, the RD System permits users to filter or expand the nodes or tree branches of these debates.
IV. A CASE STUDY OF COLLABORATIVE EXPERIENCE-BASED RISK MANAGEMENT
This illustrative case study involves a software project aiming to develop a web-based system for supporting a selected set of company managers in obtaining the geographic location of certain employees of a company. The main problem of this project is that the software requirements are not described comprehensively, since the project’s clients are not sure about the resources that this system would need to have so that the privacy of the company employees is not violated in particular situations. No experience with such a system was available, either among the project’s clients or within the development team involved in the construction of this system. It means that the reuse of past risk management experiences amounts to a form of identifying risks that these participants would not otherwise be aware of in such a new kind of project for this team. This is one of the reasons why we chose to discuss this case study. The contextual properties of this project are: “c1: Team Size = Very Small”, “c2: Team Distribution = In House”, “c3: Criticality = Comfort Loss” and “c4: Rate of Change = High”.
In our work, we describe clear and intuitive resources for participants of a collaborative risk discussion (e.g. a project manager, a technical leader) to exploit past experiences in the enhancement of risk management tasks of a new project. In this case study, we show how our extended CBR approach can support the development of identification, analysis and planning tasks of risk management. We show how both queries based on facts and queries based on facts and arguments can be utilized by project stakeholders. With cases retrieved from the case base as a result of such queries, explanation templates are utilized to facilitate the comparison between the target problem and the most similar cases retrieved and, consequently, to improve the reuse of past experiences.
A first task in the risk management of the target project is the identification and analysis of threats that are likely to occur. By using our approach, a project manager can analyze what kinds of risks occurred in past projects whose factual characteristics (c1 – c4) are similar to the ones described in the target project. To assess the most similar cases retrieved, the “key risk management tasks” explanation template can be utilized because it highlights the essential risk management arguments in a debate. Two substantially relevant past cases, with similarity measures 90.1% (‘p1’) and 79.5% (‘p2’), are retrieved. Considering only p1
because of its high rating, the risks thus identified are: “p1.r1: The final product will not correspond to client expectations”, “p1.r2: The algorithms that were defined may not be adequate to the functional requirements of the system”, “p1.r3: There is a difficulty which is the construction of realistic scenarios and collection of testing data” and “p1.r4: The client of the project and the members of the development team have different views about the same requirements”. Of the four risks, the project manager determined that only r2 did not apply to the new project under consideration. The effectiveness of plans for the resolution of risks is a central topic in risk management. In this case, it is important to observe that management plans that were successful in past projects can be reused in new projects. To construct new plans, project participants can make a new kind of CBR query in the RD System. In it, both contextual project characteristics represented as factual properties and keywords or sentences occurring in the argumentative analysis of prioritized risks can be utilized. In this case study, the technical leader of the project stated such query by using both i) the characteristics c1-c4 and ii) the locution act “Propose risk” along with the keywords “s1: requirements” and “s2: client expectations”, as well as the locution act “Propose plan” with “s3: testing and experience”. The explanation template that was selected by this user was the “Risk and plan proposals” since it emphasizes risks and treatment plans, which is the information that the technical leader is looking for. The retrieval that followed brings up a most similar (86.3%) case p3. Its associated management plan proposals are: “p3.mp1: We need to record the changes of requirements more formally”, “p3.mp2: The requirements should be documented properly”; “p3.mp3: To use small examples in order to guide the test of the system”; “p3.mp4: We can have some training before the project starts” and “p3.mp5: We should get support from the technical leader of this project”. The technical leader accepted the immediate relevance of items 1, 2 and 4. In addition, mp3 suggested a consideration specially adapted to the conditions of the new project under analysis: “small examples could be developed and linked to the use cases of the project, as well as the test cases of the system.”
V. CONCLUDING REMARKS
Collaboration and the reuse of experiences from past projects are crucial needs for users who aim to achieve effective risk management. Project stakeholders’ experience is naturally offered when these users advance different arguments in the collaborative debate of a project’s risks. Experience is also systematically captured when concrete risk management cases of problem-solving are shaped not only by the factual information that characterizes a project but also by the arguments advanced during its risk management discussions.
Future work involves making the RD System (and example risk discussion cases) accessible via the web, in addition to the development of evaluation experiments in collaborative risk management scenarios so that one could investigate forms of applying our approach in practice. We also plan to develop new explanation techniques for CBR and argumentation in order to further improve the explanation capabilities of our CBR approach for experience-based collaborative risk management.
REFERENCES
---
An Incremental Parallel PGAS-based Tree Search Algorithm
Tiago Carneiro, Nouredine Melab
To cite this version:
Tiago Carneiro, Nouredine Melab. An Incremental Parallel PGAS-based Tree Search Algorithm. HPCS 2019 - International Conference on High Performance Computing & Simulation, Jul 2019, Dublin, Ireland. hal-02170842
HAL Id: hal-02170842
https://hal.archives-ouvertes.fr/hal-02170842
Submitted on 2 Jul 2019
Abstract—In this work, we show that the Chapel high-productivity language is suitable for the design and implementation of all aspects involved in the conception of parallel tree search algorithms for solving combinatorial problems. Initially, it is possible to hand-optimize the data structures involved in the search process in a way equivalent to C. As a consequence, the single-threaded search in Chapel is on average only 7% slower than its counterpart written in C. Whereas programming a multicore tree search in Chapel is equivalent to C-OpenMP in terms of performance and programmability, its productivity-aware features for distributed programming stand out. It is possible to incrementally conceive a distributed tree search algorithm starting from its multicore counterpart by adding a few lines of code. The distributed implementation performs load balancing among different computer nodes and also exploits all CPU cores of the system. Chapel presents an interesting trade-off between programmability and performance despite the high level of its features. The distributed tree search in Chapel is on average 16% slower and reaches up to 80% of the scalability achieved by its C-MPI+OpenMP counterpart.
Index Terms—High-productivity Language, PGAS, Chapel, MPI+OpenMP, Tree Search Algorithms.
I. INTRODUCTION
Tree-based search algorithms are strategies that implicitly enumerate a solution space, dynamically building a tree. This class of algorithms is often used for the exact resolution of permutation combinatorial optimization problems (COP), and it is present in many areas, such as operations research, artificial intelligence, bioinformatics, and machine learning [1], [2]. Algorithms that belong to this class are compute-intensive and highly irregular, which demands hand-optimized data structures for efficient single-core utilization and load balancing between processes [3]–[5].
High-productivity languages historically suffer from severe performance penalties, do not provide low-level features, and are not suited to parallelism [6], [7]. Therefore, they are not often employed within the scope of parallel tree search. Instead, this kind of algorithm is frequently coded in either C or C++, and different libraries and programming models are combined for exploiting parallelism [8], [9].
Among the high-productivity languages, Chapel is one that stands out. It was designed for high-performance computing, and it is competitive to both C-OpenMP and C-MPI+OpenMP in terms of performance, considering different benchmarks [10]. The objective of the present research is to investigate whether Chapel is suitable for the design and implementation of all aspects involved in the conception of a parallel tree search algorithm for solving combinatorial problems. To the best of our knowledge, the present research is the first one that investigates the use of a high-productivity language for this purpose.
The experimental results show that Chapel is a suitable language for the design and implementation of parallel tree search algorithms. It is possible to hand-optimize the data structures involved in the search process. As a consequence, the single-threaded search in Chapel is on average only 7% slower than its counterpart written in C. Whereas programming a tree search in Chapel is equivalent to C-OpenMP in terms of performance and programmability, its productivity-aware features for distributed programming stand out.
Thanks to Chapel’s global view of the control flow and data structures, it is possible to conceive a distributed tree search starting from its multicore counterpart by incrementally adding a few lines of code. The distributed implementation performs load balancing among different processes and also uses all CPU cores that a computer node has. Despite the high level of its features, the distributed tree search in Chapel is on average 16% slower and reaches up to 80% of the scalability achieved by its C-MPI+OpenMP counterpart.
The remainder of this paper is structured as follows. Section II brings background information and the related works. Section III presents the incremental and PGAS-based distributed tree search algorithm. In turn, Section IV and Section V present the multicore and distributed evaluations, respectively. Next, Section VI brings a discussion of the results obtained in Sections IV and V. Finally, conclusions are outlined in Section VII.
II. BACKGROUND AND RELATED WORKS
A. The Chapel Programming Language
Chapel is an open-source parallel programming language designed to improve the programmability for high-performance computing. It incorporates features from compiled languages such as C, C++, and Fortran, as well as high-level elements related to Python and Matlab. The parallelism
is expressed in terms of lightweight tasks, which can run on several locales or a single one. In this work, the term locale refers to a symmetric multiprocessing computer in a parallel system [11].
In Chapel, both global view of control flow and global view of data structures are present [10]. Concerning the first one, the program is started with a single task and parallelism is added through data or task parallel features. Moreover, a task can refer to any variable lexically visible, whether this variable is placed in the same locale on which the task is running, or in the memory of another one. Concerning the second one, indexes of data structures are globally expressed, even when the implementation of such data structures distributes them across several locales. Thus, Chapel is a language that realizes the Partitioned Global Address Space (PGAS) programming model [12].
Finally, indexes of data structures are mapped to different locales using distributions. In contrast to other PGAS-based languages, such as UPC and Fortran, Chapel also supports user-defined distributions [13].
B. Tree-based Search Algorithms
Tree-based search algorithms are strategies that implicitly enumerate a solution space, dynamically building a tree [2]. The internal nodes of the tree are incomplete solutions, whereas the leaves are solutions. Algorithms that belong to this class start with an initial node, which represents the root of the tree, i.e., the initial state of the problem to be solved. Nodes are branched during the search process, which generates children nodes more restricted than their parent node. As shown in Fig. 1, generated nodes are evaluated, and then, the valid and feasible ones are stored in a data structure called Active Set.
At each iteration, a node is removed from the active set according to the employed search strategy [1]. The search generates and evaluates nodes until the data structure is empty or another termination criterion is reached. If an undesirable state is reached, the algorithm discards this node and then chooses an unexplored (frontier) node in the active set. This action prunes some regions of the solution space, keeping the algorithm from unnecessary computation.

The degree of parallelism of tree-based search algorithms is potentially very high, as the solution space can be partitioned into a large number of disjoint portions, which can be explored in parallel. As these algorithms are compute-intensive, diverse strategies have been used for improving performance, such as instruction-level parallelism, architecture-specific code optimizations and problem-specific data structures [3]–[5], [14]. Thus, parallel tree-based search algorithms are frequently written in C/C++, due to their low-level features and supported parallel computing libraries [8]. In the context of distributed algorithms, the performance-aware strategies mentioned above are combined with distributed programming libraries for implementing load balancing and explicit communication between processes [9], [15], [16]. As a consequence, programming distributed tree search algorithms can be challenging and time-consuming.
III. PARALLEL TREE-BASED SEARCH ALGORITHMS IN CHAPEL
A major objective of Chapel concerning productivity is allowing distributed programming using concepts close to the ones of shared-memory programming [10]. In this section, a multicore and single-locale tree search algorithm is initially proposed. Then, it is incrementally extended using Chapel’s productivity-aware features for distributed programming.
A. Algorithm Overview
This work focuses on permutation combinatorial problems, for which an N-sized permutation represents a valid and complete solution. Permutation combinatorial problems are used to model diverse real-world situations, and their decision versions are often NP-Complete [1], [16].
This section presents two backtracking algorithms for enumerating all complete and feasible solutions of the N-Queens. Backtracking is a fundamental problem-solving paradigm that consists in dynamically enumerating a solution space in a depth-first fashion. Due to its low memory requirements and its ability to quickly find new solutions, depth-first search (DFS) is often preferred [1].
The N-Queens problem consists in placing N non-attacking queens on a \( N \times N \) chessboard, and it is often used as a benchmark for novel tree-based search algorithms [14], [17]. The N-Queens is easily modeled as a permutation problem: position \( r \) of a permutation of size \( N \) designates the column in which a queen is placed in row \( r \). Furthermore, the concepts herein presented are similar to any permutation combinatorial problem and can be adapted for solving other problems of this class with straightforward modifications [4], [5].
B. The Single-locale Multicore Implementation
Algorithm 1 presents a pseudocode for the single-locale backtracking in Chapel. The algorithm starts receiving the problem to be solved (line 1) and the cutoff depth (line 2). Then, it is required to generate an initial load for the parallel search. For this purpose, task 0 performs backtracking from depth 1 (initial problem configuration) until the cutoff depth \( cutoff \), storing all feasible, valid, and incomplete solutions at depth \( cutoff \) in the active set \( A \) (line 4). After generating
the initial load, the parallel search strategy begins through a `forall` statement (line 5).

As one can see in Fig. 2, nodes in the centralized active set *A* are assigned to tasks in chunks. Each task has its own active set and executes a backtracking search strategy. In turn, nodes are used to initialize the backtracking, which enumerates the solution space rooted at a node. Load balancing is done through the iterator (`DynamicIters`) used to assign indexes of *A* to tasks, as in OpenMP.

Metrics are reduced through *reduce intents*. In Chapel, it is possible to use the tuple data type (equivalent to a C struct) and reduce all metrics at once (line 6). Differently from OpenMP, it is not required to define a custom tuple reduction. Finally, the parallel search finishes when the active set *A* is empty.
**Algorithm 1:** The multicore tree search algorithm.

1. I ← get_problem()
2. cutoff ← get_cutoff_depth()
3. A ← ∅
4. A ← generate_initial_active_set(cutoff, I)
5. forall node in A with (+ reduce metrics) do
6.    metrics += tree_search(node, cutoff, I)
7. end
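For illustration, a minimal Chapel sketch of this loop is given below. It is not the authors' code: `Node` and `treeSearch` are reduced to placeholders standing in for the hand-optimized kernel of Section III-D, and `multicoreSearch` is a name introduced here.

```chapel
use DynamicIters;

// Placeholder node type and search kernel; the real ones are described in
// Section III-D. Here treeSearch just reports one visited node and no solutions.
record Node { var bitset: uint(32); }
proc treeSearch(n: Node, cutoff: int): (uint, uint) { return (0:uint, 1:uint); }

proc multicoreSearch(A: [] Node, cutoff: int): (uint, uint) {
  var metrics: (uint, uint);                    // (solutions found, tree size)
  // dynamic() deals out chunks of indexes to tasks, much like OpenMP's
  // schedule(dynamic); the reduce intent merges each task's partial tuple.
  forall i in dynamic(A.domain.low..A.domain.high, chunkSize=1)
      with (+ reduce metrics) do
    metrics += treeSearch(A[i], cutoff);
  return metrics;
}

// Usage: search a toy active set of 1000 root nodes with cutoff depth 4.
var A: [0..#1000] Node;
writeln(multicoreSearch(A, 4));
```

The tuple is reduced element-wise, which is what the paper means by reducing all metrics at once without declaring a custom reduction.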
C. The Multi-locale Implementation

One can see in Algorithm 2 a pseudocode for the distributed tree-based search algorithm in Chapel. Thanks to Chapel’s global view of control flow, the search also starts serially, with task 0 generating the initial load to populate the active set \( A \) (line 4). To make it possible to distribute the nodes of \( A \) across several locales, it is required to define a domain (line 5) and to indicate how the indexes of this domain are mapped across different locales (line 6). In this work, only standard distributions are used (see https://chapel-lang.org/docs/modules/layoutdist.html). Finally, the distributed active set \( A_d \) of type Node is defined over the mapped domain \( D \) (line 8).

After the initial load generation, the nodes of \( A \) are distributed by using a parallel `forall` (line 9), which generates the distributed active set \( A_d \). Thanks to Chapel’s global view of \( A_d \), the indexes of both active sets are directly accessed in line 10. Moreover, as shown in Fig. 3, \( A_d \) is an abstraction: the distributed active set \( A_d \) consists of several sets \( A_d^i \), \( i \in \{0, ..., l - 1\} \), where \( l \) is the number of locales on which the application is going to run.

The parallel search takes place in line 12. As one can see in Algorithm 2, its `forall` is similar to the one of Algorithm 1. However, distributed iterators are used instead (`DistributedIters`). Additionally, the distributed search exploits two levels of parallelism, and the compiler is also responsible for generating the code that exploits all CPU cores a locale has. Finally, the metrics are reduced in the same way as in the single-locale algorithm.
**Algorithm 2:** The multi-locale tree search algorithm.

1. I ← get_problem()
2. cutoff ← get_cutoff_depth()
3. A ← ∅
4. A ← generate_initial_active_set(cutoff, I)
5. Space ← {0, ..., |A| − 1}
6. D ← Space mapped according to a standard distribution
7. A_d ← ∅
8. A_d ← [D] Node
9. forall s in Space do
10.    A_d[s] ← A[s]
11. end
12. forall node in A_d, following the distributed iterator, with (+ reduce metrics) do
13.    metrics += tree_search(node, cutoff, I)
14. end
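Again as an illustration only, the sketch below shows how few additions Algorithm 2 makes on top of the multicore version: a Block-distributed domain, a scatter loop, and a distributed iterator in the `forall`. The stubs are the same placeholders as before, and the `dmapped Block(...)` idiom corresponds to the Chapel 1.18 generation used in the paper.

```chapel
use BlockDist, DistributedIters;

record Node { var bitset: uint(32); }              // placeholder (see Section III-D)
proc treeSearch(n: Node, cutoff: int): (uint, uint) { return (0:uint, 1:uint); }

proc multiLocaleSearch(A: [] Node, cutoff: int): (uint, uint) {
  const Space = {0..#A.size};                       // A is assumed to be 0-based
  const D = Space dmapped Block(boundingBox=Space); // lines 5-6: standard distribution
  var Ad: [D] Node;                                 // line 8: distributed active set

  forall s in Space do                              // lines 9-11: scatter A across locales
    Ad[s] = A[s];

  var metrics: (uint, uint);
  // distributedDynamic() balances chunks of indexes across locales; within each
  // locale its cores are used too, giving the two levels of parallelism of line 12.
  forall i in distributedDynamic(0..#A.size, chunkSize=1)
      with (+ reduce metrics) do
    metrics += treeSearch(Ad[i], cutoff);
  return metrics;
}

var A: [0..#1000] Node;                             // toy initial active set
writeln(multiLocaleSearch(A, 4));
```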
D. Search Procedure and Data Structures

The kernel of both parallel algorithms previously presented is based on a serial, hand-optimized backtracking for solving permutation combinatorial problems, originally written in C [4]. The serial backtracking was then adapted to Chapel, preserving the handmade optimizations, the instruction-level parallelism, the data structures, and the C types.

The data structure Node is similar for any permutation combinatorial problem. It contains an unsigned 8-bit integer vector of size cutoff, identified by board, and an unsigned integer variable. The vector board stores the feasible and valid incomplete solution. In turn, the integer variable, identified by bitset, keeps track of board lines by setting its bit n to 1 each time a queen is placed in the n-th line.
The search performed by the kernel (Algorithm 1, line 6 and Algorithm 2, line 13) is a non-recursive backtracking that does not use dynamic data structures, such as stacks. Initially $depth$ receives the value of $cutoff$. Next, $board$ and $bitset$ are initialized with the incomplete solution that $Node[i]$ contains.
The semantics of a stack is obtained by using a variable $depth$ and by trying to increment the value of the vector $board$ at position $depth$. If this increment results in a feasible and valid incomplete solution, the $depth$ variable is then incremented, and the search proceeds to the next depth. After trying all configurations for a given depth, the search backtracks to the previous one.
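A sketch of the Node record and of the feasibility test that drives this iterative search is given below. The exact field widths and the helper `isFeasible` are illustrative (the paper describes the bitset only informally); what it shows is the shape of the data structure: `board[r]` holds the column chosen at depth `r`, and the bitmask rejects already occupied lines before the diagonal checks.

```chapel
param maxCutoff = 8;                      // upper bound on the cutoff depth (illustrative)

record Node {
  var board: [0..#maxCutoff] uint(8);     // board[r] = column of the queen placed at depth r
  var bitset: uint(32);                   // bit n set once line n is occupied
}

// A queen may be placed at `depth` in column `col` when that line is still free
// (bitset test) and no previously placed queen attacks it on a diagonal.
proc isFeasible(const ref n: Node, depth: int, col: int): bool {
  if (n.bitset & (1:uint(32) << col)) != 0 then return false;
  for r in 0..#depth do
    if abs((n.board[r]:int) - col) == depth - r then return false;
  return true;
}

// Tiny usage example: an empty board accepts a queen in column 0 at depth 0.
var root: Node;
writeln(isFeasible(root, 0, 0));          // prints: true
```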
IV. A SINGLE-LOCALE PERFORMANCE EVALUATION OF CHAPEL
The primary objective of this section is to investigate the single-locale programming features and performance of Chapel.
A. Protocol
For this evaluation, the following programs were conceived for enumerating all valid and complete solutions of the N-Queens problem.
- **Multicore**: Chapel and C-OpenMP implementations of the backtracking search algorithm described in Section III-B.
- **Serial**: Chapel and C implementations corresponding to the kernel of the multicore programs above listed (refer to Section III-D).
All implementations apply the data structures and search procedure detailed in Section III-D.
B. Parameters Settings
In the experiments, both multicore and serial implementations enumerate all complete and valid solutions of the N-Queens problem, for which sizes ($N$) range from 10 to 19. The experiments take from few milliseconds to several hours of parallel processing.
The testbed operates under SMP Debian 4.9.65 64 bits, and it is composed of two AMD EPYC 7301 processors, with 32 cores @ 2.7 GHz, 64 threads, and 128 GB RAM. All C programs were compiled with gcc 6.3.0. The Chapel version used was 1.18.0.
Chapel provides three task layer implementations: qthreads (the default), the University of Tokyo's MassiveThreads, and POSIX Threads (Pthreads). A preliminary experiment was performed to verify which task layer is the most advantageous in the context of this work. It is important to point out that the task layer is chosen through environment variables, so this choice requires no coding effort. Fig. 4 shows that both MassiveThreads and Pthreads are much heavier than qthreads for the smaller tree sizes. All task layer implementations perform similarly as the size of the solution space grows.
Like OpenMP, Chapel makes available various load-balancing strategies, which are implemented as built-in iterators used in forall statements. They are close to OpenMP's scheduling policies, such as guided and dynamic. A preliminary experiment was carried out to figure out Chapel's best built-in load-balancing strategy for solving the N-Queens. Fig. 5 shows the average execution time required by the multicore Chapel backtracking to solve the N-Queens problem, taking into account different built-in load-balancing strategies. According to the results, the dynamic approach is the fastest one.
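Switching between these scheduling policies is a one-token change in the forall header. The fragment below is only a toy illustration (summing a dummy workload rather than searching a tree) of the two DynamicIters options mentioned:

```chapel
use DynamicIters;

config const n = 1000000;
var work: [0..#n] int;
work = 1;                           // dummy workload: every element contributes 1

var acc1 = 0, acc2 = 0;
// OpenMP-like dynamic scheduling with a chunk size of 1.
forall i in dynamic(0..#n, chunkSize=1) with (+ reduce acc1) do
  acc1 += work[i];
// Guided scheduling: chunk sizes shrink as the iteration space is consumed.
forall i in guided(0..#n) with (+ reduce acc2) do
  acc2 += work[i];
writeln(acc1 == acc2, " ", acc1);   // prints: true 1000000
```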
<table>
<thead>
<tr>
<th>Implementation</th>
<th>Load balancing</th>
<th>Chunk size</th>
<th>Compiler optimization</th>
<th>cutoff</th>
</tr>
</thead>
<tbody>
<tr>
<td>Chapel</td>
<td>Dynamic</td>
<td>default</td>
<td>--fast</td>
<td>4</td>
</tr>
<tr>
<td>C-OpenMP</td>
<td>Dynamic</td>
<td>default</td>
<td>-O3</td>
<td>4</td>
</tr>
</tbody>
</table>
Table I
List of best parameters found experimentally for the Chapel and C-OpenMP implementations.
Experiments were also carried out to choose a suitable cutoff depth for both the Chapel and C-OpenMP implementations. One can see in Table I the best parameters experimentally found for the Chapel and C-OpenMP implementations.
C. Results
One can see in Fig. 6 a comparison between Chapel and C serial implementations. As previously pointed out, it is possible to write in Chapel a hand-optimized code similar to C. For sizes ranging from 10 to 11, the serial implementation in Chapel is on average $1.4 \times$ and $1.21 \times$ slower than its counterpart written in C, respectively. However, as the size of the problem grows, this difference becomes much smaller. In turn, taking into account problem sizes ranging from 12 to 18, the Chapel serial implementation is on average $7\%$ slower than its C counterpart.
Fig. 4 shows the average execution time spent by Chapel for solving the N-Queens compared to its C-OpenMP counterpart. The results also consider all task layer implementations of Chapel and use the best parameters found, summarized in Table I. The version running over the qthreads task layer is comparable to C-OpenMP even for the smallest sizes (10 to 13). For sizes ranging from 14 to 18, the version over qthreads is on average $8\%$ slower than the search in C-OpenMP.
Both the massive threads and Pthreads task layer implementations contrast with qthreads. For these two task layers, the overhead of managing threads weighs negatively when enumerating small solution spaces, and they perform poorly for sizes ranging from 10 to 13. The massive threads version is from $29 \times (N = 10)$ to $2.52 \times (N = 13)$ slower than its C-OpenMP counterpart. In turn, the implementation over Pthreads is from $16 \times (N = 10)$ to $1.8 \times (N = 13)$ slower than its counterpart written in C-OpenMP. As the size of the solution space grows, both the Pthreads and massive threads versions stand out. For sizes ranging from 15 to 19, both implementations are on average $5\%$ faster than C-OpenMP and $13\%$ faster than the version over qthreads.
V. A MULTI-LOCALE PERFORMANCE EVALUATION OF CHAPEL
In this section, the incrementally conceived distributed algorithm presented in Section III-C is evaluated. The primary goal of this section is to show that it is possible to use a high-productivity language for programming distributed tree search algorithms and achieve metrics similar to MPI+X.
A. Protocol
The following applications were programmed for enumerating all valid and complete configurations of the N-Queens problem.
- **Chapel**: implementation of the multi-locale backtracking search algorithm described in Algorithm 2, written in Chapel.
- **MPI+X**: single program, multiple data (SPMD) counterpart of the program introduced above, written in C. In this case, MPI is applied for communication, and $X$ means the use of OpenMP for exploiting all CPU cores/threads a node has.
Both applications implement the data structures and search procedure detailed in Section III-D. In this evaluation, it is investigated how the applications scale according to the number of locales. Furthermore, the influence of the PGAS data structure distribution on the application execution time is also studied. Moreover, the impact of the distributed load balancing strategies on the overall performance of the application is also investigated. Finally, all metrics collected for the implementation in Chapel are compared to the ones achieved by its MPI+X counterpart.
B. Parameter Settings
Problems of size $N$ ranging from 15 to 20 are considered. The experiments take from a few seconds to several hours of parallel processing. The number of locales ranges from 1 to 32, and the application is the same whether it runs on one or several compute nodes. The number of locales is passed to the application using Chapel's built-in command-line parameter `-nl l` (`-np l` for MPI), where $l$ is the number of locales on which the application is executed.
All compute nodes are identical and run Debian 4.9.130-2, 64-bit. They are equipped with two Intel Xeon X5670 @ 2.93 GHz (a total of 12 cores/24 threads) and 96 GB RAM. Thus, up to 384 cores/768 threads are used in the experiments. All locales are interconnected through an InfiniBand network: Mellanox Technologies MT26428 (ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE).
The Chapel implementation was programmed in the current version of the language (1.18), using the default task layer (qthreads). Chapel's multi-locale code runs on top of GASNet, and several environment variables must be set to describe the system on which the multi-locale code is supposed to run. Concerning the MPI+X implementation, OpenRTE 2.0.2 along with g++ 6.3.0 and OpenMP 4.5 were used for compilation and execution.
Table II summarizes the runtime configurations for multi-locale execution. The InfiniBand GASNet implementation is used for communication (CHPL_COMM_SUBSTRATE), along with MPI, which is responsible for getting the executables running on the different locales (GASNET_IBV_SPAWNER).
Chapel provides several standard distributions to map data structures onto locales. Different tests were carried out to identify the best option in the context of this work. The one chosen was the one-dimensional BlockDist, which maps contiguous blocks of elements across locales. For instance, if $l = 3$ and $|A_{d}| = 8$, elements $0, \ldots, 2$ are on locale $l_0$, $3, \ldots, 5$ on locale $l_1$, and $6, 7$ on locale $l_2$. In the scope of the present research, choosing a different standard distribution does not lead to performance improvements.
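A minimal sketch of such a block distribution is shown below; it is not the paper's code and uses the Block distribution syntax of the Chapel version targeted by the paper (1.18).

```chapel
// Minimal sketch: a 1-D array distributed across locales with BlockDist.
// With 3 locales and n = 8, elements 0..2 live on locale 0, 3..5 on
// locale 1, and 6..7 on locale 2, matching the example in the text.
use BlockDist;

config const n = 8;
const Space = {0..#n};
const D = Space dmapped Block(boundingBox=Space);  // block-distributed domain
var A: [D] int;

// Each iteration of a forall over A runs on the locale that owns the element.
forall a in A do
  a = here.id;     // record the owning locale's id
writeln(A);        // e.g. 0 0 0 1 1 1 2 2 when run with -nl 3
```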
Experiments were carried out to choose a suitable cutoff depth (Algorithm 2, line 2). This parameter directly influences the size of $A_{d}$ and therefore the time spent distributing the active set across locales. As shown in Fig. 7, the fastest data-structure distribution occurs for $cutoff = 3$. However, such a cutoff value limits parallelism, resulting in a slow distributed search. In contrast, when the cutoff is set to 6, the distribution of $A_{d}$ becomes $10 \times$ slower than the search procedure itself. This behavior is due to the combinatorial nature of N-Queens: a cutoff depth twice as deep results in an active set $725 \times$ bigger. When choosing $cutoff = 5$, the search takes the same time as for $cutoff = 4$; despite that, the distribution of $A_{d}$ is on average $9 \times$ slower for $cutoff = 5$. Thus, the cutoff depth chosen is 4. Preliminary experiments also show that $cutoff = 4$ is the best value for the MPI+X implementation.
Chapel also provides two distributed load-balancing iterators, guided and dynamic, which are similar to OpenMP's schedules of the same name. Experiments were carried out to identify the best chunk size for both load-balancing strategies; they perform best when using the default chunk size.
C. Results
First of all, the benefits of using distributed load balancing are not observed for the smallest solution space, i.e., for the problem of size $N = 15$. In such a situation, the static search performs slightly better because there is no communication among locales during the search. As shown in Fig. 8, the overhead of data structure initialization and distribution becomes less detrimental as the solution space grows, and the benefits of using distributed load balancing can be observed.
For sizes bigger than 15, using the dynamic iterator is from $1.17 \times$ to $1.51 \times$ faster than using no load balancing (static version). Moreover, the guided iterator does not seem to be a suitable load-balancing strategy in the scope of this work: it shows benefits compared to the static version only for sizes ranging from 18 to 20. For these problem sizes, using the guided iterator makes the search up to $1.21 \times$ faster than its static counterpart. In turn, using the dynamic distributed iterator results in a search from $1.21 \times$ to $1.25 \times$ faster than using the guided one.
Fig. 9 shows how the distributed searches in Chapel and MPI+X scale with the number of locales. The worst scalability is observed for the smallest size ($N = 15$). In such a situation, the initialization and distribution of $A_{d}$ account for almost the whole execution time (see Fig. 8). For problem sizes ranging from 17 to 20, the dynamic version scales up to $20.5 \times$ ($N = 19$), whereas guided and static scale up to $16.8 \times$ and $16.9 \times$, respectively (also $N = 19$). The MPI+X version scales up to $25.4 \times$ ($N = 18$). Therefore, the distributed search in Chapel achieves up to 80% of the scalability observed for its MPI+X counterpart.
It is worth mentioning that the time spent distributing \( A_d \) does not grow linearly with the number of locales, as shown in Fig. 10. The time required to distribute \( A_d \) grows up to size \( N = 16 \), then becomes almost constant. This behavior comes from the fact that the size of \( A_d \) is the same whether one or more locales are used. Thus, as the number of locales grows, the number of messages sent grows as well, but their size decreases. Moreover, the \( A_d \) distribution is performed in parallel (Algorithm 2, line 9), and the InfiniBand GASNet implementation supports one-sided communication.
In terms of wall-clock time, Chapel is equivalent to MPI+OpenMP when running on one locale. For the smallest solution space (\( N = 15 \)), Chapel stands out: it is up to 25% faster than MPI+X. In such a situation, \( A_d \) is not distributed, and the program behaves like a single-locale, multicore one. Moreover, MPI implements the SPMD programming model, so MPI is started and its functions are called even for one locale. Additionally, it is worth mentioning that Chapel is a compiled language and that it is possible to program in Chapel both a search strategy and data structures equivalent to the ones present in the counterpart written in C. In contrast, for multiple locales and bigger problem sizes, the Chapel distributed search is on average 16% slower than its MPI+X counterpart.
VI. DISCUSSION
In this work, all aspects of the search process were programmed in Chapel, even though C code can be incorporated into a Chapel program. It was possible to hand-optimize the search kernel in a way similar to C. Both codes are equivalent in terms of types, data structures, and code size, which resulted in single-threaded performance competitive with C. This fact is essential in the context of this work; otherwise, using the parallel features of Chapel would result in low performance.
Programming a multicore search in Chapel involves almost the same effort as using C-OpenMP. Both provide built-in load-balancing features and variable reductions. However, Chapel presents some advantages: it provides more load-balancing strategies, and it is possible to reduce all metrics at once. Moreover, there are several task-layer implementations, which may be advantageous for some users. Concerning performance, the multicore Chapel implementation using the default task layer is competitive with C-OpenMP even when solving the smallest problems.
Thanks to Chapel's global view of the control flow and data structures, the main difference between the multi- and single-locale versions lies mainly in the use of the PGAS data structures and distributed iterators for load balancing. There is no need to explicitly deal with communication, metrics reduction, or distributed load balancing. Furthermore, the compiler generates code for exploiting all CPU cores a locale has. Unlike the classic MPI+X approach, there is no need for an additional library to exploit each level of parallelism.
Concerning program size, Chapel's multi-locale implementation is only 8 lines longer than its single-locale counterpart, which results in a code 33% bigger. Consequently, the two communication behaviors presented in Fig. 11 are achieved by the same program with different parameters. In contrast, it is necessary to add 24 lines to the backtracking written in C-OpenMP to use MPI, which almost doubles the program size and also imposes the SPMD programming model. Therefore, Chapel presents an interesting trade-off between programmability and performance.
The most significant limitations found concern neither programmability nor performance. Instead, they are related to technical issues. For instance, it took much more time to configure the GASNet library for running on a cluster than to program the multi-locale backtracking itself. In our case, a modification in the GASNet source code was necessary to run the Chapel distributed search on an MXM network with a non-default partition key. This kind of problem could keep a less enthusiastic user away from Chapel. The bright side is that it was not a Chapel-only effort, as other PGAS languages and libraries, such as UPC, Fortran, and SHMEM, use GASNet as their communication layer.
VII. CONCLUSION
This work has investigated the use of the Chapel high-productivity language for the design and implementation of all aspects involved in the conception of parallel tree search algorithms. The research covered everything from the instruction-level parallelism used to improve the single-threaded search to distributed and multi-level parallelism. According to the results, Chapel is a suitable language for programming such complex and compute-intensive applications. It is possible to hand-optimize the data structures involved in the search process in a way equivalent to C. Moreover, Chapel's multicore features are similar to OpenMP's. Additionally, programmers familiar with shared-memory programming can incrementally conceive a multi-level and distributed tree search.
Chapel presented an interesting trade-off between performance and programmability, despite the high level of its features for distributed programming. One could argue that it is possible to program an MPI+X version faster than the one used; however, the same is true for Chapel. For instance, the code for exploiting all CPU cores of a locale could be programmed by hand, as well as the communication and load balancing among locales. The latter does not seem necessary, though, as the use of high-productivity features resulted in performance competitive with MPI+OpenMP.
It is worth pointing out that the parallel optimization community already possesses legacy code mainly written in C/C++. Therefore, programmers may be resistant to learning another language and translating programs to Chapel [7]. Chapel's ability to include C code can be a partial solution to this situation: one could use C code along with Chapel's high-productivity features for distributed programming. Finally, graphics processing units are crucial for solving big and challenging combinatorial optimization problems [5]. The adoption of Chapel by the parallel optimization community, besides performance and productivity, may also depend on its support for GPUs.
Acknowledgments
The experiments presented in this paper were carried out on the Grid'5000 testbed [19], hosted by INRIA and including several other organizations. We thank Bradford Chamberlain and Elliot Ronaghan (Cray Inc.) and Paul Hargrove (Berkeley Lab) for helping us run GASNet on Grid'5000. Moreover, we also thank Paul Hargrove for the modifications in the GASNet InfiniBand implementation necessary to run GASNet on Grid'5000 MXM InfiniBand networks.
References
² http://www.grid5000.fr
Visualizing multivariate data using lattice and direct labels
http://directlabels.r-forge.r-project.org
Toby Dylan Hocking
toby.hocking AT inria.fr
15 October 2009
Outline
The lattice system
Adding direct labels using the latticedl package
Brief history of lattice
- Bill Cleveland, Rick Becker, Bell Labs, 1990s: trellis graphics system for S: http://cm.bell-labs.com/cm/ms/departments/sia/project/trellis/
- Deepayan Sarkar, 2000s: the lattice package for R.
Installing the required packages
- I used R 2.9.2 for the following examples.
- lattice is preinstalled with R.
- library(lattice)
- install.packages(c("latticeExtra","latticedl"))
- library(latticeExtra)
- library(latticedl)
Lattice allows easy visualization of many variables
```r
> options(width = 55)
> library(lattice)
> dotplot(variety ~ yield | site, data = barley,
+ groups = year, auto.key = list(space = "right"),
+ layout = c(1, 6), xlab = "Barley Yield (bushels/acre)")
```

Aspect ratio in scatterplots is important.
```r
> xyplot(sunspot.year ~ 1700:1988, xlab = "Year",
+ type = "l", scales = list(x = list(alternating = 2)),
+ main = "Yearly Sunspots")
```

Lattice also automatically calculates aspect ratio for optimal decoding
```r
> xyplot(sunspot.year ~ 1700:1988, xlab = "Year",
+ type = "l", scales = list(x = list(alternating = 2)),
+ main = "Yearly Sunspots", aspect = "xy")
```

Load a data set
```r
> data(Chem97, package = "mlmRev")
> head(Chem97)
lea school student score gender age gcsescore gcsecnt
1 1 1 1 4 F 3 6.625 0.3393
2 1 1 2 10 F -3 7.625 1.3393
3 1 1 3 10 F -4 7.250 0.9643
4 1 1 4 10 F -2 7.500 1.2143
5 1 1 5 8 F -1 6.444 0.1583
6 1 1 6 10 F 4 7.750 1.4643
```
Simple histogram
```r
> histogram(~gcsescore, Chem97)
```
Histograms conditional on a categorical variable
```r
> histogram(~gcsescore | factor(score),
+ Chem97)
```
Box and whisker plots
```r
> bwplot(gcsescore ~ gender | factor(score),
+ Chem97, layout = c(6, 1))
```
Conditioned plots of kernel density estimates
```r
> densityplot(~gcsescore | factor(score),
+ Chem97)
```
Hide the actual points with the `plot.points` argument
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE)
```
Conditioned and grouped density plots
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE, groups = gender)
```
Add a legend with the auto.key argument
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE, groups = gender,
+ auto.key = list())
```
[Conditioned density plots of gcsescore by score, grouped by gender (legend entries: M, F)]
Legend layout with the columns argument
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE, groups = gender,
+ auto.key = list(columns = 2))
```
Legend positioning with the space argument
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE, groups = gender,
+ auto.key = list(columns = 2, space = "bottom"))
```
Show all default settings
> show.settings()
Show settings good for printout
```r
> show.settings(standard.theme(color = FALSE))
```
- `superpose.symbol`
- `superpose.line`
- `strip.background`
- `strip.shingle`
- `dot.[symbol, line]`
- `box.[dot, rectangle, umbrella]`
- `add.[line, text]`
- `reference.line`
- `plot.[symbol, line]`
- `plot.shingle[plot.polygon]`
- `histogram[plot.polygon]`
- `barchart[plot.polygon]`
- `superpose.polygon`
- `regions`
Change the settings
```r
> br <- simpleTheme(col = c("black", "red"))
> show.settings(br)
```
- `superpose.symbol`
- `superpose.line`
- `strip.background`
- `strip.shingle`
- `dot.[symbol, line]`
- `box.[dot, rectangle, umbrella]`
- `add.[line, text]`
- `reference.line`
- `plot.[symbol, line]`
- `plot.shingle[plot.polygon]`
- `histogram[plot.polygon]`
- `barchart[plot.polygon]`
- `superpose.polygon`
- `regions`
Change group colors with `par.settings`
```r
> densityplot(~gcsescore | factor(score),
+ Chem97, plot.points = FALSE, groups = gender,
+ auto.key = list(columns = 2, space = "bottom"),
+ par.settings = br)
```
Load a tabular data set
```r
> print(VADeaths)
<table>
<thead>
<tr>
<th></th>
<th>Rural Male</th>
<th>Rural Female</th>
<th>Urban Male</th>
<th>Urban Female</th>
</tr>
</thead>
<tbody>
<tr>
<td>50-54</td>
<td>11.7</td>
<td>8.7</td>
<td>15.4</td>
<td>8.4</td>
</tr>
<tr>
<td>55-59</td>
<td>18.1</td>
<td>11.7</td>
<td>24.3</td>
<td>13.6</td>
</tr>
<tr>
<td>60-64</td>
<td>26.9</td>
<td>20.3</td>
<td>37.0</td>
<td>19.3</td>
</tr>
<tr>
<td>65-69</td>
<td>41.0</td>
<td>30.9</td>
<td>54.6</td>
<td>35.1</td>
</tr>
<tr>
<td>70-74</td>
<td>66.0</td>
<td>54.3</td>
<td>71.1</td>
<td>50.0</td>
</tr>
</tbody>
</table>
```
> vad <- as.data.frame.table(VADeaths)
> names(vad) <- c("age", "demographic", "deaths")
> head(vad)
<table>
<thead>
<tr>
<th>age</th>
<th>demographic</th>
<th>deaths</th>
</tr>
</thead>
<tbody>
<tr>
<td>50-54</td>
<td>Rural Male</td>
<td>11.7</td>
</tr>
<tr>
<td>55-59</td>
<td>Rural Male</td>
<td>18.1</td>
</tr>
<tr>
<td>60-64</td>
<td>Rural Male</td>
<td>26.9</td>
</tr>
<tr>
<td>65-69</td>
<td>Rural Male</td>
<td>41.0</td>
</tr>
<tr>
<td>70-74</td>
<td>Rural Male</td>
<td>66.0</td>
</tr>
<tr>
<td>50-54</td>
<td>Rural Female</td>
<td>8.7</td>
</tr>
</tbody>
</table>
Grouped dotplots work well for these data
```r
> dotplot(age ~ deaths, vad, groups = demographic,
+ type = "o")
```
Plots can be saved as R objects
```r
> dots <- dotplot(age ~ deaths, vad, groups = demographic,
+ type = "o")
> dots
```
Saved plots can be updated later
```r
> dots2 <- update(dots, type = "l", xlim = c(5, 80))
> dots2
```

Add a confusing legend ... how can we label more intuitively?
```r
> update(dots2, auto.key = list(points = FALSE,
+ lines = TRUE))
```
![Chart showing the number of deaths for different age and gender groups. The chart includes lines for Rural Male, Rural Female, Urban Male, and Urban Female. The x-axis represents deaths, and the y-axis represents age groups from 50-54 to 70-74.]
Load some earthquake measurements
```r
> data(Earthquake, package = "nlme")
> head(Earthquake)
Quake Richter distance soil accel
132 20 5 7.5 1 0.264
133 20 5 8.8 1 0.263
134 20 5 8.9 1 0.230
135 20 5 9.4 1 0.147
136 20 5 9.7 1 0.286
137 20 5 9.7 1 0.157
```
Scatterplot with xyplot
> xyplot(accel ~ distance, Earthquake)
Log scales with scales argument
```r
> xyplot(accel ~ distance, Earthquake,
+ scales = list(log = TRUE))
```
Type "p" is the default
```r
> xyplot(accel ~ distance, Earthquake,
+ scales = list(log = TRUE), type = c("p"))
```
Type "g" adds a grid
```r
> xyplot(accel ~ distance, Earthquake,
+ scales = list(log = TRUE), type = c("p",
+ "g"))
```
Type "smooth" adds a smooth line
```r
> xyplot(accel ~ distance, Earthquake,
+ scales = list(log = TRUE), type = c("p",
+ "g", "smooth"))
```
Add some labels
```r
> xyplot(accel ~ distance, Earthquake,
+ scales = list(log = TRUE), type = c("p",
+ "g", "smooth"), sub = "(log scale)",
+ xlab = "Distance from epicenter (km)",
+ ylab = "Maximum horizontal acceleration (g)",
+ main = "Larger quakes are felt closer to the epicenter")
```
Volcano elevation data in matrix form
> dim(volcano)
[1] 87 61
> print(volcano[1:5, 1:5])
[1,] 100 100 101 101 101
[2,] 101 101 102 102 102
[3,] 102 102 103 103 103
[4,] 103 103 104 104 104
[5,] 104 104 105 105 105
Plot volcano elevations in a matrix using color
> levelplot(volcano)
Use a different color scale
```r
> my.colors <- sapply(0:100, function(l) hcl(l = l))
> levelplot(volcano, col.regions = my.colors)
```
Use 3d wireframe plots
```r
> wireframe(volcano, drape = TRUE, col.regions = my.colors)
```
Combine plots using latticeExtra
```r
> library(latticeExtra)
> both <- c(wireframe(volcano, drape = TRUE),
+ levelplot(volcano))
> both
```
[Combined wireframe and levelplot of the volcano data sharing a common color key (elevations roughly 100 to 200)]
Globally change the plot parameters
> `trellis.par.set(regions = list(col = my.colors))`
> `both`
Longitudinal data
```r
> data(BodyWeight, package = "nlme")
> head(BodyWeight)
weight Time Rat Diet
1 240 1 1 1
2 250 8 1 1
3 255 15 1 1
4 260 22 1 1
5 262 29 1 1
6 258 36 1 1
```
Conditional scatterplots reveal difference between treatments
```r
> xyplot(weight ~ Time | Diet, BodyWeight,
+ groups = Rat, type = "l", layout = c(3,
+ 1))
```
Legends with more than a few items are very confusing
```r
> xyplot(weight ~ Time | Diet, BodyWeight,
+ groups = Rat, type = "l", layout = c(3,
+ 1), auto.key = list(space = "right",
+ points = FALSE, lines = TRUE))
```
The lattice system
Adding direct labels using the latticedl package
Why use direct labels instead of legends?
▶ Edward Tufte, professor emeritus of statistics at Yale.
▶ One of his points: legends make it harder to decode a statistical graphic.
▶ Use direct labels whenever possible.
How to plot direct labels in R?
- **Lattice + latticedl**: `direct.label(xyplot(y~x,data,groups=z),method=f)`
- Positions of direct labels can be specified as a function of the data:
```r
f <- function(d, ...){
  ## d is a data frame with columns x, y, groups giving the plotted points.
  ## Analyze the points and return one row per group with the label positions,
  ## where a, b, c stand for the computed x, y, and groups columns:
  return(data.frame(x = a, y = b, groups = c))
}
```
<table>
<thead>
<tr>
<th>groups</th>
<th>x</th>
<th>y</th>
<th>hjust</th>
<th>vjust</th>
<th>rot</th>
</tr>
</thead>
<tbody>
<tr>
<td>Rural Male</td>
<td>66.0</td>
<td>5</td>
<td>0</td>
<td>0.5</td>
<td>30</td>
</tr>
<tr>
<td>Rural Female</td>
<td>54.3</td>
<td>5</td>
<td>0</td>
<td>0.5</td>
<td>30</td>
</tr>
<tr>
<td>Urban Male</td>
<td>71.1</td>
<td>5</td>
<td>0</td>
<td>0.5</td>
<td>30</td>
</tr>
<tr>
<td>Urban Female</td>
<td>50.0</td>
<td>5</td>
<td>0</td>
<td>0.5</td>
<td>30</td>
</tr>
</tbody>
</table>
- `latticedl` does the labeling for you, keeping track of the correct colors.
- Common plot types have default direct labeling methods.
Easy fix for confusing legend: direct labels
```r
> library(lattice)
> long <- xyplot(weight ~ Time | Diet, BodyWeight,
+ groups = Rat, type = "l", layout = c(3,
+ 1))
> direct.label(long)
```
Even works in black and white
```r
> longbw <- update(long, par.settings = standard.theme(color = FALSE))
> direct.label(longbw)
```
Change label positions with the method argument
> direct.label(long, method = last.points)
Make your own positioning function using `dl.indep`
```r
> direct.label(long, method = dl.indep(d[which.max(d$x), ]))
```

You can change text parameters (same as grid::grid.text)
> direct.label(dots2, method = list("last.points", rot = 30))
Load some data on car fuel efficiency
```r
> data(mpg, package = "ggplot2")
> head(mpg)
<table>
<thead>
<tr>
<th>manufacturer</th>
<th>model</th>
<th>displ</th>
<th>year</th>
<th>cyl</th>
<th>trans</th>
<th>drv</th>
<th>cty</th>
</tr>
</thead>
<tbody>
<tr>
<td>audi</td>
<td>a4</td>
<td>1.8</td>
<td>1999</td>
<td>4</td>
<td>auto(l5)</td>
<td>f</td>
<td>18</td>
</tr>
<tr>
<td>audi</td>
<td>a4</td>
<td>1.8</td>
<td>1999</td>
<td>4</td>
<td>manual(m5)</td>
<td>f</td>
<td>21</td>
</tr>
<tr>
<td>audi</td>
<td>a4</td>
<td>2.0</td>
<td>2008</td>
<td>4</td>
<td>manual(m6)</td>
<td>f</td>
<td>20</td>
</tr>
<tr>
<td>audi</td>
<td>a4</td>
<td>2.0</td>
<td>2008</td>
<td>4</td>
<td>auto(av)</td>
<td>f</td>
<td>21</td>
</tr>
<tr>
<td>audi</td>
<td>a4</td>
<td>2.8</td>
<td>1999</td>
<td>6</td>
<td>auto(l5)</td>
<td>f</td>
<td>16</td>
</tr>
<tr>
<td>audi</td>
<td>a4</td>
<td>2.8</td>
<td>1999</td>
<td>6</td>
<td>manual(m5)</td>
<td>f</td>
<td>18</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>hwy</th>
<th>fl</th>
<th>class</th>
</tr>
</thead>
<tbody>
<tr>
<td>29</td>
<td>p</td>
<td>compact</td>
</tr>
<tr>
<td>29</td>
<td>p</td>
<td>compact</td>
</tr>
<tr>
<td>31</td>
<td>p</td>
<td>compact</td>
</tr>
<tr>
<td>30</td>
<td>p</td>
<td>compact</td>
</tr>
<tr>
<td>26</td>
<td>p</td>
<td>compact</td>
</tr>
<tr>
<td>26</td>
<td>p</td>
<td>compact</td>
</tr>
</tbody>
</table>
```
Plot city versus highway fuel efficiency
> xyplot(cty ~ hwy, mpg, aspect = 1)
Add a reference line $x=y$
```r
> panel.xyref <- function(...) {
+ panel.xyplot(...)
+ panel.abline(0, 1)
+ }
> xyplot(cty ~ hwy, mpg, aspect = 1, panel = panel.xyref)
```
[Scatterplot of cty versus hwy with the x = y reference line]
Jitter the data to see all the points
```r
> xyplot(jitter(cty) ~ jitter(hwy), mpg,
+ aspect = 1, panel = panel.xyref)
```
Group data by number of cylinders in the engine
```r
> direct.label(xyplot(jitter(cty) ~ jitter(hwy),
+ mpg, aspect = 1, panel = panel.xyref,
+ groups = factor(cyl)))
```
Group data by car class
```r
> direct.label(xyplot(jitter(cty) ~ jitter(hwy),
+ mpg, aspect = 1, panel = panel.xyref,
+ groups = class))
```
Compare direct labeling methods
```r
> compare.methods(c("empty.grid", "empty.grid.2"),
+ xyplot, mpg, jitter(cty) ~ jitter(hwy),
+ class, aspect = 1, panel = panel.xyref,
+ horiz = TRUE)
```
First load the libraries in R
▶ library(lattice)
▶ library(latticeExtra)
▶ library(latticedl)
Then you can look at the interactive help pages
- Overview: ?Lattice
- Customizing plots: ?xyplot
- Included panel functions: ?panel.functions, ?llines
- Multiple plots per page: ?plot.trellis, ?c.trellis
- Direct labeling: ?direct.label
R code from the slides available on the web:
http://directlabels.r-forge.r-project.org
Email me directly: toby.hocking AT inria.fr
Hit and Peak Finding Algorithms
This note is about n-d array processing algorithms implemented in ImgAlgos.PyAlgos. Algorithms can be called from Python, but the low-level implementation is done in C++ with a boost/python wrapper. All examples are shown for the Python-level interface.
Content
- Content
- Common features of algorithms
- n-d arrays
- Windows
- Mask
- Make object and set parameters
- Define ROI using windows and/or mask
- Hit finders
- Number of pixels above threshold
- number_of_pix_above_thr
- Total intensity above threshold
- intensity_of_pix_above_thr
- Peak finders
- Peak selection parameters
- Two threshold "Droplet finder"
- peak_finder_v1
- peak_finder_v4
- Flood filling algorithm
- peak_finder_v2
- Local maximums search algorithm
- peak_finder_v3
- Demonstration for local maximum map
- Evaluation of the background level, rms, and S/N ratio
- Matrices of pixels for r0=3 and 4 and different dr values
- Matrices of pixels for r0=5 and 6 and different dr values
- Matrix of pixels for r0=7
- Test of peak finders
- Photon counting
- References
Common features of algorithms
n-d arrays
LCLS detector data come from the DAQ as n-d arrays (ndarray in C++ or numpy.array in Python). In the simplest case, camera data is an image represented by a 2-d array. For composite detectors like CSPAD, CSPAD2X2, EPIX, PNCCD, etc., data comes from a set of sensors as 3-d or 4-d arrays. If the relative sensor positions are known, the sensors can be composed into a 2-d image. But this image contains a significant portion of "fake" empty pixels, which may be up to ~20-25% in the case of CSPAD. The most efficient data processing algorithms should therefore be able to work with n-d arrays.
Windows
In some experiments not all sensors contain useful data. It might be more efficient to select a Region of Interest (ROI) on the sensors where data need to be processed. To support this feature, a tuple (or list) of windows is passed as a constructor parameter. Each window is a tuple of 5 parameters \((\text{segnum}, \text{rowmin}, \text{rowmax}, \text{colmin}, \text{colmax})\), where \(\text{segnum}\) is a sensor index in the n-d array and the other parameters constrain the window's rows and columns in the sensor's 2-d matrix. Several windows can be defined for the same sensor using the same \(\text{segnum}\). For 2-d arrays the \(\text{segnum}\) parameter is not used, but it still needs to be present in the window tuple as any integer number. To increase efficiency, only pixels in windows are processed. If \(\text{windows} = \text{None}\), all sensors will be processed.
The array of windows can be converted into a 3-d or 2-d mask array using the method `pyimgalgos.GlobalUtils.mask_from_windows`.
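For illustration, here is a minimal NumPy sketch of such a windows-to-mask conversion. This is not the `pyimgalgos` routine; the function name, argument order, and the assumption that the `rowmax`/`colmax` bounds are exclusive are all choices made for this sketch.

```python
import numpy as np

def mask_from_windows_sketch(shape, windows):
    """Build a good-pixel (1/0) mask of the given n-d shape from windows of
    the form (segnum, rowmin, rowmax, colmin, colmax)."""
    mask = np.zeros(shape, dtype=np.uint8)
    if windows is None:                      # no windows: use all pixels
        mask[:] = 1
        return mask
    for seg, rmin, rmax, cmin, cmax in windows:
        if mask.ndim == 2:                   # 2-d data: segment index is ignored
            mask[rmin:rmax, cmin:cmax] = 1
        else:                                # 3-d data: first index is the segment
            mask[seg, rmin:rmax, cmin:cmax] = 1
    return mask

winds = ((0, 0, 185, 0, 388),
         (1, 20, 160, 30, 300),
         (7, 0, 185, 0, 388))
mask = mask_from_windows_sketch((32, 185, 388), winds)   # CSPAD-like 3-d shape
```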
Mask
Alternatively, the ROI can be defined by a mask of good/bad (1/0) pixels. For a 2-d image the mask can easily be defined in the user's code. In the case of 3-d arrays, the Mask Editor helps to produce the ROI mask. The entire procedure includes
- conversion of n-d array to 2-d image using geometry,
- production of ROI 2-d mask with Mask Editor,
- conversion of the 2-d mask to the mask n-d array using geometry.
All steps of this procedure can be completed in the Calibration Management Tool under the ROI tab.
In addition, the mask accounts for bad pixels, which should be discarded in processing. The total mask may be a product of the ROI mask and other masks representing good/bad pixels.
**Make object and set parameters**
Any algorithm object can be created as shown below.
```python
import numpy as np
from ImgAlgos.PyAlgos import PyAlgos
# create object:
alg = PyAlgos(windows=winds, mask=mask, pbits=0)
```
**Define ROI using windows and/or mask**
The Region Of Interest (ROI) is defined by a set of rectangular windows on segments and a mask, as shown in the example below.
```python
# List of windows
winds = None # entire size of all segments will be used for peak finding
winds = (( 0, 0, 185, 0, 388),
( 1, 20, 160, 30,300),
( 7, 0, 185, 0, 388))
# Mask
mask = None # (default) all pixels in windows will be used for peak finding
mask = det.mask() # see class Detector.PyDetector
mask = np.loadtxt(fname_mask)
# mask.shape must be the same as the shape of the data n-d array
```
**Hit finders**
Hit finders return simple values for making event-selection decisions. Two algorithms are implemented in ImgAlgos.PyAlgos. They count the number of pixels and the intensity above threshold in the Region Of Interest (ROI) defined by the windows and mask parameters in the object constructor.
Both hit finders receive an input n-d array `data` and a threshold `thr` parameter and return a single value in accordance with the method name.
**Number of pixels above threshold**
`number_of_pix_above_thr`
```python
npix = alg.number_of_pix_above_thr(data, thr=10)
```
**Total intensity above threshold**
intensity_of_pix_above_thr
```python
intensity = alg.intensity_of_pix_above_thr(data, thr=12)
```
Peak finders
Peak finders work on a calibrated, background-subtracted n-d array of data, in the region of interest specified by the list of windows, using only good pixels from the mask n-d array. All algorithms implemented here have three major stages:
1. find a list of seed peak candidates
2. process peak candidates and evaluate their parameters
3. apply selection criteria to the peak candidates and return the list of peaks with their parameters
The list of peaks contains 17 (float for uniformity) parameters per peak:
- seg - segment index beginning from 0, example for CSPAD this index should be in the range (0,32)
- row - index of row beginning from 0
- col - index of column beginning from 0
- npix - number of pixels accounted in the peak
- amp_max - pixel with maximal intensity
- amp_total - total intensity of all pixels accounted in the peak
- row_cgrav - row coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak using their intensities as weights
- col_cgrav - column coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak using their intensities as weights
- row_sigma - row sigma evaluated in the "center of gravity" algorithm
- col_sigma - column sigma evaluated in the "center of gravity" algorithm
- row_min - minimal row of the pixel group accounted in the peak
- col_min - minimal column of the pixel group accounted in the peak
- row_max - maximal row of the pixel group accounted in the peak
- col_max - maximal column of the pixel group accounted in the peak
- bkgd - background level estimated as explained in section below
- noise - r.m.s. of the background estimated as explained in section below
- son - signal over noise ratio estimated as explained in section below
There are a couple of classes that help to save/retrieve peak parameter records to/from a text file:
- pyimgalgos.PeakStore
- pyimgalgos.TDFileContainer
Peak selection parameters
Internal peak selection is done at the end of each peak finder, but all peak-selection parameters need to be defined right after the algorithm object is created. These peak-selection parameters are set for all peak finders:
```python
# create object:
alg = PyAlgos(windows=winds, mask=mask)
# set peak-selector parameters:
alg.set_peak_selection_pars(npix_min=5, npix_max=5000, amax_thr=0, atot_thr=0, son_min=10)
```
- npix_min: minimum number of pixels that pass the "low threshold" cut
- npix_max: maximum number of pixels that pass the "low threshold" cut
- amax_thr: pixel value must be greater than this high threshold to start a peak
- atot_thr: to be considered a peak the sum of all pixels in a peak must be greater than this value
- son_min: required signal-over-noise (where noise region is typically evaluated with radius/dr parameters). **set this to zero to disable the signal-over-noise cut.**
All peak finders have a few algorithm-dependent parameters:
- nda - calibrated n-d array of data; pedestals and background should be subtracted and common mode corrected
- thr* - thresholds of different types
- rank - peak rank, as explained in the section below
- r0, dr - internal radius and width of the ring used to evaluate the background and noise rms, as explained in the section below
Two threshold "Droplet finder"
A two-threshold peak-finding algorithm working in a restricted region around the pixel with maximal intensity. Using two thresholds speeds up the algorithm. It is assumed that only pixels with intensity above thr_high can be peak-candidate centers. A candidate is considered a peak if its intensity is maximal in the (square) region of the given radius around it. The low threshold in the same region is used to account for the pixels contributing to the peak.
peak_finder_v1
```python
peaks = alg.peak_finder_v1(nda, thr_low=10, thr_high=150, radius=5, dr=0.05)
```
The radius parameter in this algorithm is used for two purposes:
- it defines the (square) region in which to search for a local maximum with intensity above thr_high and for contributing pixels with intensity above thr_low,
- it is used as the r0 parameter to evaluate the background and noise rms, as explained in the section below.
peak_finder_v4
```python
peaks = alg.peak_finder_v4(nda, thr_low=10, thr_high=150, rank=4, r0=5, dr=0.05)
```
The same algorithm as peak_finder_v1, but the radius parameter is split into two parameters, (unsigned) rank and (float) r0, with the same meaning as in peak_finder_v3.
Flood filling algorithm
Defines peaks for regions of connected pixels above threshold.
peak_finder_v2
```python
peaks = alg.peak_finder_v2(nda, thr=10, r0=5, dr=0.05)
```
Two neighboring pixels are considered connected if they share a common side. Only pixels with intensity above the threshold thr are considered.
Local maximums search algorithm
Defines peaks at local maxima of the specified rank (radius); for example, rank=2 means a 5x5 pixel region around the central pixel.
peak_finder_v3
```python
peaks = alg.peak_finder_v3(nda, rank=2, r0=5, dr=0.05)
```
- makes a map of pixels with local maxima of the requested rank for the data ndarray and mask; a pixel code in the map may have bits 0/1/2/4 standing for not-a-maximum / maximum-in-row / maximum-in-column / maximum-in-rectangular-region of radius=rank,
- for each pixel with locally maximal intensity in the region defined by the rank radius, counts the number of pixels with intensity above zero, the total positive intensity, the center-of-gravity coordinates, and the rms,
- using the parameters r0 (e.g. 5.0) and dr (e.g. 0.05), evaluates the background level, the noise rms, and the S/N for the pixel with maximal intensity.
Demonstration for local maximum map
Test for a 100x100 image with a random normal distribution of intensities.
Example of the map of local maximums found for ranks from 1 to 5:
color coding of pixels:
- blue=0 - not a local maximum
- green=1 - local maximum in row
- yellow=1+2 - local maximum in row and column
- red=1+2+4 - local maximum in rectangular region of radius=rank.
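The bit codes above can be decoded with plain NumPy bit operations. The following is a small sketch; the `locmax_map` array here is a hypothetical example, not output of the library.

```python
import numpy as np

# Hypothetical map of local-maximum codes for a small 2-d segment;
# bit 1 = maximum in row, bit 2 = maximum in column, bit 4 = maximum in the
# rank region, so 7 = 1+2+4 marks a full local maximum (red in the color coding).
locmax_map = np.array([[0, 1, 0],
                       [3, 7, 1],
                       [0, 2, 0]], dtype=np.uint8)

is_row_max    = (locmax_map & 1) > 0
is_col_max    = (locmax_map & 2) > 0
is_region_max = (locmax_map & 4) > 0      # the candidates of interest for peaks

print(np.argwhere(is_region_max))         # -> [[1 1]]
```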
The table below lists, for each rank, the associated 2-d region size, the fraction of pixels recognized as local maximums for that rank, and the time consumed by this algorithm.
<table>
<thead>
<tr>
<th>rank</th>
<th>2-d region</th>
<th>fraction</th>
<th>time, ms</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3x3</td>
<td>0.1062</td>
<td>5.4</td>
</tr>
<tr>
<td>2</td>
<td>5x5</td>
<td>0.0372</td>
<td>5.2</td>
</tr>
<tr>
<td>3</td>
<td>7x7</td>
<td>0.0179</td>
<td>5.1</td>
</tr>
<tr>
<td>4</td>
<td>9x9</td>
<td>0.0104</td>
<td>5.2</td>
</tr>
<tr>
<td>5</td>
<td>11x11</td>
<td>0.0066</td>
<td>5.2</td>
</tr>
</tbody>
</table>
Evaluation of the background level, rms, and S/N ratio
When a peak is found, its background level, noise rms, and signal-over-noise (S/N) ratio can be estimated. All these values are evaluated using pixels surrounding the peak at some distance, and the same algorithm is used for all peak finders. The surrounding pixels are defined by a ring with internal radius $r_0$ and ring width $dr$ (both in pixels). The number of surrounding pixels depends on the $r_0$ and $dr$ parameters, as shown in the matrices below. We use the notation
- + : the central pixel with maximal intensity,
- 1 : pixels counted in the calculation of the average background level and noise rms,
- 0 : pixels not counted.
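A minimal NumPy sketch of this ring selection is shown below, assuming that pixels are counted when their distance from the central pixel lies in [r0, r0 + dr]. This convention is consistent with the pixel counts quoted in the matrices below (12 pixels for r0=5, dr=0.05 and 28 pixels for r0=5, dr=0.5), but the exact boundary handling in the C++ code may differ.

```python
import numpy as np

def ring_mask(r0, dr, size=17):
    """Return a 0/1 matrix marking the pixels whose distance from the central
    pixel lies in [r0, r0 + dr]; these are the '1' pixels used to estimate the
    background level and noise rms around the peak pixel '+'."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    dist = np.hypot(y - c, x - c)
    return ((dist >= r0) & (dist <= r0 + dr)).astype(np.uint8)

print(ring_mask(5, 0.05).sum())   # -> 12
print(ring_mask(5, 0.5).sum())    # -> 28
```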
Matrices of pixels for $r_0=3$ and 4 and different $dr$ values
r0=3, dr=0.1 and r0=4 dr=0.2 examples
Matrices of pixels for r0=5 and 6 and different dr values
[Matrices marking with 1 the ring pixels used for the background and noise estimate around the central pixel +: $r_0=5$, $dr=0.05$ (12 pixels); $r_0=5$, $dr=0.5$ (28 pixels); $r_0=6$, $dr=0.2$ (12 pixels); $r_0=6$, $dr=0.5$ (28 pixels).]
Matrix of pixels for $r_0=7$
Photon counting
Photon conversion in pixel detectors is complicated by photons split between neighboring pixels. In some cases, the energy deposited by a photon is split between two or (sometimes) more pixels. The photon counting algorithm described here is designed to account for this effect and return an unassembled array with the correct number of photons per pixel. The Python API for this algorithm is as follows:
```python
# Import
import psana
# Initialize a detector object
det = psana.Detector('myAreaDetectorName')
# Merges photons split among pixels and returns n-d array with integer number of photons per pixel.
nphotons_nda = det.photons(evt, nda_calib=None, mask=None, adu_per_photon=None)
```
The `det.photons()` function divides the pixel intensities (ADUs) by `adu_per_photon`, resulting in a fractional number of photons for each pixel. This function is a wrapper around the `photons()` method in PyAlgos:
```python
# Import
from ImgAlgos.PyAlgos import photons
# Merges photons split among pixels and returns n-d array with integer number of photons per pixel.
nphotons_nda = photons(fphotons, adu_per_photon=30)
```
**Sphinx doc**
The `photons` method receives a (float) n-d numpy array `fphotons` representing the image intensity in terms of a (float) fractional number of photons, and an associated mask of bad pixels. Both arrays should have the same shape. The two lowest dimensions represent pixel rows and columns in the 2-d pixel matrices. The algorithm works with good pixels defined by the mask array (1/0 = good/bad pixel). The `fphotons` array is split into two arrays of the same shape: an integer array containing the whole number of photons and a float array containing the leftover fractional number of photons. Assuming that photons are only split between two adjacent pixels, we round up adjacent pixels if they sum up to more than 0.9 photons. The algorithm is best explained using an example.
Let's say we measured the following ADUs on our detector. "adu_per_photon" is user-defined, but for this example let's set it to 1:
<table>
<thead>
<tr>
<th>ADUs (adu_per_photon=1):</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0 3.5 0.1 0.2</td>
</tr>
<tr>
<td>0.2 0.4 0.0 1.2</td>
</tr>
<tr>
<td>0.1 4.7 3.4 0.0</td>
</tr>
<tr>
<td>0.5 0.4 0.4 0.1</td>
</tr>
</tbody>
</table>
We expect the converted photon counts to be:
<table>
<thead>
<tr>
<th>Photons:</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 4 0 0</td>
</tr>
<tr>
<td>0 0 0 1</td>
</tr>
<tr>
<td>0 5 3 0</td>
</tr>
<tr>
<td>1 0 0 0</td>
</tr>
</tbody>
</table>
To see how we get from ADUs to Photons, we split the ADUs into whole photons and fractional photons.
<table>
<thead>
<tr>
<th>ADUs</th>
<th>Whole photons</th>
<th>Fractional photons</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0 3.5 0.1 0.2</td>
<td>0 3 0 0</td>
<td>0.0 0.5 0.1 0.2</td>
</tr>
<tr>
<td>0.2 0.4 0.0 1.2</td>
<td>0 0 0 1</td>
<td>0.2 0.4 0.0 0.2</td>
</tr>
<tr>
<td>0.1 4.7 3.4 0.0</td>
<td>0 4 3 0</td>
<td>0.1 0.7 0.4 0.0</td>
</tr>
<tr>
<td>0.5 0.4 0.4 0.1</td>
<td>0 0 0 0</td>
<td>0.5 0.4 0.4 0.1</td>
</tr>
</tbody>
</table>
Assuming that photons are only split between two adjacent pixels, we search for a pixel that has at least 0.5 fractional photons and an adjacent pixel such that the two sum up to more than 0.9 photons. In cases where a pixel has multiple adjacent pixels that satisfy this condition, we take the largest adjacent pixel. If such an adjacent pair of pixels is found, the two values are merged into one pixel: the one with the larger value (see "After merging adjacent pixels" below).
The merged adjacent pixels are then rounded to whole photons. (See "Rounded whole photons" example below).
Fractional photons
0.0 0.5 0.1 0.2
0.2 0.4 0.0 0.2
0.1 0.7 0.4 0.0
0.5 0.4 0.4 0.1
After merging adjacent pixels:
0.0 0.9 0.1 0.2
0.2 0.0 0.0 0.2
0.1 1.1 0.0 0.0
0.9 0.0 0.4 0.1
Rounded whole photons:
0 1 0 0
0 0 0 0
0 1 0 0
1 0 0 0
Photons is then the sum of “Whole photons” and “Rounded whole photons”:
<table>
<thead>
<tr>
<th>Photons</th>
<th>Whole photons</th>
<th>Rounded whole photons</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 4 0 0</td>
<td>0 3 0 0</td>
<td>0 1 0 0</td>
</tr>
<tr>
<td>0 0 0 1</td>
<td>0 0 0 1</td>
<td>0 0 0 0</td>
</tr>
<tr>
<td>0 5 3 0</td>
<td>0 4 3 0</td>
<td>0 1 0 0</td>
</tr>
<tr>
<td>1 0 0 0</td>
<td>0 0 0 0</td>
<td>1 0 0 0</td>
</tr>
</tbody>
</table>
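For illustration, here is a minimal NumPy sketch of this splitting/merging/rounding procedure. It is not the library implementation: the tie-breaking between equal neighbors and the exact comparison against the 0.9 threshold are assumptions chosen so that, for the example above, the sketch produces the same result.

```python
import numpy as np

def merge_split_photons(frac, threshold=0.9):
    """Merge fractional photons split between adjacent pixels: for each pixel
    with >= 0.5 fractional photons, find its largest 4-connected neighbor and,
    if the pair sums to at least `threshold`, move both fractions into the
    pixel holding the larger value."""
    f = frac.copy()
    rows, cols = f.shape
    offsets = ((0, 1), (1, 0), (0, -1), (-1, 0))
    for r in range(rows):
        for c in range(cols):
            if f[r, c] < 0.5:
                continue
            nbrs = [(r + dr, c + dc) for dr, dc in offsets
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            rn, cn = max(nbrs, key=lambda p: f[p])     # largest adjacent pixel
            if f[r, c] + f[rn, cn] >= threshold:       # ">=" so 0.5 + 0.4 merges
                if f[rn, cn] > f[r, c]:
                    f[rn, cn] += f[r, c]; f[r, c] = 0.0
                else:
                    f[r, c] += f[rn, cn]; f[rn, cn] = 0.0
    return f

adus = np.array([[0.0, 3.5, 0.1, 0.2],
                 [0.2, 0.4, 0.0, 1.2],
                 [0.1, 4.7, 3.4, 0.0],
                 [0.5, 0.4, 0.4, 0.1]])
whole = np.floor(adus)                         # whole photons
merged = merge_split_photons(adus - whole)     # merge the split fractions
photons = (whole + np.round(merged)).astype(int)
print(photons)   # reproduces the "Photons" table above
```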
References
- [ImgAlgos.PyAlgos](#) - code documentation
- [psalgos](#) - new peak-finder and other algorithms code documentation
- [Peak Finding](#) - short announcement about peak finders
- [Hit and Peak Finders](#) - examples in Chris' tutorial
- [GUI for tuning peak finding](#) - Chun's page in development
- [Auto-generated documentation](#) - references to code-based documentation for a few other useful packages
- [pyimgalgos.PeakStore](#) - class helping to save peak parameter records in the text file
- [pyimgalgos.TDFileContainer](#) - class helping to retrieve peak parameter records from the text file
- [Test of Peak Finders](#) - example of exploitation of peak finders
- [Test of Peak Finders - V2](#) - example of exploitation of peak finders after revision 1 (uniformization)
- [photons - sphinx doc](#)
- [Peak Finding Module](#) - (deprecated) psana module, its demonstration examples and results
- [Psana Module Catalog](#) - (deprecated) peak finding psana modules
- [Psana Module Examples](#) - (deprecated) peak finding examples in psana modules
Applicability of Agile and Scrum to Product-Service Systems
Tabea Ramírez Hernández ([email protected])
Department of Management Engineering, Technical University of Denmark,
Kongens Lyngby, Denmark
Melanie Kreye
Department of Management Engineering, Technical University of Denmark,
Kongens Lyngby, Denmark
Steven Eppinger
Sloan School of Management, Massachusetts Institute of Technology,
Cambridge, Massachusetts, USA
Abstract
Developing Product-Service Systems (PSS) is uniquely challenging in terms of both the offering and the development process due to the combination of product and service components. This paper investigates the applicability of the agile and scrum methods, which originated in the software industry, to the development of PSS in order to address these challenges in practice. Based on a combination of agile and servitization literature, this paper offers a conceptual framework detailing the applicability of four agile elements (application, management, technical, personnel) and nine scrum elements in three groups (events, artefacts, roles). This research contributes to the servitization literature by extending the knowledge on PSS development and deriving suitable management practices.
Keywords: Agile, Scrum, Product-Service System, Project Management
Introduction
Manufacturers are increasingly seeking to servitize their business through the provision of Product-Service Systems (PSS), compound offerings of products and services. This trend promises the provider high gains, including closer customer contact, stable revenue streams, and higher profit margins (Isaksson et al., 2009). However, far from all manufacturers experimenting with the concept of PSS are able to harvest these benefits. Indeed, the history of servitization shows many examples of PSS development projects that fail during development and never even reach the market. In response, a stream has emerged in the academic servitization literature that discusses in particular the challenges of PSS development.
Core challenges for manufacturing firms in the development of PSS often arise because of the radical nature of the final offering (Baines et al., 2017), the systemic complexity of parallel development of the product and service (Trevisan and Brissaud, 2017), and the difficulty of project execution (Morelli, 2006). Here challenges can arise in the course of defining and testing intangible service elements, as many services are
produced and consumed simultaneously (Lankhorst, 2012). In addition, manufacturers often have to manage the systemic complexity of developing not only the product and the service distinctly, but as a system. Here manufacturers often lack knowledge regarding the diverse interfaces in this systemic integration (Trevisan and Brissaud, 2017). Lastly, uncertainty arising from the unpredictability of the competitors’ actions, the precise customer needs, or other macro-economic changes can impede the development (Kreye, 2017). In short, the development of PSS is often characterized through high uncertainty and complexity (Ramírez Hernández et al., 2018).
While contributions in the servitization literature have investigated the challenges of PSS development, no suitable solution has been identified to date. The PSS development methodologies offered today (Dingsøyr et al., 2012; Vasantha et al., 2012a) are still strongly oriented towards the traditional stage-gate approach (Aurich et al., 2006; Vasantha et al., 2012b; Weber et al., 2004). Academic literature has, however, reflected upon uncertainty management in new product or service development. As such, Rice et al. (2008) proposed the use of more “agile” methods under circumstances of high uncertainty, and more “staged” methods under circumstances of low uncertainty. These uncertainty conditions are navigated through short development cycles: assumptions about uncertain conditions are tested and the resulting learnings are incorporated into the development project to plan the next short iteration. Moreover, Boehm and Turner have investigated the concept of agile further and identified the basis of agile, i.e. when agile works most successfully. They distinguish four elements, Application, Management, Technical and Personnel, and discuss their variance for the optimal application of agile. These four elements of agile provide guidance on where to apply agile.
Further, the concept of agile manifests itself in several methods, of which scrum is one of the most mature and widely applied (Dingsøyr et al., 2012). It is divided into three groups: events, artefacts and roles, with three elements each (Cooper and Sommer, 2016a; Schwaber and Sutherland, 2017). The events include the sprint planning and sprint, the daily scrum, and the review and retrospective meeting. The artefacts contain the product backlog, the sprint backlog, and the increment. The roles comprise the product owner, the scrum master, and the scrum team. These nine elements of scrum provide guidance on how to apply agile.
While the body of knowledge about agile (and its manifestation in scrum) has grown substantially in the field of software, its application outside this realm is still nascent. Specifically, the applicability of agile and scrum in contexts such as PSS development is promising, yet underexplored. Accordingly, we ask the following research question to close this gap:
Which elements of agile and the scrum methodology are applicable to the development of Product-Service Systems?
Based on the analysis of existing servitization and agile literature, we offer a conceptual framework detailing the above-mentioned four elements of agile and nine elements of scrum, in terms of their applicability in PSS development. While describing these elements of agile and scrum is not in itself a new contribution to the literature, assessing their application and adaptation to the PSS development context contributes to theory building in the field of servitization and agile.
**Research Design**
To answer the research question, we conducted an exploratory literature review based on contributions in the field of agile and servitization. The aim of the literature review was
to create a rich understanding of the state-of-the-art literature and to comprehend the applicability of agile and scrum to the PSS development context. The literature review is based on contributions identified through a keyword search in the search databases including Scopus and Web of Science.
The review of the agile literature included search strings derived from the following keywords: “agile” (Boehm and Turner, 2003; Dingsøyr et al., 2012; Moran, 2015), “agile development” (Conforto et al., 2014; Nerur and Balijepally, 2007), “scrum” (Dybå and Dingsøyr, 2008; Schwaber and Sutherland, 2017), “agile service development” (Cocca et al., 2015; Lankhorst, 2012), “agile product development” (Cooper and Sommer, 2018; Karlström and Runeson, 2006). Similarly, the review of the servitization literature was conducted using keywords “Product-Service System” or “PSS” (Beuren et al., 2013; Mont, 2002; Tukker, 2004), “integrated solution” (Storbacka, 2011), “bundled services” (Schmenner, 2009), “servitization” (Baines et al., 2017; Díaz-Garrido et al., 2018), “PSS development” (Aurich et al., 2006; Wallin et al., 2015; Wuest and Wellsandt, 2016), and “new service development” (Papastathopoulou and Hultink, 2012; Santos and Spring, 2013). Based on the initial findings, we refined and combined the keywords further in the course of the literature review.
The literature review revealed the need to differentiate between the application of agile as a concept and its manifestation in a specific method (Boehm and Turner, 2004). Agile as a concept provides guidelines on the general setting under which agile is best applied. Boehm and Turner (2004) summarized a framework which distinguishes four elements as the general basis of agile: Application, Management, Technical and Personnel. The Application of agile details that it unfolds its full potential in volatile conditions through rapid value creation in small teams. The Management relies strongly on intense customer involvement in the project, with qualitative control mechanisms and strong utilization of tacit, interpersonal knowledge. The Technical element details simple designs, which are easily refactorable in short increments with test cycles, as well as prioritized requirements, which are evolving continuously. Lastly, agile relies strongly on Personnel who are 100% dedicated to the project, working co-located and with a culture of empowerment. These four elements of agile constitute the overall applicability of agile to a certain setting and thus form the basis for our discussion in the PSS development context.
The manifestation of agile finds its way into several methods in practice. One of the most applied and researched methods is scrum, which describes an iterative development process with incremental value delivery. Although scrum is often modified to fit the particular situation, for the purpose of the present research we refer to the original form derived from software development (Schwaber and Sutherland, 2017). It distinguishes events, artefacts and roles, with three elements each (Cooper and Sommer, 2016a; Schwaber and Sutherland, 2017). The events include the sprint planning and sprint, the daily scrum, and the review and retrospective meeting. The sprint planning is an event in which the work packages for the upcoming development are planned. The sprint represents the subsequent intense development period, usually of 1-4 weeks' duration, in which the previously defined work packages are created. The daily scrum represents a stand-up meeting on each day of the sprint, in which each team member reflects on the progress of the developments, as well as potential problems. After the sprint a review and retrospective meeting is held, in which the team reflects upon the developed work, as well as the process through which it was developed.
The artefacts are the product backlog, the sprint backlog, and the increment. The product backlog represents the prioritized list of requirements, which is continually updated to incorporate the learnings of each sprint. The sprint backlog is the amount of
work chosen by the development team to be executed in the course of one sprint. Unlike the product backlog, the sprint backlog requirements do not change during the sprint. The increment is the outcome of the development work in the course of one sprint. It is used in the review and retrospective meeting to test and seek feedback from customers and stakeholders. Based on this feedback, the product backlog is re-prioritized.
The roles include the product owner, the scrum master, and the scrum team. The product owner is the person responsible for updating and managing the product backlog to achieve the desired product. The scrum master is the process owner and facilitates the team in the application of scrum, as well as the removal of impediments to the development project. Lastly, the scrum team is responsible for the actual development and consists of a cross-functional, fully dedicated team.
While the reduction of uncertainty promises a beneficial application of agile and scrum in PSS development, it is important to note that PSS also differ from the origin of agile in pure software development. While software is intangible, infinitely divisible, and easily refactorable, this is not true for PSS. Particularly the product element of PSS is tangible, most likely not infinitely divisible, and, once produced, only refactorable at additional cost. The service element, on the other hand, is intangible and often easily refactorable (or adjustable to the customer conditions), but can only be fully tested in the field as it is produced and consumed simultaneously. As such, it remains to be investigated which elements of agile and scrum can be applied to PSS development to address the strong challenges during development.
**Conceptual Framework: Applicability of Agile and Scrum in PSS Development**
To answer our research question, we utilize a conceptual framework combining the four bases of agile defined by Boehm and Turner (2003) with the nine elements from the events, artefacts and roles described by Cooper and Sommer (2016a) and Schwaber and Sutherland (2017). We apply this framework to assess the suitability and adaptation of agile and scrum in the specific context of PSS development.
**Agile Elements**
*(1) The Application*
The first basis of agile, the Application, is highly similar to the original description, as in PSS development, too, volatile conditions regularly have to be managed and customer needs addressed. In addition, PSS often possess systemic complexity between the product and the service part, which implies that scrum needs to be scaled to coordinate the separate developments of several components (e.g. service and product components) in parallel, e.g. through “scrum-of-scrum” (Dingsøyr et al., 2018). Overall, no adaptation of the element of Application to PSS development is needed.
*(2) The Management*
For the basis of Management, small adaptations have to be considered in the PSS context. The development of PSS may be highly customer focused and involve a close collaboration or even co-creation with the customer (Kristensson et al., 2008; Vargo and Lusch, 2008). It also often relies strongly on communication and team collaboration (Wolfenstetter et al., 2015). However, large and traditional enterprises moving towards servitizing their business through offering PSS are likely unable to abandon their legacy plan-based and KPI-driven development and solely rely on qualitative control mechanisms and tacit, interpersonal knowledge (Boehm and Turner, 2005). This organizational resistance to agile may be overcome through change management
practices. As such, the agile basis of *Management* is generally applicable to PSS development; however, it often calls for additional change management practices.
*(3) The Technical*
The *Technical* basis of agile partly conflicts with the characteristics of PSS. Some PSS can possess a high systemic complexity, which arises from the combination of (tangible) product and (intangible, process-focused) service elements. This combination creates high interdependencies to ensure operability of the Product-Service System. As such, the service has to be tailored to the product characteristics, and the product design should consider the service-ability (Trevisan and Brissaud, 2017). Due to this strong limitation, literature proposes a more structured approach, such as the application of the Scaled Agile Framework (SAFe) (Leffingwell et al., 2013), to coordinate the integrated development. In addition, the product element is not as easily refactorable as pure software code due to its tangibility (Conforto et al., 2014) and thus limits the optimal operation of agile as suggested by the *Technical* basis. However, PSS development generally shares the struggle with volatile requirements and the need for testing the developed increments (Morelli, 2006; Wolfenstetter et al., 2015) mentioned for the *Technical* basis. In conclusion, while some characteristics of PSS development comply with this *Technical* basis, others call for strong adjustments.
*(4) The Personnel*
Lastly, the agile basis of *Personnel* is again partially applicable to PSS development. While PSS development thrives on cross-functional teams with high customer engagement (Wolfenstetter et al., 2015), traditional manufacturers regularly struggle to staff employees full-time on the project. In addition, large organizations are often regionally spread out, which hampers the ability to develop with co-located team members (Conforto et al., 2014). Furthermore, traditional manufacturers may struggle with the transition from a hierarchical towards a flat and empowered culture (Paasivaara et al., 2018). As such, in principle PSS development complies with the *Personnel* basis of agile; in practice, however, manufacturers may need to adapt agile to operate within the existing structures of the organization.
**Scrum Elements**
*(1) Sprint and Sprint Planning*
The first event consists of the *sprint planning and sprint*. The agile literature has already investigated the applicability of the sprint planning and sprint to new service development as well as to new product development in separation. In new service development, the service may be developed through planned, time-boxed iterations and short feedback cycles with the customer (Cocca et al., 2015; Lamberth-Cocca and Meiren, 2017; Lankhorst, 2012). In new product development, however, the *sprint planning and sprint* is not as easily applied. Due to the tangibility of the product, many teams struggle to decompose the physical product into several fully functional sub-products which are developed in sequential, periodic sprint cycles. As such, not every *sprint planning and sprint* may be able to create a functional sub-product that can be demonstrated to the customer as originally defined. Rather, several *sprint planning and sprints* may be required to deliver the concept, the CAD model or drawing, the testable component, an integrated prototype, and finally the product (Cooper and Sommer, 2016b). In addition, the systemic complexity of integrating product and service elements calls for a more structured approach to coordinate the interrelation between them (Morelli, 2006; Wolfenstetter et al., 2015). To address these limitations of PSS development with respect to a pure *sprint planning and sprint*, the literature proposes a more linear agile process, called the Agile-Stage-Gate hybrid (Cooper and Sommer, 2016a). Here the linear development
mode and the periodic control of the stage gate process are merged with agile sprints in between the gates.
*(2) The Daily Scrum*
The second event, the daily scrum, is intended to foster a short, intense exchange of the most critical information regarding the development project (Paasivaara et al., 2012). PSS development frequently also builds upon intense collaboration between the team members to coordinate the systemic complexity (Trevisan and Brissaud, 2017). The daily scrum is thus easily transferrable to the PSS development context and may even enhance the collaboration.
*(3) The Retrospective and Review Meeting*
The third event of scrum is called the retrospective and review meeting. Here, lessons learned are implemented already in the course of the development project. In contrast, in the traditional PSS development literature, the revision of the developed PSS and a reflection on the underlying process are conducted after finalizing the PSS (Aurich et al., 2006; Vasantha et al., 2012b). Thus, the lessons learned are implemented only in the subsequent development project. Given the often high degree of uncertainty in PSS development, fast learning and adaptation is not only transferrable, but also strongly recommended.
*(4) The Product Backlog*
The first artefact of scrum is the product backlog, which comprises a prioritized list of features the final offering should have. It represents the counterpart to the detailed requirement specifications in PSS development (Aurich et al., 2006). However, the product backlog of scrum is a tool which acknowledges the degree of uncertainty connected to the requirements and is thus constantly updated (Schwaber and Sutherland, 2017). Due to the uncertainty in the context of PSS development (Morelli, 2006), the application of continuously adaptable requirements can be recommended. As the highest-prioritized items of the product backlog secure the most important contributions to customer value (Schwaber and Sutherland, 2017), the application of the product backlog (in combination with iterations collecting customer feedback) enhances customer satisfaction (Cooper and Sommer, 2018). Since PSS often aim to create strong long-term customer relationships (Beuren et al., 2013; Visnjic et al., 2016), the application of the product backlog may not only reduce uncertainty, but also strengthen the customer relationship (and satisfaction).
*(5) The Sprint Backlog*
The second artefact is the sprint backlog. The sprint backlog is the selection of the most important requirements to be developed in one sprint and remains unchanged in the course of this sprint. At its core, the sprint backlog provides the team with the necessary structure and implies a small plan-based approach: after planning the requirements, the actual development is carried out. Traditional PSS development methodologies follow this logic (just at a larger scale) (Aurich et al., 2006; Vasantha et al., 2012b). Accordingly, if the structure of periodic sprints is to be used, the sprint backlog should be easily transferrable to PSS development.
*(6) The Increment*
The third artefact is the Increment. This is the complete, functional, testable and releasable outcome of a sprint (Schwaber and Sutherland, 2017). Although in the context of new service development the increment could be easily applied (Cocca et al., 2015; Lamberth-Cocca and Meiren, 2017; Lankhorst, 2012), in the context of new product development the original definition of the increment is troublesome (Cooper and Sommer, 2018; Karlström and Runeson, 2006). As elaborated before, the physicality of the product hampers the development of a complete and functional product increment (Karlström and Runeson, 2006). Research on the use of scrum in product development
proposes here a redefinition of the increment towards “a complete and testable deliverable”, which can thus also be applied in the context of PSS development (Cooper and Sommer, 2016b).
*(7) The Product Owner*
The first role is the product owner. The product owner is responsible for the prioritization of the product backlog and the stakeholder management to ensure management support (Schwaber and Sutherland, 2017). In PSS development, this role requires a strong understanding of both the product and the service elements, as the product owner must continually re-prioritize the requirements for the entire project. While the role as such is easily applicable to PSS development, in practice it may require senior experts to execute this role (Dikert et al., 2016).
*(8) The Scrum Master*
The second role is the scrum master. The scrum master is responsible for the correct execution of the scrum methodology and the removal of obstacles the development team may encounter (Schwaber and Sutherland, 2017). In the context of PSS development, organizational resistance can arise as both the PSS offering (Visnjic et al., 2016) and the scrum process (Dikert et al., 2016) may be novel to the organization. Therefore, the focus should be on a properly trained scrum master with strong stakeholder management capabilities (Boehm and Turner, 2005). Overall, the scrum master role should be easily applicable to any PSS development project.
*(9) The Scrum Team*
The scrum team is defined as the last role of the scrum methodology. Here, in the original definition the team should be fully dedicated, co-located, empowered and cross-functional (Schwaber and Sutherland, 2017). As mentioned in the Personnel element, full dedication, co-location and empowerment can be challenging for traditional manufacturers. The scrum methodology specifies, however, that the full potential of scrum can only be reached if the elements are kept as defined – specifically the fully dedicated and co-located team (Boehm and Turner, 2005). Weakening this requirement would strongly impact the team’s ability to learn and adapt fast. Accordingly, an adaptation for this challenge could be to apply scrum only to highly critical projects of PSS development with high uncertainty. In short, the application of the scrum team in its original sense poses challenges to traditional manufacturers, but should not be compromised when applied in the PSS context.
Figure 1 summarizes our conceptual framework. In sum, the concept of agile is generally applicable to the PSS development context. While some elements are transferrable to PSS development either fully or with smaller adaptations, the Technical element requires major adaptation in the PSS context. The same holds for the application of agile through the scrum method. Some elements are easily transferrable to the PSS development context, while others need major adaptations.
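Since Figure 1 is not reproduced here, the assessment developed above can be sketched as a small data structure. This is only an illustrative encoding of the preceding discussion; the wording of the labels is ours and is not taken from the original figure.

```python
# Illustrative summary of the conceptual framework discussed above.
# The assessments paraphrase the text; this is not the authors' figure.
AGILE_ELEMENTS = {
    "Application": "applicable without adaptation; scaling (e.g. scrum-of-scrum) for systemic complexity",
    "Management":  "generally applicable; often calls for additional change management practices",
    "Technical":   "partly conflicts with PSS characteristics; major adaptation needed (e.g. SAFe)",
    "Personnel":   "applicable in principle; full dedication and co-location are hard for manufacturers",
}

SCRUM_ELEMENTS = {
    "events": {
        "sprint planning and sprint":       "needs adaptation, e.g. the Agile-Stage-Gate hybrid",
        "daily scrum":                      "easily transferrable",
        "review and retrospective meeting": "transferrable and strongly recommended",
    },
    "artefacts": {
        "product backlog": "recommended; reduces uncertainty and strengthens customer relationships",
        "sprint backlog":  "easily transferrable if periodic sprints are used",
        "increment":       "redefine as 'a complete and testable deliverable'",
    },
    "roles": {
        "product owner": "applicable; may require senior experts",
        "scrum master":  "applicable; requires proper training and stakeholder management skills",
        "scrum team":    "keep as defined; apply only to highly critical, uncertain PSS projects",
    },
}

if __name__ == "__main__":
    for element, assessment in AGILE_ELEMENTS.items():
        print(f"{element}: {assessment}")
```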
Implications and Conclusion
In this paper, we investigate the question of which elements of agile and scrum are applicable to the development of Product-Service Systems. Through an exploratory literature review, we derive a conceptual framework based on the literature streams of agile and servitization. This framework distinguishes four elements of agile and nine elements of scrum, which are each discussed in the context of PSS development. While the concepts of agile and scrum have already been discussed in depth in the software development literature, we investigate the expansion of their application areas to the context of PSS, which has not been discussed previously.
This framework contributes to the servitization literature by discussing a theoretically founded, alternative development approach for PSS through the application of agile and scrum. By uniting the bases of agile with a method of implementation, scrum, we help address the challenges during the development of PSS that arise from its often volatile and uncertain conditions.
This paper also contributes to the agile literature by expanding its areas of application. Through the theoretical discussion of the application of agile and scrum in PSS development, we test their transferability beyond the original realm of software. We reveal their strengths and limitations in the context of PSS, and expand the discussion by proposing potential adaptations needed for this application.
For managers, this research holds several implications. The proposed framework raises awareness of the distinct circumstances of PSS development. It further provides managers with a guideline on when to apply agile management methods and how scrum can be utilized in the context of PSS development. It also gives suggestions on how to adapt scrum specifically to the PSS development setting.
This research is based on a retrospective and conceptual analysis of academic literature, which represents a major limitation to the validity of the framework. Further research is planned to advance the insights from this framework through case-study research.
References
Pattern-Supported Architecture Recovery*
Martin Pinzger and Harald Gall
Distributed Systems Group
Vienna University of Technology
Argentinierstrasse 8/184-1, A-1040 Vienna, Austria, Europe
{pinzger, gall}@infosys.tuwien.ac.at
Abstract
Architectural patterns and styles represent important design decisions and thus are valuable abstractions for architecture recovery. Recognizing them is a challenge because styles and patterns basically span several architectural elements and can be implemented in various ways depending on the problem domain and the implementation variants. Our approach uses source code structures as patterns and introduces an iterative and interactive architecture recovery approach built upon such lower-level patterns extracted from source code. Associations between extracted pattern instances and architectural elements such as modules arise which result in new and higher-level views of the software system. These pattern views provide information for a consecutive refinement of pattern definitions to aggregate and abstract higher-level patterns which finally enable the description of a software system’s architecture.
1. Introduction
Developing complex software systems requires a description of the structure or structures, which comprise software components, the externally visible properties of those components, and the relationships among them [1]. Such a description, called software architecture, is also fundamental for further engineering activities concerning the reuse, maintenance, and evolution of existing software components and systems.
Changes are in the nature of software systems and also have an impact on the architecture of a system. In most cases they cause a drift between the as-designed and as-built architecture because they are often realized only in the implementation but not in the design of a software system.
In the field of product lines the impact of such changes increases even more because there is not only one single system but a family of systems. Therefore, changes are one primary reason for analyzing and recovering the architecture of existing systems.
Architecture recovery refers to all techniques and processes used to abstract a higher-level representation (i.e., software architecture) from available information such as existing artifacts (e.g., source code, profiling information, design documentation) and expert knowledge (e.g., software architects, maintainers). Basically this means the extraction of those building blocks which constitute architectural properties and finally the software architecture. From this point of view, we think of architectural styles and patterns, which are inherent in almost any design and thus are primary objectives for architecture recovery. But recognizing such styles and patterns is a challenge because they comprise several architectural elements (subsystems, modules, classes, functions, variables) and are implemented in various ways (depending on the problem at hand and on the programming language).
In this paper we extend our architecture recovery framework described by Jazayeri et al. [8] and introduce an iterative and interactive architecture recovery approach which is based on patterns. In this context we refer to patterns as solutions to recurring problems on different levels of abstraction (e.g., code patterns, design patterns, and architectural patterns) [13, 5]. Each pattern has typical properties and elements which indicate it. These so-called hot-spots are the starting point for architecture recovery, because they enable an abstraction of higher-level patterns. In our approach we primarily rely on existing knowledge from experts and design documents and on a fast and effective recognition of hot-spots. Therefore, we start architecture recovery by gathering knowledge about the software system and specifying pattern definitions, which pertain to hot-spots in terms of source code structures. We use an extended string pattern matching technique to match the pattern definitions with source code. Associations between extracted pattern instances and architectural elements such as modules arise which result in new views of the software system. These pattern views contain the key information of higher-level patterns and enable the refinement of pattern definitions until finally the software architecture is reconstructed.

---

*This work is funded by the European Commission under EUREKA 2023/ITEA-ip00004 ‘from Concept to Application in system-Family Engineering (CAFE)’.
Following this introduction, Section 2 provides related work concerning architecture recovery using patterns. Section 3 describes the pattern-supported architecture recovery approach in detail. Evaluation of the method with a case study is presented in Section 4. Finally, Section 5 summarizes this work and indicates future work.
2. Related work
Architecture recovery has received considerable attention recently and various frameworks, techniques and tools have been developed. Basically, existing knowledge, obtained from experts and design documents, and various tools are mandatory to solve the problem. Hence, a common idea is to integrate several tools in architecture workbenches such as Dali [9]. In such workbenches, a variety of lexical-based, parser-based and profiling-based tools are used to examine a system and extract static and dynamic views to be stored in a repository. Analyses of these views are supported by visualization and specific analysis tools. They enable interaction with experts to control the recovery process until the software architecture is reconstructed.
Concerning architecture reconstruction, much work has focused on techniques which combine bottom-up and top-down approaches. Bottom-up, they use reverse engineering tools to extract source models (e.g., an Abstract Syntax Tree); top-down, they apply queries to extract expected patterns. Fiutem et al. [3] describe such an approach. They use a hierarchical architectural model that drives the application of a set of recognizers. Each recognizer works on the Abstract Syntax Tree (AST) and is related to a specific level of the architectural model. They produce different abstract views of the source code which describe some architectural aspects of the system and are represented by hierarchical architectural graphs.
Harris et al. [7] outline a framework that integrates reverse engineering technology and architectural style representations. In bottom-up recovery the bird’s eye view is used to display the file structure and file components of the system, and to reorganize information into more meaningful clusters. Top-down style definitions place an expectation on what will be found in the software system. These expectations are specified by recognition queries which are then applied to an extracted AST. Each recognized style provides a view of the system and the collection of these views partially recovers the overall design of the software system.
Guo et al. [6] outline an iterative and semi-automatic architecture recovery method called ARM. Existing knowledge gained from design documentation is used to define queries for potential pattern instances which are then applied automatically to extracted and fused source model views. Human evaluation is required to determine which of the detected pattern instances are intended, which are false positive and false negative. ARM supports patterns at various abstraction levels and uses lower-level patterns to build higher-level patterns and also composite patterns. In this way the approach aims particularly at systems that have been developed using design patterns whose implementations have not eroded over time.
Another approach which uses source models and queries as basic inputs for architecture recovery is introduced by Sartipi et al. [12] and called Alborz. The problem is viewed as an approximate graph matching problem, where the extracted source models and defined queries are represented as attributed relational graphs. Based on existing knowledge obtained from experts and design documents, abstract patterns are defined using an Architectural Query Language (AQL). Each query is expanded into a graph which is then approximately matched with the source model graph using the branch and bound search algorithm. In each iteration the user may refine his queries and generate a more accurate model.
These related approaches [3, 7, 6, 12] and our approach have in common that they all take into account patterns to reconstruct the architecture of a software system. But there are two basic differences: first in the view of patterns and second in their extraction. We regard patterns as the key elements of software systems residing in all levels of abstraction. Thereby we start pattern recognition from the lowest level (i.e., source level) and use hot-spots to stepwise abstract higher-level patterns. Hot-spots indicate patterns and are represented by meaningful source code structures (e.g., variables, functions, data structures, program structures). To detect such hot-spots in source code we apply extended string pattern matching which facilitates fast and effective queries. In contrast the related approaches mentioned above regard patterns as associated architectural elements (e.g., a specific sequence of function calls) residing in higher levels of abstraction (e.g., code-structure level). Basically, these approaches transform the software system into a source model representation such as an AST and apply queries to recognize expected patterns. The transformation of existing artifacts (e.g., source code) implies the use of reverse engineering tools such as parsers and profilers which extract source models containing the architectural elements. But these reverse engineering tools typically are time and memory consuming and in a first step of architecture recovery too costly. In this context our string pattern matching approach represents a more effective solution which also allows a later involvement of other pattern matching techniques, such as those described before.
3. Pattern views
Source code typically is structured and contains semantically rich programming constructs such as variables, functions, data structures, and program structures which indicate patterns and therefore are valuable inputs for architecture recovery. The extraction of these basic patterns provides the user with additional views of the software system which we call pattern views. In this paper we primarily focus on the generation of such views and introduce an approach which consists of the following steps:
1. Analysis and pattern candidate identification: Based on design documentation and expert knowledge, expected pattern candidates are identified.
2. Pattern definition: Based on the expected pattern candidates, appropriate pattern definitions are taken from a pattern repository or otherwise generated using a specific pattern language.
3. Pattern recognition: Pattern definitions are matched with source code and information about recognized patterns is stored in a repository.
4. Pattern view computation: Associations between recognized pattern instances and other architectural elements are computed. They result in various new views of the software architecture.
5. Analysis of patterns and views: Resulting views and recognized patterns are analyzed to abstract architectural patterns. Already applied patterns are refined, new pattern definitions are validated and stored in the repository.
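Purely as an illustration of the control flow implied by these five steps, the loop below uses hypothetical placeholder functions; they are not part of the paper, where the steps are performed by engineers together with tools such as ESPaRT (recognition) and Rigi (visualization).

```python
# Illustrative sketch of the iterative, interactive recovery loop (steps 1-5 above).
# All helper functions are hypothetical placeholders, not part of the paper.

def identify_candidates(design_docs, expert_knowledge):      # step 1
    return ["socket communication"]

def define_patterns(candidates):                              # step 2
    return [{"id": c} for c in candidates]

def recognize_patterns(definitions, source_files):            # step 3
    return []                                                 # quadruples (pid, fid, start, end)

def compute_views(repository):                                # step 4
    return {"composition": [], "pattern-element": [], "pattern-module": []}

def analyze(views, repository):                               # step 5
    return [], True                                           # refined candidates, done?

def recover_architecture(source_files, design_docs, expert_knowledge, max_iterations=10):
    repository, views = [], {}
    candidates = identify_candidates(design_docs, expert_knowledge)
    for _ in range(max_iterations):
        definitions = define_patterns(candidates)
        repository += recognize_patterns(definitions, source_files)
        views = compute_views(repository)
        candidates, done = analyze(views, repository)
        if done:                                              # stop once the architecture is reconstructed
            break
    return views
```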
In the following sections we describe each of the steps in more detail.
3.1. Pattern identification
The primary focus of architecture recovery is on finding key information (i.e., patterns) which enables the description of architectural properties of an existing software system [8]. The information base containing these patterns consists of all artifacts comprising the software system (e.g., source code, documents, running system). This leads to a huge amount of data so that knowledge from experts and existing software documents is necessary to extract the essential information. Therefore the primary activity in this step is to investigate existing design documents and contact experts who are familiar with the design of the software system to gain knowledge about the software system and its primary architectural properties and their implementation by patterns. Clues about these expected patterns are crucial to initialize and control the recovery process (e.g., communication between components is implemented in C using sockets).
3.2. Pattern definition
Based on the information gathered in the first step, appropriate pattern definitions are either user-defined or taken from a repository. Because our approach takes into account significant text and structural information of source code we use a pattern definition language which facilitates regular expressions and source code structures. Currently there are several tools for string-based source code analysis (e.g., grep, perl, LSME, SCRUPLE) available, but they either do not support structures, need a huge amount of memory or disk space, or require a parser for each target programming language. We extended ESPaRT (Enhanced String Pattern Recognition Tool) by Knor et al. [10] to allow pattern specification in XML. ESPaRT overcomes the mentioned shortcomings by implementing a lexical tool which is based on regular expressions and considers structural information. It provides a definition language which enables the specification of patterns with preconditions and follow-up examinations (a pattern match has to fulfill the precondition and can be further investigated through a sub match definition). Figure 1 shows the primary structure of an ESPaRT pattern definition in XML format. The term pattern expression stands for an ESPaRT definition which can be a simple regular expression or a more complex one containing a combination of ESPaRT-specific commands and regular expressions.
```xml
<pattern id="patternid">
<precondition match="true">
<!-- pattern expression -->
</precondition>
<match>
<!-- pattern expression -->
</match>
<submatch match="true">
<!-- pattern expression -->
</submatch>
</pattern>
```
Figure 1. ESPaRT pattern definition
The organization of pattern definitions in different sections is crucial for addressing the problem that patterns are implemented in various ways. The clue is to give a more general pattern definition (e.g., a text block containing one or more hot-spots) which limits the search space but nevertheless covers the various implementation variants.
3.3. Pattern recognition
The pattern recognition process of ESPaRT takes the definitions specified in the former step as input and matches them with the information base (e.g., source files). In a first matching process the huge amount of information is sliced and text blocks are extracted based on the specified pattern definition. A further optional condition for matched text blocks can be defined in the precondition section. This filters possible wrong matches and minimizes the input for the following matching process where the extracted pattern instances are investigated in more detail by the application of sub match pattern definitions. The result of this stepwise pattern recognition process contains all detected primary and sub pattern instances described by quadruples \((\text{pid}, \text{fid}, \text{start}, \text{end})\) where \(\text{pid}\) indicates the pattern definition and \(\text{fid}\), \(\text{start}\), and \(\text{end}\) the location (source file name, start and end line number) of the matched pattern. All generated quadruples are stored in a central repository for further analysis and computation of pattern views.
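A minimal sketch of such a repository, assuming SQLite as the storage backend (the paper only speaks of a central repository and does not name a concrete database); the column names are our own.

```python
# Minimal sketch of a central repository for the (pid, fid, start, end) quadruples.
# SQLite and the column names are assumptions made for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pattern_instance (
        pid        TEXT,     -- identifier of the pattern definition
        fid        TEXT,     -- source file name
        line_start INTEGER,  -- first line of the matched text block
        line_end   INTEGER   -- last line of the matched text block
    )
""")

# Example quadruples in the shape reported later in Table 1.
quadruples = [
    ("C-S", "log.c", 116, 195),
    ("C-S", "snort.c", 321, 354),
    ("J-S", "SnortPlugin.java", 889, 943),
]
conn.executemany("INSERT INTO pattern_instance VALUES (?, ?, ?, ?)", quadruples)
conn.commit()
```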
3.4. Pattern view computation
Based on the idea of views from Kazman et al. [9], views of related pattern instances consider the architecture of a software system from an important point of view - patterns. Like other extracted views - such as static and dynamic call views - pattern views overlap and complement one another. The primary objective of this step is to associate matched pattern instances with each other and with other architectural elements and to visualize them. The results are views which provide engineers with information for refining pattern definitions and guiding the recovery process in the right direction. For the visualization of views we use existing tools such as Rigi from Wong et al. [14]. Regarding the associations we focus on three different pattern views: the pattern composition view, the pattern-element view, and the pattern-module view.
The architecture recovery process is initiated by pattern definitions which primarily focus on the key elements of patterns. To continue the process and abstract higher-level patterns it is necessary to refine these pattern definitions. One possible way is to analyze the composition of patterns. Taking the location property of the extracted and stored pattern instances as input, a simple algorithm (e.g., an SQL statement) computes a directed graph showing the composition of patterns (Figure 2).
A directed association between two pattern instances \(\text{PI}_1\) and \(\text{PI}_2\) indicates that pattern \(\text{PI}_2\) is part of pattern \(\text{PI}_1\). Particularly pattern instances with a high fan-in (\(\text{PI}_1\)) or fan-out (\(\text{PI}_4\)) are of interest because they depict pattern instances which on the one hand are aggregated and on the other hand are key elements of patterns.
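One possible formulation of this containment check, assuming the quadruple table sketched above (the paper only states that a simple algorithm, e.g. an SQL statement, is used):

```python
# Sketch of a containment query behind the pattern composition view: an instance is
# part of another if its text block lies inside the other block in the same file.
# Column names follow the repository sketch above; the query itself is illustrative.
COMPOSITION_VIEW_SQL = """
SELECT parent.pid  AS parent_pattern,
       child.pid   AS child_pattern,
       parent.fid  AS source_file
FROM   pattern_instance AS parent
JOIN   pattern_instance AS child
  ON   child.fid        =  parent.fid          -- same source file
 AND   child.line_start >= parent.line_start   -- child block starts inside parent
 AND   child.line_end   <= parent.line_end     -- child block ends inside parent
 AND   NOT (child.line_start = parent.line_start AND child.line_end = parent.line_end)
"""
```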
Commonalities of architectural elements are an interesting aspect for further investigations because they provide information which patterns are appropriate for aggregation and abstraction. But additional views that show these commonalities in more detail are required. One step towards a more detailed view is the combination of pattern instances and source model elements, such as functions, variables or data structures. For example Figure 3 represents a combination of patterns and function calls. The source model elements are obtained from reverse engineering tools such as Imagix4D or SniFF+ and stored in a central repository. A simple algorithm expressed, for example, in SQL is appropriate for relating detected pattern instances and source model elements. A directed association between a pattern and a source model element is established if the element is part of the pattern. The resulting directed graph shows all considered source model elements which constitute the pattern and the commonalities between patterns (e.g., \(\text{PI}_1\) and \(\text{PI}_2\) have \(f_1\) and \(f_2\) in common).
The pattern-element view considers associations between pattern instances at a function level. Continuing the architecture recovery process increases the abstraction level because architectural elements are aggregated and abstracted. At higher levels, aggregated and more abstracted elements such as modules are added to the input data of succeeding recovery iterations. Typically a module is implemented in one source file containing functions, variables, and data structures. Based on the file relation of modules and matched pattern instances, a pattern-module view is computed which shows associations between modules. This means that two modules are associated if instances of the same pattern definition are matched in both of them.
An example is shown in Figure 4 where the modules $M_1$, $M_2$, and $M_3$ are associated by pattern definition $PD_1$. Similar pattern definitions often indicate similar responsibility and assist the engineer in classifying architectural elements (e.g., modules). This is performed in the next step of our architecture recovery process.
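A corresponding sketch for the pattern-module view, again assuming the quadruple table introduced above; two files (modules) are related when instances of the same pattern definition were matched in both:

```python
# Sketch of a query behind the pattern-module view: modules (here approximated by
# source files) are associated when the same pattern definition matched in both.
# Column names follow the repository sketch above; the query itself is illustrative.
PATTERN_MODULE_VIEW_SQL = """
SELECT DISTINCT a.fid AS module_a,
                b.fid AS module_b,
                a.pid AS pattern_definition
FROM   pattern_instance AS a
JOIN   pattern_instance AS b
  ON   a.pid = b.pid     -- same pattern definition
 AND   a.fid < b.fid     -- different files; each pair is reported once
"""
```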
3.5. Analysis of patterns and views
Pattern views show significant pattern instances which could be key building blocks of a software system. Particular attention is paid to information which supports aggregating and abstracting patterns. A basic guiding principle is to investigate those pattern instances in more detail which are similar because they are potential candidates for aggregation and abstraction. Similarities arise from commonly used architectural elements such as functions, variables, data structures, lower-level patterns, already aggregated and abstracted patterns, and also components. When considering potential candidates, additional views and also the calculation of metrics such as those proposed by Sartipi et al. [12] assist in understanding the degree of similarity. They guide engineers towards the right decision about which pattern definitions to refine and which pattern instances to aggregate and abstract in the next recovery iteration.
4. Case study
We applied our pattern-supported software architecture recovery to a distributed intrusion detection system called SPARTA [11] which consists of approximately 100 modules (100 KLOC) implemented in C and Java. The primary task of this software system is to detect distributed intrusion patterns (e.g., telnet chains, spreading worms). This is done by sniffing network traffic and applying certain rules to the input data. Matched packets are stored in a database and queried by mobile agents. For the purpose of demonstration and evaluation we basically focused on one architectural property called data communication [8] and sought to answer basic questions such as:
- Which components contribute to communication?
- Who are the senders and receivers?
- Does the sender block the receiver?
- Is an architectural style such as Client/Server used?
First phase of recovery
We started the architecture recovery process with gathering knowledge about the problem domain of intrusion detection and the basic building blocks of related software systems. First, there is a tool interacting with the user to configure the sniffer by defining rules that specify the network packets which should be captured. Second, there is the sniffer-tool which, based on its configuration, observes the network traffic and generates an event whenever a packet occurs which conforms to a specified rule. Each event generated in this way and its properties are sent from the sniffer-tool to the logging-tool that writes the data to a repository. All three primary building blocks are connected through communication channels realized by TCP/IP-sockets.
Based on this information we specified initial pattern definitions to query socket patterns in both implementation languages, Java and C. The clue is to specify the hot-spots of patterns and to take into account as many implementation variations as possible. Relating to sockets, such a hot-spot is the socket-creation statement (e.g., `mySocket = socket(...)` in C or `mySocket = new Socket(...)` in Java). In terms of C this results in a pattern definition as shown in Figure 5. The interpretation of this definition is: match text blocks
starting with "{" and ending with "}" containing a string "= socket(...)" where "..." can be an arbitrary string. An analog pattern definition was also specified for sockets implemented in Java.
<pattern id="C-Socket">
<match>
<block start="{" end="}">
<text>= socket(</text>
<anytext />
<text>);</text>
</block>
</match>
</pattern>
Figure 5. Initial pattern definition for matching potential socket implementations in C
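For illustration only, the effect of this definition can be mimicked with plain regular expressions (ESPaRT's own matching engine is not shown in the paper). The sketch below handles only simple, non-nested brace blocks and reports quadruples in the (pid, fid, start, end) shape of Section 3.3.

```python
# Illustrative re-implementation of the Figure 5 pattern with plain Python regular
# expressions (ESPaRT is not used here). Only non-nested { ... } blocks are handled.
import re

BLOCK_RE = re.compile(r"\{[^{}]*\}")                             # text block from "{" to "}"
HOTSPOT_RE = re.compile(r"=\s*socket\s*\(.*?\)\s*;", re.DOTALL)  # hot-spot "= socket(...);"

def match_c_socket(fid: str, source: str, pid: str = "C-S"):
    """Return (pid, fid, start, end) quadruples for blocks containing a socket() call."""
    quadruples = []
    for block in BLOCK_RE.finditer(source):
        if HOTSPOT_RE.search(block.group()):
            start = source.count("\n", 0, block.start()) + 1     # 1-based start line
            end = source.count("\n", 0, block.end()) + 1         # 1-based end line
            quadruples.append((pid, fid, start, end))
    return quadruples

example = "void init() {\n  int fd;\n  fd = socket(AF_INET, SOCK_STREAM, 0);\n}\n"
print(match_c_socket("log.c", example))    # [('C-S', 'log.c', 1, 4)]
```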
For the recognition of pattern instances we fed all source files of each programming language and the corresponding pattern definition to ESPaRT [10]. The result of this first pattern recognition process is presented in Table 1 describing the location of each recognized pattern instance by source file name, start and end line number, and the identifier of the pattern definition (C-S for socket in C, J-S for socket in Java). Each match models a quadruple (fid, pid, start, end) which is stored in the repository.
<table>
<thead>
<tr>
<th>file</th>
<th>location start - end</th>
<th>pattern-id</th>
</tr>
</thead>
<tbody>
<tr>
<td>log.c</td>
<td>116 - 195</td>
<td>C-S</td>
</tr>
<tr>
<td></td>
<td>810 - 826</td>
<td>C-S</td>
</tr>
<tr>
<td>snort.c</td>
<td>321 - 354</td>
<td>C-S</td>
</tr>
<tr>
<td></td>
<td>2039 - 2095</td>
<td>C-S</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>889 - 943</td>
<td>J-S</td>
</tr>
<tr>
<td></td>
<td>945 - 973</td>
<td>J-S</td>
</tr>
<tr>
<td></td>
<td>976 - 1064</td>
<td>J-S</td>
</tr>
<tr>
<td></td>
<td>1068 - 1105</td>
<td>J-S</td>
</tr>
<tr>
<td></td>
<td>1108 - 1129</td>
<td>J-S</td>
</tr>
</tbody>
</table>
Table 1. Detected C and Java socket patterns
The advantage of this string pattern specification over regular expressions is the capability to manage text blocks. These blocks contain important information around detected hot-spots which is mandatory to recognize potential higher-level patterns.
Before the analysis process is started the stored data has to be preprocessed and represented in a form which supports the user in detecting eye-catching elements. We used Wong’s Rigi-tool [14] for the visualization of views and some small Perl Scripts to transform the stored data into the Rigi Standard Format (RSF).
Regarding the first architectural question we built the pattern-module view shown in Figure 6. Each stored quadruple is read from the repository. The file and pattern identifiers constitute this view: each unique file identifier indicates a node; an edge between two nodes is generated if two quadruples contain the same pattern identifier but different file identifiers. One problem occurred because of the need for separate pattern definitions for each implementation language (Java and C). Both have different pattern identifiers but were used for the same purpose. The view should not distinguish between implementation languages, and hence we assigned the same identifier to the Java and C pattern definitions.
Figure 6. Pattern-module view of source files containing potential socket pattern instances
The resulting graph in Figure 6 shows that the modules log.c, snort.c, and SnortPlugin.java contain expected socket implementations. The mutual associations between the components arise from the fact that Rigi cannot display bidirectional associations. Unfortunately, by analyzing this graph it was not possible to derive more detailed architectural information such as, for example, which module implements a server and which one a client. To obtain this information we had to reconsider the implementation of server and client sockets in each programming language and refine our pattern definitions.
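The authors used small Perl scripts for this transformation. Purely as an illustration, an equivalent sketch in Python follows, assuming the simple "relation source target" triple form of unstructured RSF and a relation name of our own choosing.

```python
# Illustrative stand-in for the small Perl scripts mentioned above: emit the
# pattern-module view as simple RSF-style triples "relation source target".
# Edge rule as described in the text: two files are connected if quadruples with
# the same pattern identifier occur in both. Node/type declarations are omitted,
# and the relation name "socketPattern" is our own choice.

def pattern_module_rsf(quadruples, relation="socketPattern"):
    patterns_per_file = {}
    for pid, fid, _start, _end in quadruples:
        patterns_per_file.setdefault(fid, set()).add(pid)
    files = sorted(patterns_per_file)
    lines = []
    for i, a in enumerate(files):
        for b in files[i + 1:]:
            if patterns_per_file[a] & patterns_per_file[b]:   # shared pattern identifier
                lines.append(f"{relation} {a} {b}")
    return "\n".join(lines)

# Socket instances from Table 1, with the C and Java definitions mapped to one
# common identifier "S" as described above.
quads = [("S", "log.c", 116, 195), ("S", "snort.c", 321, 354),
         ("S", "SnortPlugin.java", 889, 943)]
print(pattern_module_rsf(quads))
# socketPattern SnortPlugin.java log.c
# socketPattern SnortPlugin.java snort.c
# socketPattern log.c snort.c
```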
Second phase of recovery
Before going deeper into investigations of the matched text blocks we looked up some implementation details about socket programming. In C, client and server sockets have the same data type but differ in the function calls that follow the socket() statement. A client typically executes connect(socket, address, ...) to connect to a server. A server, on the other hand, first binds its socket to an address and then listens and waits for requests; the corresponding statements are bind(socket, address, ...), listen(socket, ...), and accept(socket, ...). In Java the implementation differences between client and server sockets are similar to C and additionally involve different data types (Socket for clients and ServerSocket for servers). Based on this information we refined our pattern definitions to detect implementations of client and server components. Figure 7 shows the refined pattern definition of a client socket implemented in C; a short sketch of the underlying socket idioms follows the figure.
```
<pattern id="C-ClientSocket">
<precondition match="true">
<text>SOCKET</text>
<variable id="mySocket" />
</precondition>
<match>
<block start="{" end="}">
<variable id="mySocket" />
<text> = socket(</text>
<anytext />
<text>);</text>
</block>
</match>
<submatch match="true">
<text>connect(</text>
<variable id="mySocket" />
<anytext />
<text>);</text>
</submatch>
</pattern>
```
**Figure 7. Refined pattern definition for matching client sockets implemented in C**
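For orientation, the calls that the refined pattern definitions look for follow the standard Berkeley socket idiom. Python's socket module keeps the same call names (socket, connect, bind, listen, accept), so a minimal, illustrative sketch of the client and server sides looks as follows; host and port are placeholders, not values from the case study.

```python
import socket

HOST, PORT = "127.0.0.1", 9999  # placeholder connection parameters

def client():
    # Client idiom targeted by the C-ClientSocket pattern:
    # create a socket, then connect() to the server's address and port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    s.sendall(b"hello")
    s.close()

def server():
    # Server idiom (bind/listen/accept) targeted by the server-socket pattern.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((HOST, PORT))
    s.listen(1)
    conn, addr = s.accept()
    data = conn.recv(1024)
    conn.close()
    s.close()
```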
Basically we extended the number of elements and added a precondition and a submatch element which perform additional checks. The precondition contains a check for a data type definition "SOCKET" in the extracted text block. A variable called "mySocket" was introduced to reuse a matched socket identifier in the submatch. This leads to the following extended interpretation: match text blocks starting with "{" and ending with "}" containing a string "= socket(...);" where the string of the matched identifier is assigned to the variable "mySocket"; accept each matched text block as valid if it contains a variable definition "SOCKET" for the matched identifier; further investigate each valid text block to see whether it contains a string "connect(mySocket...);" where "mySocket..." stands for the matched variable identifier followed by an arbitrary string. While the match and precondition elements specify the creation of a socket, the submatch element specifies the statements which indicate whether a socket is a client or a server.
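A rough sketch of this three-step matching logic (match, precondition, submatch) is shown below in Python. It is an illustration of the interpretation just described, not the ESPaRT implementation, and it assumes the candidate text blocks have already been extracted as strings.

```python
import re

def match_c_client_sockets(blocks):
    """Rough filter mirroring Figure 7: match, precondition, submatch."""
    hits = []
    for block in blocks:
        # match: an assignment "<id> = socket(...);" inside the block
        m = re.search(r"(\w+)\s*=\s*socket\([^;]*\);", block)
        if not m:
            continue
        my_socket = m.group(1)
        # precondition: the identifier is declared with data type SOCKET
        if not re.search(r"\bSOCKET\s+" + re.escape(my_socket) + r"\b", block):
            continue
        # submatch: a connect() call on the same identifier marks a client
        if re.search(r"\bconnect\(\s*" + re.escape(my_socket), block):
            hits.append(my_socket)
    return hits
```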
Based on the information obtained from the first iteration we applied the specified pattern definitions (2 for C and 2 for Java) on the reduced search space. The outcome of the recognition process is shown in Table 2. Four pattern identifiers indicate the various socket implementations for Java and C (C-CS for client socket in C, C-SS for server socket in C, J-CS for client socket in Java, and J-SS for server socket in Java).
<table>
<thead>
<tr>
<th>File</th>
<th>Start - End</th>
<th>Pattern-ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>log.c</td>
<td>116 - 195</td>
<td>C-CS</td>
</tr>
<tr>
<td>snort.c</td>
<td>321 - 354</td>
<td>C-SS</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>889 - 943</td>
<td>J-CS</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>945 - 973</td>
<td>J-CS</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>976 - 1064</td>
<td>J-CS</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>1068 - 1105</td>
<td>J-CS</td>
</tr>
<tr>
<td>SnortPlugin.java</td>
<td>1108 - 1129</td>
<td>J-CS</td>
</tr>
</tbody>
</table>
**Table 2. Detected client and server socket patterns**
Two previously recognized pattern instances in the source files log.c and snort.c were omitted because they did not match the precondition. The remaining quadruples represent client and server components. Whereas log.c implements a client and snort.c a server, SnortPlugin.java implements both. Regarding the next architectural questions mentioned above we first had to examine which client communicates with which server. Basically this is not always expressed in the source code and further expert knowledge is necessary, such as which kinds of sockets are used. Referring to our case study we knew that the sockets under study are based on TCP/IP and thus use an IP address and a TCP port number as connection parameters. A client that wants to communicate with a server opens a socket with the corresponding server IP address and the port to which the server is bound. One of the possible ways to get this information is by computing a view of the various client and server pattern instances and their called functions plus parameters. Taking the involved quadruples and an extracted source model we generated the pattern view shown in Figure 8.
This view represents a server socket in C (C-SS) and a client socket in Java (J-CS), their called functions and accessed variables. To extract the essential information we had to perform a more detailed analysis. The server accesses the variables sd and addr in its <em>bind()-statement</em>, where sd is the socket identifier and addr contains the port to which the socket is bound. To identify the port number we further investigated the variable <em>sin_port</em> and retrieved a constant value <em>p</em>. The four recognized Java client socket implementations also use this constant port number in their <em>Socket()-statement</em>. The result showed that module SnortPlugin.java implements client sockets which connect to the server socket implemented in snort.c. Another such analysis showed that SnortPlugin.java also implements a server socket listening on the constant port number q, which is accessed by a client socket implemented in log.c. Further analyses of this combined pattern and source model element view also indicated that both servers use threads to handle requests and are not blocked by clients.
Finally, we discussed our gained information with an expert and got corresponding results: there are two socket servers waiting for client requests which are used for either rule or event transfer. Rules are transferred from four clients implemented in SnortPlugin.java to the server in snort.c. Events are transferred from one client realized in log.c to the server implemented in SnortPlugin.java. Both servers execute a separate thread for each client request.
5. Conclusions
In this paper we presented an architecture recovery approach based on patterns defined on the level of source code structures. Using expert knowledge and design documents, the key elements of expected patterns are specified. An extended string pattern matching technique allows a fast and effective recognition of these pattern definitions. The extracted pattern instances are associated with other architectural elements and form new views of the software architecture: a pattern-composition view, a pattern-element view, and a pattern-module view. By analyzing these views, pattern definitions are refined to aggregate and abstract higher-level patterns until the software architecture is reconstructed to the extent required by the engineer. The architecture recovery process thereby starts with specific properties of a system (e.g., "socket communication") and iteratively completes the recovered architecture descriptions by refinement of patterns.
We demonstrated the applicability of our approach on a real-world case study where we recovered the data communication property of an intrusion detection system. The case study also showed the necessity of existing knowledge to analyze and interpret the generated pattern views. Our approach is straightforward in that it starts with patterns built from source code structures, but it exhibits its full strength in the generation of views revealing (inter-)relationships between architectural elements, patterns, and modules.
Ongoing work concentrates on the computation of additional pattern views and the formulation of guidelines to analyze these views and control the recovery process. More case studies will be performed to further demonstrate the applicability of the approach in different application domains, especially in the field of product families.
References
OpenStudyBuilder – Status & Workshop on EDC Integrations
COSA Spotlight Q1 – 26 March 2024
Nicolas de Saint Jorre
Introduction
What is the OpenStudyBuilder?...
A NEW APPROACH TO STUDY SPECIFICATION
• Compliance with external and internal standards
• Facilitates automation and content reuse
• Ensures a higher degree of end-to-end consistency
3 ELEMENTS OF OpenStudyBuilder
• Clinical Metadata and Study Definition Repository (central repository for all study specification data)
• OpenStudyBuilder application / Web UI
• API layer (allowing interoperability with other applications) (DDF API Adaptor – enabling DDF SDR Compatibility)
# OpenStudyBuilder Components
**STUDIES**
- TITLE
- CRITERIA
- REGISTRY IDENTIFIERS
- INTERVENTIONS
- STRUCTURE
- PURPOSE
- POPULATION
- ACTIVITIES

**LIBRARY**
- CONTROLLED TERMINOLOGY
- MEDICAL DICTIONARIES (e.g., MedDRA)
- CONCEPTS (ACTIVITIES, UNITS, CRFs, COMPOUNDS)
- SYNTAX TEMPLATES
- DATA EXCHANGE STANDARDS
Goal of OpenStudyBuilder
Metadata driven
End-2-End Automation!
Connectivity is key!
**CDISC**
- ODM, Define.xml
- CDASH / CDASHIG
- SDTM / SDTMIG
- ADaM / ADaMIG
- COSMoS
- Controlled Terminology
**Sponsor Library**
**Dictionaries**
- SNOMED
- MedDRA
- MED-RT
- UNII
- LOINC
- UCUM
**Software Tools**
- Word Addin
- DDF Adaptor
- Any DDF Compatible System
- TFL Builder
**Output Formats**
- Sponsor version
- M11 version
- CPT version
- Ct.gov...
As ODM or CSV
Blank CRF
Annotated CRF
With Vendor Extensions
As Define.xml
(pre-version for both SDTM and ADaM)
Connectivity is key!
- Standards & Study Definitions
- API & DDF API
- OpenStudyBuilder Application
- Protocol (Word Add-In coming as open-source)
- Electronic Data Capture
- Scripts, CTMS, other MDR, SCE, TLF Builder, ...
Protocol Generation
StudyBuilder ribbon (Word add-in)
- One-way connection
- Code recognizes the document type
- User-friendly ribbon and ‘fly-out’ in Word
- Styles ensure proper formatting in Word
1.2 Flowchart
<table>
<thead>
<tr>
<th>Procedure</th>
<th>Screening</th>
<th>Treatment</th>
<th>Follow-up</th>
</tr>
</thead>
<tbody>
<tr>
<td>Visit short name</td>
<td>V1</td>
<td>V6</td>
<td>V11</td>
</tr>
<tr>
<td>Study day</td>
<td>-14</td>
<td>29</td>
<td>183</td>
</tr>
<tr>
<td>Visit window (days)</td>
<td>±0</td>
<td>±1</td>
<td>±1</td>
</tr>
<tr>
<td>Randomisation</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>End of Study</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Body Measurements</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Eligibility Criteria</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Laboratory Assessments</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
</tbody>
</table>
Structured content, including the SoA, will be transferred to the content controls of the Word-based Protocol Template.
CRF Standards & Metadata
Manage
Standard & Study CRF
Including rules, checks
Support vendor extensions
EDC Setup, Test, Execution
Finetuning, Layout
OpenStudyBuilder to drive EDC setup
A COSA Workshop
CDISC Interchange 2024
Use OpenStudyBuilder to drive EDC setup - a COSA Workshop
23 April 2024 9:00-16:00, Berlin, Germany
Problem Statement
Data Exchange Formats
- CDASH
- ODM.XML
- USDM
- Biomedical Concepts
Implementation
- Native formats
- Limited interface capabilities
- Limited selection of standards
- Custom extensions
Workshop Focus
- Challenges & Opportunities
- ODM.XML integrations
- API based integrations
- Knowledge exchange
- OpenStudyBuilder functionality
- Integration status, challenges and opportunities from EDC vendors
- Discussion
- Integration strengths, weaknesses, opportunities & threats
- Options and next steps
Workshop Agenda
- Information Exchange
- Introduction
- OpenStudyBuilder status with CRF & SoA for EDC & plans
- EvidentIQ ODM.xml integration (Marvin EDC)
- Veeva EDC integration via SDS files and future API integration
- Oracle ClinicalOne API integration & EvidentIQ ePRO API integration
- The potential future of API standards
- Breakouts
- Discuss strengths, weaknesses, opportunities & threats
- Options and next steps
- Share and discuss in plenum
CRF for EDC Status & Questions
# eCRF API endpoints
- ODM Study Events
- ODM Forms
- ODM Item Groups
- ODM Item
- ODM Conditions
- ODM Methods
- ODM Formal Expressions
- ODM Descriptions
- ODM Aliases
- ODM Vendor Namespaces
- ODM Vendor Attributes
- ODM Vendor Elements
- ODM Metadata Import/Export
CRF Specification in the Library
- **Study Events**
- **Forms**
- **ItemGroups**
- **Items**
- **eCRF specs**
- **Vendor Extensions**
- **Alias**
- **eCRF views**
Form def. as ODM (Vendor Extensions + Alias)
ItemGroup def. as ODM (Vendor Extensions + Alias)
Item def. as ODM (Vendor Extensions + Alias) 1/2
Vendor Extensions
Concept: CRFs
Templates used to define multiple CRF versions
Annotated CRF following MSG 2.0 standard
ODM.xml with vendor extensions (or CSV)
PDF format
Vendor Extension in ODM
Example (Vital Signs): a form-level instruction ("Please complete this Vital Signs form before starting the treatment."), an item-group-level instruction ("Please complete the Vital Signs item group at each expected time point."), and an item "Pulse" with unit beats/min.
Odm.xml API endpoint
Level of Metadata in the ODM (uid):
- StudyEvent
- Form
- ItemGroup
Target Type:
- StudyEvent
- Form
- ItemGroup
Status of the metadata
PDF or CSV
Stylesheet ref.
**API Endpoints to work with the SoAs...**
<table>
<thead>
<tr>
<th>Method</th>
<th>Endpoint</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>GET</td>
<td>/studies</td>
<td>Returns all studies in their latest/oldest version.</td>
</tr>
<tr>
<td>POST</td>
<td>/studies</td>
<td>Creates a new Study Definition.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/headers</td>
<td>Returns possible values from the database for a given header.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/fields-audit-trail</td>
<td>Returns the audit trail for the fields of a specific study definition identified by ‘uid’.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/audit-trail</td>
<td>Returns the audit trail for the subparts of a specific study definition identified by 'uid'.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/protocol-title</td>
<td>Retrieve all information related to Protocol Title.</td>
</tr>
<tr>
<td>PATCH</td>
<td>/studies/{uid}/copy-component</td>
<td>Copy study form from another study.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/time-units</td>
<td>Gets a study preferred time unit.</td>
</tr>
<tr>
<td>PATCH</td>
<td>/studies/{uid}/time-units</td>
<td>Edits a study preferred time unit.</td>
</tr>
<tr>
<td>PATCH</td>
<td>/studies/{uid}/order</td>
<td>Reorder Study Subparts within a Study Parent Part.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/design.svg</td>
<td>Builds and returns a Study Design visualization image in SVG format.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/flowchart/coordinates</td>
<td>Returns uid to [x,y,coordinates] coordinates mapping of items included in SoA Protocol Flowchart table.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/flowchart</td>
<td>Protocol, Detailed or Operational SoA table with footnotes as JSON.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/flowchart.html</td>
<td>Builds and returns an HTML document with Protocol, Detailed or Operational SoA table with footnotes.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/flowchart.docx</td>
<td>Builds and returns a DOCX document with Protocol, Detailed or Operational SoA table with footnotes.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/detailed-soa-history</td>
<td>Returns the history of changes performed to a specific detailed SoA.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/detailed-soa-exports</td>
<td>Exports the Detailed SoA content.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/operational-soa-exports</td>
<td>Exports the Operational SoA content.</td>
</tr>
<tr>
<td>GET</td>
<td>/studies/{uid}/protocol-soa-exports</td>
<td>Exports the Protocol SoA content.</td>
</tr>
</tbody>
</table>
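As an illustration of API-based integration, the sketch below calls two of the endpoints listed above with the Python requests library. The base URL, the bearer-token authentication, and the response field names are assumptions made for the example, not part of the documented API.

```python
import requests

BASE_URL = "https://studybuilder.example.org/api"  # assumed deployment URL
HEADERS = {"Authorization": "Bearer <token>"}       # auth scheme assumed

# GET /studies - returns all studies in their latest version
studies = requests.get(f"{BASE_URL}/studies", headers=HEADERS).json()

# Pick one study uid (field names "items"/"uid" are assumed here)
uid = studies["items"][0]["uid"]

# GET /studies/{uid}/flowchart.docx - SoA table with footnotes as DOCX
resp = requests.get(f"{BASE_URL}/studies/{uid}/flowchart.docx", headers=HEADERS)
with open("protocol_soa.docx", "wb") as f:
    f.write(resp.content)
```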
SoA and Biomedical Concepts...
Schedule of Activities (SoA) at multiple levels
**Protocol SoA**
- For the high level SoA in protocol section 1.2
- Main purpose is for the investigator and site staff to get an overview of the operational schedule
**Detailed SoA**
- Specifying the semantic data observations to be collected in the study – but not specific to representation in ADaM, SDTM or data collection
- Will be part of protocol section 8 and appendixes or other supplementary documents
**Operational SoA**
- The data specification to support data collection specification
- Correspond to our existing legacy BCs (Topic Codes)
- Will also relate to specific ADaM PARAM/PARAMCD
**Data Capture / Collection Specification**
- How data is to be collected in the study and when
- What is pre-set, what is collected and how
Detailed SoA
The detailed SoA describes the scheduling of the specific Activities and their grouping for the study.
Each level in the Activity hierarchy can be selected for display in the “Protocol SoA”.
Protocol and Operational SoA
### Study Activities (CDISC DEV-0)
Screenshots (table layout not fully recoverable): the Study Activities view with columns for Detailed SoA, SAP overrides, Protocol SoA flags and Activity Instructions, and the Study Data Specifications view (CDISC DEV-0) showing the Operational SoA per Study Activity.
### Screen and Treatment
Screenshot: Detailed SoA grid for visits V1 to V10 across the Screening, Treatment and Follow-up epochs; the activity groups shown are listed below.
#### End of Study
- General
- Physical Examination - early phase
- Self-Measured Plasma Glucose
- Body Measurements
- Body Measurements
- Eligibility Criteria
- Eligibility Criteria
- Laboratory Assessments
- Glucose Metabolism
- Lipids
- Biochemistry
- Haematology
- Ac Requiring Additional Data
- Laboratory Assessment
- Adverse Event
- Vital Signs
- Vital Signs
- Medical History/Concomitant Illness
- Medical History/Concomitant Illness
- Informed Consent and Demography
- Informed Consent and Demography
### Study Activities (CDISC DEV-0)
**Protocol SoA**
**SoA layout:** Operational SoA
**Preferred time unit:** Week
Screenshot (table layout not fully recoverable): the Operational SoA lists, per visit (visit short name, study week, visit window in days), the topic codes and ADaM parameter codes of the selected activity instances under SUBJECT RELATED INFORMATION and the activity groups. Recognizable entries include:
- Randomisation: Randomisation Date (topic code RANDOMISATION_DATE, ADaM parameter code RANDDOT)
- End of Study
- General (physical examination): Cardiovascular System, Abdomen, Central and Peripheral Nervous System, Gastrointestinal System incl. Mouth, General Appearance, Musculoskeletal System, Respiratory System
- Body Measurements: Body Weight, Height
- Eligibility Criteria: Eligibility Criteria Met, Subject Eligible to Continue the Trial
**Protocol SoA** displaying the selected activity level of detail as a preview.
Produce a copy of the SoA compatible with Word.
M11 – Section 8 = Detailed SoA
1. Protocol summary
2. Introduction
3. Trial objectives, endpoints and estimands
4. Trial design
5. Trial population
6. Trial intervention and concomitant therapy
7. Discontinuation of trial intervention and participant withdrawal from trial
8. Trial assessments and procedures
9. Statistical considerations
10. General considerations: regulatory, ethical, and trial oversight
11. General considerations: risk management and quality assurance
12. Appendix: adverse events and serious adverse events – definitions, severity, and causality
13. Appendix: definitions and supporting operational details
14. Appendix: glossary of terms
15. Appendix: references
Selection process of Activities for SoA
For Protocol Outline / Protocol
- Select Activities in relevant grouping
- When selecting an Activity within a specific grouping, then this will drive ActivityInstance – this should be visible for Protocol Writers (like a COL)
- Some ActivityInstances can be marked as default for an Activity, and will then be pre-selected
- Some ActivityInstances can be marked as mandatory – and cannot be un-selected
- Select what to display or hide in high-level Protocol SoA
For Operational Data Specification
- Confirm or Select Activity Instances for each selected Activity
- If the correct ActivityInstance will change Grouping – this will require a change to the Protocol SoA – this will then
For Data Collection Specification
- The data collection specification
- Lab specs
- CRF
- Other eSources
- What is pre-set
From Activity to Activity Instance
Activity to Activity Instance to Activity Item – As Biomedical Concept (COSMOS project from CDISC)
Digital Data Flow Adaptor (TransCelerate DDF)
Our vision
Status of the OpenStudyBuilder
- Already working:
- Protocol SoA
- Detailed SoA
- eCRF in the Library
- Vendor Extensions
- Alias
- Models integration (like SDTM/SDTMIG with version control in the Library)
- Work in progress:
- Operational SoA
- Connection of Activity Instances and Activity Items to eCRFs, SDTM domains and variables, and ADaM domains and variables, with shared CT management and units
- Integration of external data like Labs
- What is planned:
- eCRF at the study level (with integration to the Operational SoA)
- Production of the define.xml (pre version) based on the Protocol SoA and Detailed SoA
Questions to discuss
- Extensions / configurations required for vendors
- Additional attributes, e.g. to link to systems & versions
- ODM.xml additional information
- API endpoints, additional requirements
- General aspects
- API versioning
- Continuous development challenges, up versioning
- Adoptions & implications according license
- Standards
- Additional standard requirements, recommendations, wishes
Additional Information
CDISC Interchange 2024
Use OpenStudyBuilder to drive EDC setup - a COSA Workshop
23 April 2024 9:00-16:00, Berlin, Germany
CDISC Interchange 2024
Use OpenStudyBuilder as MDR Meetup
23 April 2024 17:00-18:00, Berlin, Germany
➢ Reach out to [email protected]
Meet us at the Interchange
24-25 April 2024
➢ Reach us at the COSA booth for demonstration and exchange
From OpenStudyBuilder to the Digital Data Flow - USDM Format
15 April 2024 – 14:40-15:00, Presentation
OpenStudyBuilder
The OpenStudyBuilder is an open-source project for clinical study specifications. This tool is a new approach for working with studies that once fully implemented will drive end-to-end consistency and more efficient processes - all the way from protocol development and CRF design - to creation of datasets, analysis, reporting, submission to health authorities and public disclosure of study information.
https://openstudybuilder.com/
Links
• Project Homepage: https://openstudybuilder.com/
• Newsletter: https://www.linkedin.com/newsletters/openstudybuilder-6990328054849916928/
• YouTube Demonstration (30'): https://youtu.be/dL5CY0BwfEs
• GitLab (Solution, Description): https://gitlab.com/Novo-Nordisk/nn-public/openstudybuilder
• Slack: https://join.slack.com/t/openstudybuilder/shared_invite/zt-19mtauzc-jvrhtmy7hGstgyiIvB1Wsw
• E-Mail: [email protected]
Sandbox:
• Mail [email protected] – Subject “Request Sandbox access”
• Note: when you add/modify/delete content, your mail might be exposed in the version history
Thanks!
Questions?
Concurrency Control
Serializable Schedules and Locking Protocols
Serializability Revisited
Goals of DBMS:
1. Ensure consistency of database states.
2. Process transactions efficiently.
Serial schedules ensure database consistency, but serial execution of transactions is inefficient.
Serializable schedules are defined in order to bridge the gap between consistency and efficiency.
- As their outcome must be equivalent to the result of some serial schedule, serializable schedules are guaranteed to preserve consistency of the database.
- As they are not required to be serial, serializable schedules allow for interleaving of actions from different transactions.
Question 1: How much concurrency is allowed by serializable schedules?
Given transaction $T$, we denote as $\text{dom}(T)$ the set of all database objects that $T$ accesses (both in read and write modes).
Proposition 1 Let $T_1$ and $T_2$ be two transactions such that $\text{dom}(T_1) \cap \text{dom}(T_2) = \emptyset$. Then
1. Both serial schedules $T_1; T_2$ and $T_2; T_1$ result in the same database state.
2. Any valid schedule over $T_1$ and $T_2$ is serializable.
Proof. (sketch)
1. To prove that both serial schedules result in the same database state, we examine the resulting value of each database object in $D = \text{dom}(T_1) \cup \text{dom}(T_2)$. If $A \in D$, then by our assumption, either $A \in \text{dom}(T_1)$ and $A \notin \text{dom}(T_2)$ or vice versa. Therefore, its value gets modified by only one transaction, and does not change throughout the execution of the second transaction.
2. We notice that any schedule over the set $\{T_1, T_2\}$ has no RW, WR or WW conflicts. Then the statement follows from the following, more general lemma.
Lemma 1 Let $S$ be a schedule without conflicts over two committed transactions $T_1$ and $T_2$. Then $S$ is a serializable schedule.
Proof. (sketch) We need to show that the final state of the database after $S$ is executed is equivalent to the final state of the database after either $T_1; T_2$ or $T_2; T_1$.
We consider all objects in $\text{dom}(T_1) \cup \text{dom}(T_2)$. Two possibilities arise:
1. $\text{dom}(T_1) \cap \text{dom}(T_2) \neq \emptyset$. In this case, one or more objects $T_1$ and $T_2$ access are the same. Given an object $A \in \text{dom}(T_1) \cap \text{dom}(T_2)$, there are only two possibilities: (i) either both $T_1$ and $T_2$ have read-only access to $A$ or (ii) one of the transactions accesses $A$ after the other transaction has committed.
If there is at least one object $A \in \text{dom}(T_1) \cap \text{dom}(T_2)$ for which (ii) holds, then we claim that this object uniquely defines the serial schedule to which $S$ is equivalent. Indeed, assume that $A$ is accessed first by $T_1$ and then – by $T_2$ after $T_1$ commits.
We claim that $S$ in this case is equivalent to $T_1; T_2$. Clearly, the value of $A$ after $S$ is equivalent to the value of $A$ after $T_1; T_2$. The values of all objects $B \in \text{dom}(T_1) \cup \text{dom}(T_2) - \text{dom}(T_1) \cap \text{dom}(T_2)$ after $S$ will match those after $T_1; T_2$ (see Proposition 1.1).
Let $A' \in \text{dom}(T_1) \cap \text{dom}(T_2)$. If both $T_1$ and $T_2$ only read $A'$ then its value after $S$ will be the same as its value after $T_1; T_2$. We need to show that it is impossible for transaction $T_2$ to modify $A'$'s value before $T_1$. This is indeed so. We know that $T_1$ accesses $A'$'s value.
If $T_1$ writes $A'$'s value, $T_2$ cannot access $A'$ until after $T_1$ commits (as $S$ is conflict-free), therefore the final value of $A'$ will be the same as in the serial schedule $T_1; T_2$.
If $T_1$ reads $A'$'s value, $T_2$ cannot write it until $T_1$ commits (otherwise, a conflict would be registered in $S$). This, however, means that again the final value of $A'$ is determined by $T_2$, and therefore is equal to that of the $T_1; T_2$ serial schedule, which proves the first part of the lemma.
2. $\text{dom}(T_1) \cap \text{dom}(T_2) = \emptyset$. This is really Proposition 1.2. From Proposition 1.1 we know that both $T_1; T_2$ and $T_2; T_1$ yield the same database state. Simple analysis of the values of each object in $\text{dom}(T_1) \cup \text{dom}(T_2)$ after $S$ ends shows that these values will be the same as those in any serial schedule.
From the two result above, we notice that
- **Serializable schedules** allow for arbitrary interleaving of the transactions that access **completely different sets of database objects**.
- **Serializable schedules** do allow certain degree of interleaving when transactions access the same object.
- **All conflict-free schedules** are serializable.
One would want to know if the reverse of the latter is true:
**Question 2: Are all serializable schedules conflict-free?**
The answer to this question is **NO**.
Below is an example of a **serializable schedule** that is not conflict-free (contains an unrepeatable read).
<table>
<thead>
<tr>
<th>( T1 )</th>
<th>( T2 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( R(A) )</td>
<td>( W(A) )</td>
</tr>
<tr>
<td>commit</td>
<td>commit</td>
</tr>
</tbody>
</table>
The schedule above is equivalent to serial schedule \( T1; T2 \). There is a **RW** conflict in the schedule, but it never “materializes” to affect the outcome.
**Locking**
**Goal of DBMS (revised):**
Ensure **serializability** of all schedules.
To achieve this goal, DBMS may want to ensure the following properties of its schedules:
- If some transaction \( T \) has **read** some object \( A \), **no** transaction \( T' \) can **write** \( A \) until \( T \) commits or **aborts**.
- If some transaction \( T \) has **written** some object \( A \), **no** transaction \( T' \) can **access** \( A \) until \( T \) **aborts** or **commits**.
Object Locking has been proposed as the way to assure these properties of the schedules.
**Lock:** permission by a DBMS to a transaction to access the content of a particular database object.
**Shared Lock:** permission to read the value of the object. More than one transaction can hold a shared lock on the same object at the same time.
**Exclusive Lock:** permission to write the value of the object. At most one transaction can hold an exclusive lock on an object at a time, and no shared locks are allowed by other transactions on an object for which an exclusive lock exists.
New Rules For Transaction Processing
- Before accessing a database object, any transaction must request an appropriate lock on it.
- If the lock is granted by the DBMS, the transaction may proceed.
- If the lock cannot be granted immediately, the lock request is queued and the transaction is suspended until the lock can be granted.
- Transaction must release all the locks it holds before it terminates (commits or aborts).
**Notation:** $S(A)$ – request for a shared lock on object $A$. $X(A)$ – request for an exclusive lock on object $A$. $U(A)$ – request to release current lock on the object $A$.
**Note:** We assume that abort and commit commands result in automatic release of all locks held by a transaction at that time, therefore, we do not specify all unlock requests explicitly, unless necessary.
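As an illustration of these rules (not code from the course), here is a minimal lock-manager sketch in Python: shared and exclusive locks are granted when they are compatible with the current holders, incompatible requests are queued, and releasing a lock wakes up waiting requests in FIFO order.

```python
from collections import defaultdict, deque

class LockManager:
    """Minimal sketch of the rules above: grant S/X locks when compatible,
    otherwise queue the request; release wakes up waiting requests."""

    def __init__(self):
        self.holders = defaultdict(dict)   # obj -> {txn: 'S' or 'X'}
        self.waiting = defaultdict(deque)  # obj -> queue of (txn, mode)

    def _compatible(self, obj, txn, mode):
        others = {t: m for t, m in self.holders[obj].items() if t != txn}
        if mode == 'S':
            return all(m == 'S' for m in others.values())
        return not others  # X is compatible only with no other holders

    def request(self, txn, obj, mode):
        if self._compatible(obj, txn, mode):
            self.holders[obj][txn] = mode
            return True            # lock granted, transaction may proceed
        self.waiting[obj].append((txn, mode))
        return False               # transaction is suspended

    def release(self, txn, obj):
        self.holders[obj].pop(txn, None)
        # grant as many queued requests as compatibility allows (FIFO)
        while self.waiting[obj]:
            t, m = self.waiting[obj][0]
            if not self._compatible(obj, t, m):
                break
            self.waiting[obj].popleft()
            self.holders[obj][t] = m
```

For example, after `lm.request('T1', 'A', 'S')` a call `lm.request('T2', 'A', 'X')` returns False and T2 waits until T1 issues `lm.release('T1', 'A')`.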
Scheduling with locking is illustrated in the following example:
(The transcription of the Figure 1 schedule is garbled. In the schedule shown, $T_1$ acquires and later releases a lock on $A$ before acquiring a lock on $B$ and writing $B$, while $T_2$ locks and writes $A$ in between; both transactions then commit.)
Figure 1: Non-serializable schedule with locking.
- Locking by itself does not guarantee serializability of schedules. This is illustrated by the schedule in Figure 1: in this schedule, the value of object A in the final state is written by T2 and the value of object B by T1, therefore it is not a serializable schedule.
Locking Mechanisms: 2-Phase Locking
Locking Mechanism: set of rules for locking and unlocking objects.
We want the locking mechanism to produce only serializable schedules.
2-Phase Locking (2PL)
2-Phase Locking (2PL) works according to the following two rules:
1. If a transaction T wants to read/write an object A, it must first request a shared/exclusive lock on A.
2. Once a transaction has released a lock, it cannot request any additional locks.
Informally, with 2-Phase Locking the life of any transaction consists of two periods (phases): (i) the period during which the transaction acquires new locks (growing) and (ii) the period during which the transaction releases its locks (shrinking).
It is easy to see that the schedule from Figure 1 does not satisfy the conditions of 2PL, as transaction T1 acquires a lock on B after it has released its lock on A.
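A compact way to state the 2PL condition operationally is to scan a transaction's lock and unlock requests and reject any lock request that follows a release. The Python sketch below is illustrative (not from the notes); the example sequence mirrors the violation just described, where T1 locks B after releasing A.

```python
def follows_2pl(actions):
    """Check whether every transaction's lock/unlock sequence obeys 2PL:
    no lock request after that transaction's first release.
    `actions` is a list of (txn, op, obj) with op in {'S', 'X', 'U'}."""
    released = set()
    for txn, op, obj in actions:
        if op == 'U':
            released.add(txn)
        elif op in ('S', 'X') and txn in released:
            return False  # growing phase re-entered after shrinking began
    return True

# Illustrative sequence in the spirit of Figure 1: T1 releases A, then locks B.
schedule = [('T1', 'S', 'A'), ('T1', 'U', 'A'), ('T2', 'X', 'A'),
            ('T2', 'U', 'A'), ('T1', 'X', 'B')]
print(follows_2pl(schedule))  # False
```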
Figure 2: Schedules that follow 2PL (left) and Strict 2PL (right) locking mechanisms.
**Strict 2-Phase Locking (Strict 2PL)**
**Strict 2-Phase Locking** is a modification of **2-Phase Locking** which disallows a nontrivial shrinking phase in any transaction. It can be described as follows:
1. If a transaction $T$ wants to read/write an object $A$, it must first request a shared/exclusive lock on $A$.
2. Once a transaction has acquired a lock, it cannot release it until it commits or aborts.
Figure 2 contains examples that illustrate the 2PL and Strict 2PL locking mechanisms. The schedule on the left satisfies the 2-Phase Locking requirements but will not be accepted according to Strict 2-Phase Locking. The schedule on the right satisfies both the 2PL and Strict 2PL mechanisms.
**Properties of 2 Phase and Strict 2 Phase Locking**
Questions
- Want mechanisms for producing serializable schedules.
- Locking and locking mechanisms required.
- 2 Phase Locking and Strict 2 Phase Locking.
**Question 1** Are the schedules produced by 2PL / Strict 2PL serializable?
**Question 2** Are all serializable schedules produced by 2PL (Strict 2PL)?
Question 3 What is the difference between 2PL and Strict 2PL?
Question 4 How do we characterize the schedules produced by 2PL / Strict 2PL?
Conflict Serializability
Let us consider for now only schedules consisting of committed transactions.
Dependency (Serializability) Graph.
Let $S$ be a schedule over the set of transactions $T = \{T_1, \ldots, T_N\}$. A dependency graph of $S$, denoted $G_S$, is defined as follows:
- The set of nodes of $G_S$ is $T$.
- $G_S$ has an edge from $T_i$ to $T_j$ labeled with a database object $A$ if
1. both $T_i$ and $T_j$ access some object $A$;
2. at least one of these accesses is a write;
3. no other transaction accesses $A$ between $T_i$'s and $T_j$'s accesses.
Conflict Equivalence.
Two schedules $S_1$ and $S_2$ over the set of transactions $T$ are conflict-equivalent iff $G_{S_1} = G_{S_2}$.
Conflict Serializability.
A schedule $S$ is conflict-serializable iff it is conflict-equivalent to some serial schedule.
Figure 3 shows three different schedules for the same set of transactions $T = \{T_1, T_2\}$. Schedule $S_1$ is not serializable as at the end $A$ will have value set by $T_2$ and $B$ will have value set by $T_1$. Schedule $S_3$ is serial. To see that schedule $S_2$ is conflict-serializable, we construct the dependency graphs $G_{S_1}$, $G_{S_2}$ and $G_{S_3}$ (see Figure 4).
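The dependency-graph construction and the acyclicity criterion stated below as Lemma 2 are easy to mechanize. The following Python sketch is illustrative only; it encodes a schedule as (transaction, operation, object) triples and follows the edge rule given above.

```python
from collections import defaultdict

def dependency_graph(schedule):
    """Dependency graph G_S of a schedule given as (txn, op, obj) triples,
    op in {'R', 'W'}: an edge Ti -> Tj labeled with object A exists when
    both access A, at least one access is a write, and no other
    transaction accesses A between those two accesses."""
    per_obj = defaultdict(list)
    for txn, op, obj in schedule:
        per_obj[obj].append((txn, op))
    edges = set()
    for obj, acc in per_obj.items():
        for i, (ti, oi) in enumerate(acc):
            for j in range(i + 1, len(acc)):
                tj, oj = acc[j]
                if ti == tj or 'W' not in (oi, oj):
                    continue
                others = {t for t, _ in acc[i + 1:j]} - {ti, tj}
                if not others:
                    edges.add((ti, tj, obj))
    return edges

def is_conflict_serializable(schedule):
    """A schedule is conflict-serializable iff G_S is acyclic (Lemma 2)."""
    succ = defaultdict(set)
    for u, v, _ in dependency_graph(schedule):
        succ[u].add(v)
    visited, on_stack = set(), set()

    def cyclic(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in succ.get(node, ()):
            if nxt in on_stack or (nxt not in visited and cyclic(nxt)):
                return True
        on_stack.discard(node)
        return False

    return not any(cyclic(n) for n in list(succ) if n not in visited)

# R1(A) W2(A) W2(B) W1(B): edges T1->T2 (on A) and T2->T1 (on B) form a cycle.
s = [('T1', 'R', 'A'), ('T2', 'W', 'A'), ('T2', 'W', 'B'), ('T1', 'W', 'B')]
print(is_conflict_serializable(s))  # False
```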
Relationships Between Schedule Types
Theorem 1 A conflict serializable schedule over a static database is serializable.
The requirement that the database is static, i.e., no new objects created and no existing objects deleted while the schedule is executed is important. We will discuss this requirement and predicate locks later in the course.
The inverse of Theorem 1 is not true, as manifested by the counterexample in Figure 5. The schedule depicted there is equivalent to the serial schedule $T_1; T_2; T_3$ (because $T_3$ blindly overwrites the actions of $T_1$ and $T_2$), but it is not conflict-equivalent to any serial schedule (as can be verified by building the appropriate dependency graphs).
(The transcription of the schedules in Figure 3 is garbled; it shows three schedules $S_1$, $S_2$ and $S_3$ over transactions $T_1$ and $T_2$ that lock, read and write objects $A$, $B$ and $C$.)
Figure 3: Non-serializable, conflict-serializable and serial schedules.
Figure 4: Dependency Graphs for schedules S1, S2 and S3.
Figure 5: A serializable schedule that is not conflict-serializable.
Note: Conflict-serializability is a syntactic property of a schedule while serializability is a semantic property. Conflict-serializability is a stronger property.
Lemma 2
1. The dependency graph for a serial schedule is acyclic.
2. A schedule is conflict-serializable iff its dependency graph is acyclic.
Theorem 2 Let $S$ be a schedule generated according to a 2 Phase Locking Mechanism over a set of transactions $T = \{T_1, \ldots, T_N\}$. Then $G_S$ is acyclic.
From Lemma 2 and Theorem 2 we infer
Theorem 3 Any schedule produced by 2 Phase Locking mechanism is conflict-serializable (and hence, serializable).
2 Phase Locking vs. Strict 2 Phase Locking
Theorem 4 Every schedule conforming to Strict 2 Phase Locking also conforms to 2 Phase Locking.
Question 5: What schedules are excluded by Strict 2PL?
Strict Schedules.
A schedule $S$ over the set of transactions $T = \{T_1, \ldots, T_N\}$ is called strict iff any value written by any transaction $T_i$ to any database object $A$ does not get accessed by other transactions until $T_i$ terminates (commits or aborts).
Figure 6: 2 Phase Locking is insufficient to ensure strictness of the schedules.
**Theorem 5** Any schedule produced by Strict 2 Phase Locking Mechanism is strict.
**Question 6:** Why is strictness of schedules important?
In a non-strict schedule, if a transaction aborts, it may cause other transactions to abort.
Figure 6 illustrates (i) the problems with non-strict schedules in the presence of aborted transactions and (ii) that 2 Phase Locking may produce non-strict schedules.
When transaction $T_2$ commits, the changes it makes to data become permanent. However, when $T_1$ aborts, the changes it made to object $A$ must be rolled back. But the changes made by $T_2$ to $A$ and (possibly) $C$ depend on the value of $A$ as written by $T_1$. Therefore, after $T_1$ aborts, the results of $T_2$ become stale and have to be undone.
**Bottomline:** Strict 2 Phase Locking ensures that no transaction has access to data that may become stale, by enforcing strictness of the schedules generated by it. Because of this, despite the extra limitations Strict 2PL puts on interleaving of transactions, it is preferable to simple 2 Phase Locking.
Software Architectural Alternatives for User Role-Based Security Policies
S. A. Demurjian, Sr., T. C. Ting, and J. A. Reisner
Computer Science & Engrg. Dept., The University of Connecticut
191 Auditorium Road, Storrs, Connecticut 06269-3155, USA,
860.486.3719, 860.486.4817, {steve,ting,reisner}@eng2.uconn.edu
Abstract
Security concerned users and organizations must be provided with the means to protect and control access to object-oriented software, especially with an exploding interest in designing/developing object-oriented software in Java, C++, and Ada95. Our user-role based security (URBS) approach has emphasized: a customizable public interface that appears differently at different times for specific users; security policy specification via a role hierarchy to organize and assign privileges based on responsibilities; and, extensible/reusable URBS enforcement mechanisms. This paper expands our previous work in URBS for an object-oriented framework by exploring software architectural alternatives for realizing enforcement, with the support of assurance and consistency as a key concern for security policies that evolve and change.
Keywords
Software architectures, object-oriented, enforcement mechanisms
1 INTRODUCTION
How will assurance and consistency be attained during the definition and usage of an application’s user-role based security policy, particularly in an object-oriented context that stresses change and evolution? This question is interesting, particularly with the explosive growth of object-oriented software development. While C++ has been a strong player since the late 1980s, Ada95 and Java offer new opportunities that are targeted for diverse and significant market segments. Security has and will continue to be a major concern, especially in Java, where security must be present to control the effects of platform-independent software. Health care systems require both high levels of consistency and assurance, while simultaneously needing instant access to data in life-critical situations. In CAD applications, the most up-to-date specifications on mechanical parts must be available in a shared manner to
promote cooperation and facilitate productivity, making consistency and assurance important from a business perspective.
Over the past few years, we have concentrated on discretionary access control, by defining a user-role based security (URBS) model that can be utilized in the design and development of object-oriented applications. The current public interface provided by most object-oriented languages is the union of all privileges (methods) needed by all users of each class. This allows methods intended for only specific users to be available to all users. For example, in a health care application (HCA), a method placed in the public interface to allow a Physician (via a GUI tool) to prescribe medication on a patient can't be explicitly hidden from a Nurse using the same GUI tool. Rather, the software engineer is responsible for insuring that such access does not occur, since the object-oriented programming language cannot inherently enforce the required security access. We have proposed a user-role definition hierarchy (URDH) to organize responsibilities and to establish privileges. Privileges can be assigned (can invoke a set of methods) or prohibited (cannot invoke a set of methods) to roles, thereby customizing the public interfaces of classes on a role-by-role basis. Our recent efforts have proposed extensible and reusable URBS enforcement mechanisms, with the goal to minimize the amount of knowledge a software engineer must have on URBS. Work on the object-oriented design model [5] and URBS [1, 4] has been published.
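To make the role-customized public interface concrete, the following minimal Java sketch (hypothetical role and method names, not the authors' implementation) shows the basic idea: each user role carries the set of methods assigned to it, and an invocation such as prescribing medication is honored only for roles that hold that privilege.

```java
import java.util.Set;

// Minimal sketch: a user role carries the set of method names assigned to it,
// and a guard consults that set before an invocation is allowed to proceed.
public class UrbsSketch {

    // A user role with its assigned (positive) privileges, keyed by method name.
    record UserRole(String name, Set<String> assignedMethods) {
        boolean mayInvoke(String methodName) {
            return assignedMethods.contains(methodName);
        }
    }

    public static void main(String[] args) {
        UserRole physician = new UserRole("Physician",
                Set.of("readChart", "prescribeMedication"));
        UserRole nurse = new UserRole("Nurse",
                Set.of("readChart", "recordVitals"));

        // The same GUI tool issues the same call, but the customized interface
        // differs by role: the physician may prescribe, the nurse may not.
        System.out.println(physician.mayInvoke("prescribeMedication")); // true
        System.out.println(nurse.mayInvoke("prescribeMedication"));     // false
    }
}
```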
This paper expands our previous work to include assurance and consistency, particularly since we are committed to a continued exploration of automatically generated URBS enforcement mechanisms. Since class libraries may not offer a secure enough venue to insure high consistency and assurance, we have turned to the field of software architectures to investigate potential solutions to augment our previous enforcement mechanisms work [2, 3]. Software architectures [6] expand traditional software engineering by looking at how the major components of a system can mesh and interact. This is especially relevant for object-oriented software, where a class library for a problem is initially developed, with software engineers designing and building tools against that class library to implement the overall capabilities of an application. In such a model, the URBS enforcement mechanism must interact with both the class library and the tools for an application, to insure that users utilizing tools only access those portions of the application on which they have been granted access. In our approach, this translates to the users only being able to invoke methods that have been authorized to their respective roles.
The remainder of this paper contains three sections. In Section 2, we discuss the critical need of consistency for security, as we seek to guarantee a level of assurance to designers and users utilizing an URBS/object-oriented approach. In Section 3, we propose and explore software architectural variants that can offer varying degrees of assurance and consistency, and critique the variants by comparing and contrasting their capabilities from multiple perspectives. Section 4 concludes this paper.
Role-based security policies and enforcement mechanisms must have high consistency in order to support a high assurance, secure system. The consistency must be maintained at all levels within the policy, including individual roles, role hierarchies, and end-user authorizations, to insure that their creation, modification, and deletion will always maintain the required URBS policy. Consistency is the foundation upon which high integrity and assured secure systems must be built. A set of techniques/tools must be provided that allow URBS policies to be analyzed and assured at all times during design, development, and maintenance of object-oriented software.
In general, URBS policies are application dependent, and consequently, data security requirements vary widely from application to application. For example, sensitive health care data must be both protected from unauthorized use while simultaneously be almost instantaneously available in emergency and life critical situations. On the other hand, in some design environments such as CAD, the most up-to-date specifications on mechanical parts must be available in a shared manner to promote cooperation. In this case, the URBS policies may not protect sensitive personal information, but may protect information which is equally sensitive from a business perspective.
The ultimate responsibility for URBS policies is on the shoulders of the application’s management personnel and organization’s data security officer. In order to have these critical policy makers take full advantage of URBS, tools and techniques must be made available. Design techniques are critical to allow software and security engineers to accurately and precisely specify their applications’ functional and security requirements. To augment these techniques, a suite of tools is required, that can provide many different and diverse analytical capabilities. These tools should automatically alert these engineers when potential conflicts occur during the creation or modification of roles, role hierarchies, and end-user authorizations, thereby heading off possible inconsistencies. There must also be tools that provide on-demand analyses, allowing engineers to gauge their realized software and/or security requirements against their specifications. Once the URBS policy has stabilized, the tools should provide the means to capture and realize it via a URBS enforcement mechanism that is automatically generated. The overriding intent is to finish with an object-oriented system that embodies a strong confidence with respect to the URBS policy and its attainment.
The remainder of this section explores these and other issues from two perspectives. In Section 2.1, we examine the consistency issues that must be attainable as roles and dependencies among roles are created and modified. In Section 2.2, we investigate similar consistency issues as actual individuals (people) are authorized to play certain roles within an object-oriented application or system.
2.1 Consistency for User Roles
When a security engineer is creating and modifying user roles for an object-oriented application or system, the consistency of the definition is critical in order to insure that the URBS policy is maintained. This is a time-oriented issue: changes to the policy will be needed over time, especially in object-oriented settings, where evolution and extensibility are the norm. Regardless of the changes that are made, there must be assurance that the privileges of each user role are adequate to satisfy the functions of the user role. Moreover, the privileges must not exceed the required capabilities of the user role, to insure that misuse and corruption do not occur. In addition, since user roles are often interdependent upon one another (e.g., our approach uses a hierarchy), it may be necessary to examine their interactions to insure that privileges aren’t being passed inadvertently from role to role, yielding a potentially inconsistent state.
There are many different scenarios of evolution that must be handled. A security engineer may create new roles for a group of potential users or may create specific roles that are targeted for a particular end-user for a special assignment under a special circumstance. Each newly created role must be **internally consistent** so that no conflicts occur within the role itself. This is also true when a role is modified, which we term **intra-role consistency**. For the object-oriented case, when privileges are assigned to each role, this assignment implicitly grants object-access privileges to the role holder (end-user). Such an assignment process utilizes the **least-privilege principle** which grants only necessary access privileges but no more. Only those privileges that are relevant to the user role are permitted. The policy is intentionally very conservative and restrictive, requiring that the URBS policy be validated by either the software engineer, security engineer, or both. In some organizations, there are dedicated security officers who possess the ultimate responsibility with respect to security requirements/policies for all applications.
To complement the least-privilege principle, user roles often must satisfy **mutual exclusion conditions**. Here, there must be a careful balance between permitting access to certain objects while simultaneously prohibiting access to other, special objects. For example, in HCA, an individual assigned the role of Pharmacist can read the prescription of a patient, update the number of refills after processing the prescription, but is explicitly prohibited from modifying the dosage or drug of the prescription. Thus, access and modification to some information is balanced against exclusion from other information. This strong mutual exclusion situation is clearly observed by the medical profession and is mandated by law. The URBS policy must ensure that security requirements such as these are not violated. In our approach, these mutual exclusions are supported in the URDH by allowing the security engineer to define prohibited methods, which provides the means to insure that the prohibited privileges of a role do not contradict with the assigned privileges.
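A minimal sketch of an intra-role consistency check follows, assuming a role is captured as an assigned (positive) set and a prohibited (negative) set of method names; any overlap between the two sets signals an internally inconsistent definition. The method names are illustrative, drawn from the Pharmacist example above.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an intra-role consistency check: a role is internally
// inconsistent if its assigned and prohibited method sets intersect.
public class IntraRoleCheck {

    static Set<String> conflicts(Set<String> assigned, Set<String> prohibited) {
        Set<String> overlap = new HashSet<>(assigned);
        overlap.retainAll(prohibited);          // methods both granted and denied
        return overlap;
    }

    public static void main(String[] args) {
        // Pharmacist example: may read a prescription and update refills,
        // but must never modify dosage or drug.
        Set<String> assigned = Set.of("readPrescription", "updateRefills");
        Set<String> prohibited = Set.of("modifyDosage", "modifyDrug");
        System.out.println(conflicts(assigned, prohibited));   // [] -> consistent

        // An erroneous definition that grants and prohibits the same method.
        Set<String> badAssigned = Set.of("readPrescription", "modifyDosage");
        System.out.println(conflicts(badAssigned, prohibited)); // [modifyDosage]
    }
}
```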
When one extrapolates to consider the interdependence of user roles, such as
within our URDH, the internal consistency as captured by least privilege and mutual exclusion, must be expanded to inter-role consistency. In any approach with interdependence among user roles, there is the potential for user roles to acquire privileges (both positive and negative privileges) from other roles. In addition, to provide versatile design tools to the security engineer, it should be possible to establish superior, inferior, and equivalence relationships among different user roles. These relationships must also be validated as privileges are defined, acquired, and change. From the perspective of the entire URDH, intra-hierarchy consistency must be attained.
To support a URBS definition process with least privilege and mutual exclusion, the security engineer must be provided with a set of techniques and tools. There must be tools for meaningful comprehension of user roles, including all positive privileges, negative privileges, and relationships to other roles, supporting intra-role consistency. Once any initial definition has occurred, there must be tools to support analyses for both internal and inter-role consistency. Automated analysis tools are necessary for an exhaustive search to follow all possible object access paths as required by all of the positive and negative privileges in the security definition. Conflicts discovered during the search will have to be resolved by the application's management personnel and security engineer. Feedback must be available to assist the human designer in arriving at a viable resolution to any conflicts or inconsistencies. Analyses are available in the ADAM environment for the application's content/context, and for its security requirements [4, 5]. Capabilities analysis allows one to review the permissions given to a chosen URDH node on an application's OTs, methods, and/or private data, supporting the intra-role or internal consistency of the URBS policy. Authorization analysis allows one to investigate which user roles have what kinds of access to an OT, a method, or a private data item, supporting inter-role and intra-hierarchy consistency.
2.2 Consistency in End-User Authorization
When considering consistency in end-user authorization, the assumptions of the policy must be clearly understood. For example, in any organization where end-users can be assigned multiple roles, there are two scenarios of permissible behavior against an application: 1. end-users can only play exactly one role at any given time; and, 2. end-users can play multiple roles concurrently at any given time. The first assumption does not cause significant problems, since for an end-user, only one role is active. As long as that role is intra-role and inter-role consistent, there is no problem. However, the first assumption alone does not provide the needed security, but instead raises a number of interesting issues that are addressed by the second assumption.
Namely, when an end-user may play multiple user roles simultaneously at any given time within an application and within the organization, a level of end-user consistency is introduced. In end-user consistency, the privileges of the multiple roles held by the end-user are aggregated, which may introduce conflicts between
positive and negative privileges that span multiple roles. Further, when a new privilege is assigned to an established user role, with internal and inter-role consistency assured, it may still impact the end-user consistency. Automated tools are needed for the user authorization model in a secure data system so that no URBS policy violations are possible for any end-user in the organization. Thus, the techniques/tools in Section 2.1 must be extended to consider end-user consistency, allowing the security engineer to focus on the conflicts of privileges for single end-users with multiple concurrent roles.
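The sketch below illustrates one way such an end-user consistency check might work, assuming each role is a pair of assigned/prohibited method sets: the privileges of all concurrently held roles are aggregated and any method granted by one role but prohibited by another is flagged. Role and method names are hypothetical.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of an end-user consistency check: aggregate the assigned and
// prohibited privileges of all concurrently held roles and report any method
// that is assigned by one role but prohibited by another.
public class EndUserCheck {

    record Role(String name, Set<String> assigned, Set<String> prohibited) {}

    static Set<String> endUserConflicts(List<Role> rolesHeld) {
        Set<String> allAssigned = new HashSet<>();
        Set<String> allProhibited = new HashSet<>();
        for (Role r : rolesHeld) {
            allAssigned.addAll(r.assigned());
            allProhibited.addAll(r.prohibited());
        }
        allAssigned.retainAll(allProhibited);   // granted by one role, denied by another
        return allAssigned;
    }

    public static void main(String[] args) {
        Role auditor = new Role("Auditor", Set.of("readBilling"), Set.of("updateBilling"));
        Role clerk   = new Role("BillingClerk", Set.of("updateBilling"), Set.of());
        // Holding both roles at once introduces a conflict on updateBilling.
        System.out.println(endUserConflicts(List.of(auditor, clerk))); // [updateBilling]
    }
}
```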
Once intra-role, inter-role, intra-hierarchy, and end-user consistency have been attained at a definitional level, there are two remaining requirements: (1) the defined URBS policy must be captured within the object-oriented application; and (2) once captured, at both compile time and runtime, the policy must be enforced. For both requirements, our previous work on URBS enforcement approaches [2, 3] is intended to support, in part, the consistency and assurance of the URBS policy. However, as we will see in Section 3, through software architectures we can provide a higher level of assurance regarding the guarantee that must be met concerning a defined URBS policy for an object-oriented application.
3 SOFTWARE ARCHITECTURES AND URBS MECHANISMS
To understand our efforts in this section, it is critical that we define our assumptions concerning the composition of object-oriented software. Basically, the crux of an object-oriented system is an underlying, shared object type/class library to represent the kernel or core functionality. Once such a library has been developed, other software engineers will use it to design and develop tools. Thus, end-users are not able to write programs to access data directly. Rather, end-users utilize tools that embody the appropriate security code to enforce the required URBS policy.
Software architecture [6] is an emerging discipline whose intent is to force software engineers to view software as a collection of interacting components. Interactions occur both locally (within each component) and globally (between components). In understanding interactions, the key consideration is to identify the communication and synchronization requirements which will allow the functionality of the system to be precisely captured. By taking a broader view of the problem-definition process, software architectures permit database needs, performance/scaling issues, and security requirements to be considered. These considerations are critical as large-scale object-oriented software design, development, and usage become increasingly dominant.
Our purpose in this section is to present and critique multiple software architectural variants for URBS enforcement of object-oriented software, with a constant focus on the attainment of consistency and assurance. Our intent in this section is to step back from our previous work [3] and consider the ways that these approaches can fit into an overall architectural scheme to
security enforcement for object-oriented software. Nevertheless, we start this section by briefly reviewing two of our previous approaches, since they set the context for our subsequent discussion related to software architectures. Then, we focus on two architectural styles: layered systems, which are best known from the ISO protocol layers; and communicating processes, which underlie today's client/server paradigm. For both styles, multiple variants are presented and analyzed. The final section critiques the six different variants by comparing and contrasting their capabilities.
3.1 UCLA and GEA Approaches
The URDH-class-library approach (UCLA) employs inheritance to implement the enforcement mechanism by creating a class hierarchy for the URDH. For each URDH node, positive method access is based on the defined assigned methods. At runtime, a user's role guides the invocation of the appropriate methods that are used to verify whether the user's role has the required permissions. From an evolvability perspective, as user roles are added, or as privileges are changed, only the URDH class library must be recompiled.
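A rough Java sketch of the UCLA idea is given below (an assumed structure, not the generated class library itself): each URDH node is a class, subclasses inherit and extend the assigned methods of their parent, and a runtime check against the current role object decides whether an invocation may proceed.

```java
import java.util.Set;

// Sketch of an inheritance-based URDH class hierarchy: each node is a class,
// a subclass inherits its parent's assigned methods and may add more, and a
// tool asks the role object at runtime whether an invocation is allowed.
public class UclaSketch {

    static class UrdhNode {                      // root of the URDH hierarchy
        Set<String> assignedMethods() { return Set.of(); }
        final boolean mayInvoke(String method) {
            return assignedMethods().contains(method);
        }
    }

    static class MedicalStaff extends UrdhNode {
        @Override Set<String> assignedMethods() { return Set.of("readChart"); }
    }

    static class Physician extends MedicalStaff {
        @Override Set<String> assignedMethods() {
            Set<String> s = new java.util.HashSet<>(super.assignedMethods());
            s.add("prescribeMedication");        // privilege added at this node
            return s;
        }
    }

    public static void main(String[] args) {
        UrdhNode role = new Physician();         // the current user's role
        System.out.println(role.mayInvoke("prescribeMedication"));               // true
        System.out.println(new MedicalStaff().mayInvoke("prescribeMedication")); // false
    }
}
```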
The generic exception approach (GEA) utilizes reusable template classes to realize a significant core of generic code that encapsulates the URBS policy. In GEA, when a method is invoked, the user role of the current user is checked to verify if access can be granted. If the user's role doesn’t permit access, an exception is thrown, and the invoking method will not allow its functionality to be executed and affect instances. The code in the GEA security template is hidden from the software engineer. Software reuse is promoted since the template is reused by all classes that require URBS enforcement.
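The following sketch captures the spirit of GEA with a reusable generic guard (an assumed API, not the authors' template classes): the guard checks the current role before executing the protected body, and throws an exception otherwise so the guarded functionality never affects any instance.

```java
import java.util.Set;
import java.util.function.Supplier;

// Sketch of the generic-exception idea: a reusable guard checks the current
// user's role before running the guarded body; if the role is not authorized,
// an exception is thrown and the method's functionality never executes.
public class GeaSketch {

    static class UrbsViolation extends RuntimeException {
        UrbsViolation(String msg) { super(msg); }
    }

    // Reusable for every class that needs URBS enforcement.
    static <T> T guarded(String role, Set<String> authorizedRoles,
                         String method, Supplier<T> body) {
        if (!authorizedRoles.contains(role)) {
            throw new UrbsViolation(role + " may not invoke " + method);
        }
        return body.get();                       // only authorized roles reach the body
    }

    public static void main(String[] args) {
        Set<String> mayPrescribe = Set.of("Physician");
        String result = guarded("Physician", mayPrescribe, "prescribe",
                () -> "prescription written");
        System.out.println(result);

        try {
            guarded("Nurse", mayPrescribe, "prescribe", () -> "should not happen");
        } catch (UrbsViolation e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```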
3.2 Architectural Alternatives
From a software architecture perspective, the URBS enforcement mechanism can be located in many different places and function in many different ways. It can be an integral part of the OT/class library, to be automatically included when any tool utilizes a portion of the library. Alternatively, it may be an independent and self-contained library that is compiled with each application tool, similar in concept to a math library being included. Other choices could employ a separately executing process through which all security requests must be handled. Regardless of the choice, the key underlying characteristic must be the attainment of high consistency and assurance. This must be balanced against the need to minimize the amount of knowledge a software engineer must have on URBS and to have an approach that is evolvable, since object-oriented software and its security policy will change over time.
To standardize terminology regarding the assumptions on object-oriented systems given in the introduction of Section 3, we define:
**AppCL:** Represents the shared, object-oriented class library for an application.
**SCL:** Represents the security class library for an application that embodies URBS definition and enforcement.
**TCL:** Represents the tool class library for individual tools (e.g., a patient GUI, an admissions subsystem, etc.) against the application.
Note that when the L is dropped from either AppCL or SCL, we are referring to an individual class of the library. The remainder of this section explores layered systems, and communicating processes and the client/server paradigm, as software architectural alternatives for URBS enforcement. For each alternative, multiple variants are presented and discussed, and then analyzed with respect to: the level of consistency and assurance that each variant provides for security concerned users; the dimensions of evolvability, which is critical since both the URBS policy and object-oriented software tend to be dynamic over time; and, the impact of the absence/presence of a persistent store.
(a) **Layered Systems**
*Layered systems* are a classic technique for software architectures, where layers of functionality are built upon one another to provide a controlled environment for access to information. In Figure 1, there are two layered system variants for URBS enforcement: LS1 an application-based approach on the left, and LS2 a class-based approach on the right. In both variants, security is at the level of the method invocation, which is processed by the SCL prior to its actual runtime call against an instance of a class in the AppCL. In either case, the SCL can be either the UCLA or GEA approach. In the LS2 variant, each individual class handles the method invocations that apply to its instances as they are received by the various tools. The difference is one of granularity. In LS1, security is managed at the application level overall, and once it has been determined that the tool can invoke a method, it is passed through to the involved instance or instances. In LS2, security is managed at the instance level only. This may cause a problem when instances refer to other instances, i.e., a security request by the tool involves multiple instances of either the same or different classes.
From a consistency/assurance perspective, it appears that LS1 has the advantage, since all of the method invocations must pass forward through the security layer for authentication and all results must pass back for enforcement. That is, when utilizing a tool and its various options, users end up calling various methods based on their user role (UR) and under the control of the tool. However, variant LS2's view of allowing each instance to maintain its own security is superior to LS1 from a software evolution perspective, since changes to the security policy of one class may not affect the policies of other
classes. When a persistent store is included, LS1 has the edge, since all accesses must proceed via a common security layer. In LS2, there are potential concurrent access issues if some or all AppCs are connected to a database.
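A compact sketch of the LS1 shape is shown below (assumed class names, no persistence): the tool never touches AppC instances directly, and every method invocation is routed through a single application-wide security layer that consults the URBS policy before passing the call through.

```java
import java.util.Map;
import java.util.Set;

// Sketch of the LS1 layering: the tool never touches AppC instances directly;
// every invocation goes through one application-wide security layer that
// checks the user's role before forwarding the call into the AppCL.
public class Ls1Sketch {

    static class PatientRecord {                 // an AppC in the shared library
        String readChart() { return "chart data"; }
    }

    static class SecurityLayer {                 // the SCL layer of LS1
        private final Map<String, Set<String>> policy;   // method -> roles allowed
        SecurityLayer(Map<String, Set<String>> policy) { this.policy = policy; }

        String readChart(String role, PatientRecord target) {
            if (!policy.getOrDefault("readChart", Set.of()).contains(role)) {
                throw new SecurityException(role + " may not readChart");
            }
            return target.readChart();           // pass the call through to AppCL
        }
    }

    public static void main(String[] args) {
        SecurityLayer layer = new SecurityLayer(
                Map.of("readChart", Set.of("Physician", "Nurse")));
        System.out.println(layer.readChart("Nurse", new PatientRecord()));
    }
}
```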
(b) Communicating Processes - C/S Paradigm
In the communicating processes approach to URBS enforcement, a process-oriented, client/server (C/S) paradigm is adopted. TCL, SCL, and AppCL are integrated into single and/or multiple processes, resulting in a total of four different variants: CP1, CP2, CP3, and CP4. The variants differ in their number of processes and the grouping of the TCL, SCL, and/or AppCL into various processes. Note that for both CP1 and CP2, each SCL and/or AppCL represents the minimal subset needed by the tool/TCL to support its functionality and enforce its security policy.
In Figure 2, variant CP1 is given, which is similar to LS1, except that each tool is compiled as a separate, standalone process. In this case, SCL and AppCL are analogous to a math library that is compiled when needed by the software. Functionally, within each process, TCL sends method invocations to SCL which in turn passes them through to AppCL according to the URBS policy. Results are passed back from AppCL to SCL, which may then filter the response before passing it back to TCL. Note that SCL and AppCL in each process represent those subsets of the class libraries needed by each tool, and that either UCLA or GEA can be the enforcement mechanism realized within SCL.
Figure 1 Application-Based (LS1) and Class-Based (LS2) Approaches.
Figure 2 CP1: A Single Process, Non-C/S Approach.
From a consistency/assurance perspective, it would be a requirement that each tool be compiled into a single process with SCL and AppCL included. Thus, the level of assurance and consistency that is attained is tied to the accuracy
and completeness of the URBS policy. But, note that since each tool may have only a portion of the overall URBS policy, consistency becomes a prominent concern whenever changes need to be made, i.e., updates must be made to all tools that use the portion of the policy that changed. Extensibility in CP1 presents major problems. While it is easy to add new tools, and newly added tools won’t affect existing tools, changes to either the SCL or AppCL definitely cause problems. If changes to the SCL are localizable to data files that can be dynamically loaded, then URBS policy changes should be supportable. But, if the changes require the SCL to be rebuilt, unless the compilation/runtime environment supports dynamically linkable class libraries, all affected tools must be recompiled. Changes to AppCL have a dramatic impact on all affected tools and all affected portions of the SCL. In addition, since the AppCL is compiled with each tool, it is unclear whether this approach can successfully work when AppCL is linked to a database.
Variants CP2 and CP3, shown in Figure 3, are both multi-process approaches with clearly defined client/server separation of functionality. In CP2, shown in the left side of Figure 3, each client is a TCL/SCL pair that interacts with a shared AppCL server. In this case, each SCL represents that subset of the overall URBS policy/enforcement that is needed by the specific tool, i.e., if a tool only uses one or two classes, the SCL is that subset of the overall URBS policy for those needed classes. Thus, the URBS policy/enforcement is specifically bound to each tool. Like CP1, the level of consistency/assurance that is attained depends on the realization of the URBS policy within SCL. The fact that the policy is spread across multiple tools does introduce potential consistency concerns when changes to the policy are made. Changes to the URBS policy impact SCL in the same way as CP1. However, there are improvements in changes to AppCL; since it is in a separate process, careful planning will allow some changes to have no impact on the joint TCL/SCL clients. Drastic changes to AppCL (e.g., deletion of classes, additions of classes, major functionality upgrades) are likely to impact SCL thereby requiring the recompilation of tools. UCLA and GEA are tightly linked to AppCL, making them inappropriate for CP2. From a database perspective, the presence of a persistent store within or coupled to AppCL should be supportable and invisible to the clients.
In CP3, shown in the right side of Figure 3, the client is each individual tool (TCL), with the server containing the joint SCL/AppCL functionality. By decoupling the URBS policy/enforcement from each tool, the tool becomes relatively independent from changes to the security policy. Each tool simply makes requests to the joint server and the way that those requests are satisfied can be hidden using typical object-oriented design approaches. Thus, unlike CP1 and CP2, changes to the URBS policy shouldn’t impact tool code. The placement of the entire URBS policy/enforcement in one location greatly improves consistency and assurance, since all changes to the policy occur in one place. This is superior to both the CP1 and CP2 variants. Like CP1, SCL can be realized with UCLA or GEA.
Changes to the URBS policy and/or the AppCL may require that the joint server be periodically rebuilt, i.e., changes to AppCL may still impact SCL. As long as those changes don’t alter the signatures of the various methods/protocols that tools utilize, there should be no impact on the tool code. Basically, the dimension of evolvability allows the easy addition of new tools or new users utilizing existing tools. Database integration of AppCL is the same as CP2. However, from a performance perspective, since all security requests are processed by a joint server, there is the potential that the server will become a bottleneck as the throughput of the system increases, i.e., with more tools, or more users utilizing existing tools.
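The sketch below suggests how the CP3 split might look at the interface level (in-process stand-ins, no real inter-process communication): a tool depends only on a request interface, while the joint SCL/AppCL server hides how the URBS policy is enforced, so policy changes stay out of the tool code.

```java
import java.util.Set;

// Sketch of the CP3 split: the tool holds only a request interface; the joint
// SCL/AppCL server enforces the URBS policy behind that interface, so policy
// changes do not touch the tool code.
public class Cp3Sketch {

    interface InteractionServer {                // what a tool/TCL sees
        String invoke(String role, String method);
    }

    static class JointSclAppClServer implements InteractionServer {
        private final Set<String> physicianMethods = Set.of("readChart", "prescribe");

        @Override public String invoke(String role, String method) {
            boolean allowed = "Physician".equals(role) && physicianMethods.contains(method);
            if (!allowed) return "DENIED " + method;   // enforcement stays in the server
            return "OK " + method;                     // would dispatch into AppCL here
        }
    }

    public static void main(String[] args) {
        InteractionServer server = new JointSclAppClServer();  // tool sees only the interface
        System.out.println(server.invoke("Physician", "prescribe")); // OK prescribe
        System.out.println(server.invoke("Nurse", "prescribe"));     // DENIED prescribe
    }
}
```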
Variant CP4, as shown in Figure 4, is presented as a means to alleviate the remaining consistency, assurance, and performance concerns of CP3. Variant CP4 is truly a multi-process, multi-leveled, client/server architecture. In this case, each TCL is a client to an SCL server that provides security for the entire AppCL, i.e., the SCL’s are replicated. Each SCL, in turn, is a client to the shared AppCL. Like CP2, SCL’s separation from AppCL negates UCLA and GEA as appropriate solutions.
The relationship between each TCL\_i and its respective SCL\_i acquires the advantages of CP3 with respect to: the independence of the tool code from SCL (and AppCL); the ability to add new tools; and, the lack of impact of changes to SCL (and AppCL) on the tool code. The multiple SCL servers to the TCL clients also alleviate a level of performance concerns from CP3, allowing more SCLs to be added as more tools (and hence, more users) need to be served. Consistency and assurance in CP4 maintain the benefits of CP3 over the other two variants: each SCL has the entire URBS policy/enforcement, so any changes to the policy can be made and replicated. CP4 still may have performance bottlenecks with respect to access to AppCL. But those bottlenecks have now been delineated from the SCLs, and can be handled by replacing AppCL by a distributed object-oriented class library with database support.
3.3 Critiquing the Architectural Variants
This section summarizes the evaluative statements for the six variants into a cohesive discussion that clearly compares and contrasts their capabilities. Our first critique is based on the location and structure of the URBS policy/enforcement within each variant, as shown in Table 1. This is important from a consistency and assurance perspective. In LS1, CP3, and CP4, the entire policy/enforcement is present and captured within SCL (replicated in CP4). In
**Table 1** Critiquing Security Policy Location and Structure.

<table>
<tbody>
<tr>
<td>LS1, CP3, CP4</td>
<td>Full/Entire Policy</td>
</tr>
<tr>
<td>LS2, CP1, CP2</td>
<td>Partial-Distributed Across Tools</td>
</tr>
<tr>
<td>Assessment</td>
<td>Key is Modularity of Security Policy</td>
</tr>
</tbody>
</table>
LS2, CP1, and CP2, the policy is partially captured, to the level required by the tool/TCL. From a consistency perspective, whenever the URBS policy changes, there must be assurance that the policy is still enforced by all existing tools. The centralized nature of LS1, CP3, and CP4 lends itself to maintaining assurance after the change. In the case of LS2, CP1, and CP2, the tools/TCLs must be recompiled to insure that all SCLs are updated. Also, since the policy is spread across multiple SC/AppC pairs (in LS2) or is unique to each process (in CP1 and CP2), there is a chance that inconsistencies can arise that impact on assurance, if all recompilations are not carefully performed.
Our second critique, shown in Table 2, involves the impact of changes on
each variant when either the security policy or application classes are changed. For LS1, LS2, CP3, and CP4, as long as accepted object-oriented design techniques (abstraction, representation independence, etc.) have been followed, it should only be necessary to recompile SCLs and/or AppCLs; there should be no impact on tools/TCL. In fact, depending on the actual enforcement approach
**Table 2** Critiquing Changes to Policy or Application.

<table>
<tbody>
<tr>
<td>LS1, LS2, CP3, CP4</td>
<td>Recompile SCL and/or AppCL Only - No Impact on Tools/TCL</td>
</tr>
<tr>
<td>CP1, CP2</td>
<td>Rebuild/Change Code Possible Since SCL Linked with TCL</td>
</tr>
<tr>
<td>Assessment</td>
<td>Understand Change Potential</td>
</tr>
</tbody>
</table>
(UCLA, GEA, or other), two situations might occur: when the security policy changes, SCL or SCL/AppCL may need recompilation; and, when some application classes change, AppCL or AppCL/SCL may need recompilation. Both situations are dependent on the interrelation of the enforcement approach to the application classes. For other variants: when the policy changes, CP1 and CP2 must be rebuilt, since SCL is within the same process/client as the tool/TCL; when some application classes change, each tool/TCL in CP1 that uses the subset that has changed must be recompiled. CP2 behaves in a similar fashion to CP3 and CP4 for changes to the AppCL.
A third critique involves the utility of our existing enforcement mechanism approaches (UCLA and GEA) for the architectural variants. As currently designed, both UCLA and GEA are tightly coupled to AppCL. That is, it would be difficult to cleanly and completely separate out the SCL from the AppCL. This being the case, it is apparent that some variants are more conducive to the two approaches than others. Namely, LS1, LS2, CP1, and CP3, can all function with either UCLA or GEA as SCL, since SCL is linked to AppCL. On the other hand, neither CP2 nor CP4 can support UCLA and GEA for the AppCL, without changes to UCLA and GEA that decisively separate the security policy/enforcement from the application class library. It will be necessary to either rework UCLA/GEA, or design new variants to support CP2/CP4.
Our final critique, shown in Table 3, focuses on the case when database interactions are required from the AppCL to a persistent store. LS1, CP2, CP3, and CP4 all separate AppCL from the tools/TCL, meaning that a persistent store can be easily supported. LS2 and CP1 have problems, since each approach utilizes a partial AppCL, for only those classes that are needed by each tool/TCL. Thus, for LS2 and CP1, if database access was to occur, it would likely require that the tools interact to synchronize their requests, which raises many major roadblocks. From a performance perspective, all but CP4 have potential bottlenecks at either the SCL, AppCL, or both. CP4 offers the best solution, and if
needed, the AppCL can be expanded to a distributed object-oriented database to satisfy increases in either tools or users.
**Table 3** Critiquing Persistent Store Support.

<table>
<tbody>
<tr>
<td>LS1, CP2, CP3, CP4</td>
<td>AppCL Separate from Tools/TCL - Persistent Store Easily Supported</td>
</tr>
<tr>
<td>LS2, CP1</td>
<td>Partial AppCL per Tool - Tools Must Synchronize Database Access</td>
</tr>
<tr>
<td>Assessment</td>
<td>Key is Separation of AppCL from Tools/TCL</td>
</tr>
</tbody>
</table>
Finally, based on Tables 1, 2, and 3, we can compare/contrast the capabilities of the variants, as given in Table 4. In Table 4, LS1 has a definite edge over LS2, with respect to attaining assurance/consistency and supporting persistence, since LS1 is highly centralized in nature with one copy of AppC and SCL. However, the distributed nature of AppC and SCL in LS2 gives it an edge when security policy changes occur. In Table 4, CP3 and CP4 are superior and comparable. From assurance/consistency and security policy evolution perspectives, both CP1 and CP2 suffer from partially replicated/distributed SCL and the interactions between tools and the SCL. The partial replication of AppCL hinders CP1 regarding persistency support. CP2 is comparable to CP3 and CP4 since AppCL is not directly linked to either TCL or SCL.
**Table 4** Comparing Communication Process Variants.

<table>
<thead>
<tr>
<th></th>
<th>Assurance/Consistency</th>
<th>Security Policy Evolution</th>
<th>Persistency Support</th>
</tr>
</thead>
<tbody>
<tr>
<td>LS1</td>
<td>Superior</td>
<td></td>
<td>Superior</td>
</tr>
<tr>
<td>LS2</td>
<td></td>
<td>Superior</td>
<td></td>
</tr>
<tr>
<td>CP1</td>
<td>Prob. - SCL is Partially Replicated & Distributed</td>
<td>Major Changes Possible Due to Links of TCL & SCL</td>
<td>Prob. - AppCL Part. & Replicat.</td>
</tr>
<tr>
<td>CP2</td>
<td>Prob. - SCL is Partially Replicated & Distributed</td>
<td>Major Changes Possible Due to Links of TCL & SCL</td>
<td>Superior</td>
</tr>
<tr>
<td>CP3 & CP4</td>
<td>Superior</td>
<td>Superior</td>
<td>Superior</td>
</tr>
</tbody>
</table>
4 **CONCLUDING REMARKS AND FUTURE WORK**
Consistency and assurance for object-oriented systems are critical, since it is their nature to evolve and change over time. When both the application class library and the URBS policy are dynamic, those changes have the potential
to significantly impact the application’s tools, which in turn impacts actual users. The emerging discipline of software architectures can be utilized to examine alternative architectural variants for the tools, URBS policy, and application class library. Three variants that we have presented rank comparably: LS1 - a layered system with a shared URBS policy/enforcement and application class library that is utilized by multiple application tools; CP3 - a client/server solution where each tool is a client to a server that consists of a joint process containing the URBS policy/enforcement and application class library; CP4 - a multi-level client/server solution where each tool is a client, the URBS policy/enforcement is replicated as a server, and the application class library has its own independent server. Of the three, CP4 lends itself to most easily evolving from a centralized to a distributed object-oriented database.
REFERENCES
5 BIOGRAPHY
Steven A. Demurjian is an Associate Professor of CS&E, and is interested in object-oriented design, security, and reuse. T.C. Ting is a Professor of CS&E, and is interested in security, networks, and engineering/design databases. Major John A. Reisner is in the Air Force and is pursuing his doctorate at UConn.
---
Requirements-driven collaborative choreography customization
Abstract. Evolving business needs call for customizing choreographed interactions. However, conventional choreography description languages provide only a partial view of the interaction. Business goals of each participant and organizational dependencies motivating the interaction are not captured in the specification of messaging. Absence of this critical business knowledge makes it hard to reason if a particular customization satisfies the goals of participants. Furthermore, there is no systematic means to assess the impact of change in one participant’s process (local view) on the choreography (global view) as well as on other participants’ processes. To this end, we argue for the benefits of representing choreography at the level of requirements motivating the interaction. We propose a framework that allows participants to collaborate on customizing choreographed interactions, while reconciling their competing business needs. To bridge the worlds of messaging and requirements, we employ an automated technique for deriving a choreography description from the customized requirements.
Keywords: Choreography, Requirements, Evolution, Viewpoints.
1 Introduction
A choreography description specifies the behavioral contract of participants in an electronic interaction from a neutral point of view [1]. Mutual obligations of the participants are specified in terms of constraints on the sequences of messages they can exchange. Using a choreography description language (CDL), such as WS-CDL[2], is becoming a de facto way for describing the “global” view of service-oriented interactions.
However, these languages focus almost entirely on operational aspects such as data formats and control flow. They fall short of capturing the business-domain knowledge behind the interaction. In particular, both the strategic motivations driving the participants to interact and the physical activities they are required to perform in order to fulfill their obligations are not directly represented in choreography.
This deficiency becomes critical when the choreography has to be customized to cater for emergent business needs. It is hard to ensure that a particular choice of customization to an existing choreography satisfies the business goals of participants.
To this end, we propose an approach for customizing choreographed interactions at the level of organizational requirements that motivate the interaction. Organizational requirements models capture intentions of the participants, strategic dependencies driving them to interact, and all activities they undertake during the interaction. This knowledge is essential for rationalizing customizations made to the interaction.
Since business goals of one participant (local view) are often conflicting with those of other participants, a particular choice of customization of the choreography (global view) may not be agreeable to all participants. Hence, we propose a framework that allows participants to collaborate on finding an alternative for customizing the interaction agreeable to all of them.
Our framework adopts Tropos [3] for representing organizational requirements. Tropos provides suitable notations for capturing and reasoning about a choreographed interaction in stakeholder-friendly terms. Furthermore, whereas leading CDLs have been criticized for inadequate formal grounding [4], the Tropos framework employs the formal notations of Formal Tropos (FT) [5] for precisely describing constraints that govern the behavior of participants in the interaction.
The formality of FT allows us to maintain consistency between the two representations, organizational requirements and the choreographed-messaging specification. We have previously shown [6] how organizational dependencies motivate choreographed conversations. We have also detailed how choreographed messaging can be derived from requirements [7]. In this paper we build on this work by proposing a framework that bridges global and local views of the interaction. The framework guides the collaborative customization of the interaction through an automatable process.
The rest of the paper is organized as follows: Section 2 introduces the notion of choreography customization and Abstract CDL (ACDL) using our running example. Section 3 motivates our work and gives an overview of our approach. Section 4 shows how we use Tropos to represent organizational requirements for an interaction. Section 5 outlines how we support impact analysis and traceability. Our customization process is detailed in section 6 and validated in section 7. Related work is discussed in section 8. Section 9 concludes and outlines future work.
2 Choreographed Interactions
A choreography description specifies a contract between a group of interacting roles in terms of sequences of messages they are allowed to exchange. Messaging between actual participants that play the choreographed roles at runtime has to abide by this contract. For example, consider the three roles: a patient, a medical provider (MP), and an insurance company (IC). One potential interaction between these roles can be choreographed as follows:
A patient who needs to visit an MP must get an authorization from her IC first. When the patient receives an authorization number from the IC, she requests an appointment from the MP. After getting the confirmation the patient visits the MP to get examined by a doctor who later sends a prescription. The MP then bills the IC and gets back an electronic payment (Figure 1).
In this paper we use a simple pseudo-language for representing choreography in order to focus on our approach without distracting the reader by the quirky details of a particular CDL. Nevertheless, ACDL constructs are directly drawn from the leading CDL, WS-CDL [2], which makes the mapping to WS-CDL constructs almost trivial.
The three ACDL constructs used in this paper are: “Send” message activity to represent a message sent by a participant, a “Sequence” of activities that have to execute in order, and a “Parallel” composition of activities that can proceed simultaneously. The grammar of the language is given in Figure 2 (terminal symbols in bold). The version of ACDL used here does not include constructs for representing repetition or conditional choice between alternative execution branches.
Message sending activities specify the participant who sends the message, P1, the participant who receives it, P2, and a literal “Message Name” that describes the message. All activities in a “Sequence” have to execute in order, where an activity cannot start unless the previous activity has completed. A “Sequence” activity is completed when the last activity in the sequence is completed. Individual branches of a “Parallel” can proceed concurrently. A “Parallel” activity is only completed when all branches are completed. The NoOp activity is a “do-nothing” activity. Figure 1 shows the ACDL for the medical example. Indentation represents nesting of activities.
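Since the concrete ACDL text of Figure 1 is not reproduced here, the following Java (17+) sketch models the constructs just described (Send, Sequence, Parallel, NoOp) as a small AST and rebuilds the medical interaction from the prose above; the participant and message names come from that description, and the flat sequential structure is an assumption, not the exact Figure 1 listing.

```java
import java.util.List;

// Sketch only: the ACDL constructs as a tiny AST, with the medical
// interaction rebuilt from the prose description in Section 2.
public class AcdlSketch {

    sealed interface Activity permits Send, Sequence, Parallel, NoOp {}
    record Send(String from, String to, String message) implements Activity {}
    record Sequence(List<Activity> steps) implements Activity {}     // run in order
    record Parallel(List<Activity> branches) implements Activity {}  // may run concurrently
    record NoOp() implements Activity {}                             // do-nothing activity

    public static void main(String[] args) {
        List<Activity> steps = List.of(
                new Send("Patient", "IC", "Authorization request"),
                new Send("IC", "Patient", "Authorization number"),
                new Send("Patient", "MP", "Appointment request"),
                new Send("MP", "Patient", "Appointment confirmation"),
                // the physical visit and examination are not messages, so they
                // do not appear in the choreography (see Section 3.2)
                new Send("MP", "Patient", "Prescription"),
                new Send("MP", "IC", "Bill"),
                new Send("IC", "MP", "Electronic payment"));
        Activity medical = new Sequence(steps);
        System.out.println(medical);
    }
}
```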
3 Customizing Choreographed Interactions
We now motivate our work and present an overview of our approach.
3.1 The Problem
It is inevitable that the business requirements driving the interaction will change. As a result, the choreography description needs to be customized to reflect the new contract.
For example, consider an emergent need for the IC to protect itself from abuse of coverage. To protect its assets, the IC needs to ensure that it only covers treatment expenses for eligible patients. One way to achieve this goal is to require the MP to verify the insurance coverage of each admitted patient. The MP is thus required to submit the patient’s insurance information to the IC so that the IC checks the validity of the patient’s insurance policy. The IC will not hold itself liable for covering treatment expenses unless the MP verifies the patient information before submitting a bill. This requirement imposes a constraint on the order in which the MP performs its activities. A naïve realization of this added requirement is to have the MP send a “Verify coverage” message before sending the billing message. With conventional choreography descriptions we face two challenges:
1. It is hard to rationalize this, or any other, choice for capturing the customization without considering how well it satisfies the emergent business need.
2. It is not clear how to assess the impact of any suggested change to the choreography (global view) on the process of each participant (local view). For a participant, e.g. the patient, to agree on the change they have to assess its impact on their business goals.
These issues are exacerbated by the lack of representation of physical activities in choreography descriptions. Physical activities that are part of the interaction contract have to be taken into account when assessing a change.
3.2 Messaging Specification vs. Requirements
To rationalize a customization, it is crucial to consult problem-domain knowledge. However, choreographed messaging descriptions are operational in nature. They do not reveal much of the business rationale behind the interaction but rather focus on how the interaction is to be carried out, i.e. the control flow between activities. On the other hand, organizational requirements provide more abstract descriptions that focus on the why and what aspects of the interaction. We argue that Models of Organizational Requirements (MOR) are superior to messaging descriptions with respect to four representational areas, each of which is crucial to assessing alternative ways for capturing the required customization. These namely are:
1. Intention and Motivation. MOR for the interaction embody essential knowledge about motivations driving each participant including:
- Goals the participant wants to achieve
- Dependencies between participants enabling them to achieve their goals
- Risks and liabilities introduced by the dependencies
2. Refinement Mechanisms: MOR allow for refining high level goals into activities thereby providing rationalization of activities undertaken during the interaction. Refinement relates different levels of abstraction thereby providing traceability all the way down to the messaging specification.
3. Physical Activities. Electronic messaging is only part of the realization of the full interaction. Physical activities that the participants are obliged to perform as part of the interaction contract are not necessarily manifested in the messaging specification. For example, the patient’s visit to the MP and its relation to other activities are not captured in the choreography description in Figure 1.
4. Behavioral Contract. MOR can be annotated with precise specification of participants’ obligations. We employ these behavioral annotations to guide the refinement of models [7]. Furthermore, the use of formal logic enables automatic checking for the satisfaction of participants’ goals.
3.3 Our Proposed Approach
We propose a framework for customizing choreographed interactions that combines the benefits of organizational requirements with the standards-based choreographed messaging descriptions.
While allowing the participants to collaborate on customizing the choreography (global view), our framework allows each participant to evaluate the impact of the customization on their individual business needs (local view). This dichotomy results in the four views (quadrants) of figure 3. We elaborate on Q1 and Q2 in section 4.
Our choreography customization framework entails: representing choreographed interactions at the level of organizational requirements models, performing required customizations to these models in a collaborative manner that benefits from the embodied domain knowledge, and deriving the resulting choreography description in an automated manner.
<table>
<thead>
<tr>
<th>Global</th>
<th>Local</th>
</tr>
</thead>
<tbody>
<tr>
<td>Actor-Dependency Model</td>
<td>Goal-Activity Models</td>
</tr>
<tr>
<td>Q1: Actors, high-level goals, and organizational dependencies</td>
<td>Q2: Goal-activity refinement for one actor</td>
</tr>
<tr>
<td>Choreography</td>
<td>Business Process</td>
</tr>
<tr>
<td>Q3: Observer point-of-view messaging specification</td>
<td>Q4: Specification of messages sent/received by one actor</td>
</tr>
</tbody>
</table>
Fig. 3. The four views of our choreography customization framework.
4 Modeling Interaction Requirements
Tropos [3] is an agent-oriented software development methodology with a focus on organizational requirements at various levels of abstraction. We use Tropos for modeling interaction requirements as it provides a suitable framework for representing and reasoning about the business context for a choreographed interaction. Its models capture goals of participants (actors) in the interaction, mutual dependencies that motivate them to interact, and activities they undertake to fulfill their goals. We introduce how we model the global view of a choreographed interaction using Actor-Dependency (AD) models, how we model the local view using Goal-Activity (GA) models, and how behavioral dynamics of the model are described using FT.
4.1 Global View: AD Modeling
Actor-Dependency (AD) models provide a notation for representing the global view of the interaction at a high-level of abstraction by capturing the actors (participants) in the interaction, their high-level goals, and the inter-dependencies driving them to interact. Figure 4 is an AD model representing the medical interaction at a high-level. An actor is an active entity that performs actions to achieve its goals. The patient, the MP, and the IC are all actors. Model elements can either be internal to an actor (inside the dotted ellipse) or define dependencies whose fulfillment is delegated to other actors. An actor may depend on another for fulfilling a goal, performing an activity, or making some resource available.
A goal is a state of the world desired by one of the actors. For example, the “Get Treated” goal represents the patient’s desire to get cured from an ailment. An activity is an abstraction of a course of action with well-defined pre- and post-conditions. The patient is required to perform the “Appear for Exam” activity to visit the MP’s office. A resource is an informational or physical entity. For example, the “Payment” resource represents the compensation that the MP gets from the IC in return for providing services to the patient.

4.2 Local View: GA Modeling
To detail the specification of the interaction, we successively refine AD models into Goal-Activity (GA) models [3]. Each GA model represents an actor’s local view of the interaction. In the process, goals are refined into sub-goals and eventually realized by activities. Each actor considers and evaluates refinement alternatives based on how well they satisfy their goals [8]. Activities can be further refined into sub-activities that are either implemented by a service or carried out by a human agent.
Figure 5 shows the GA models of both the MP and the patient. Goals and activities internal to an actor are refined inside the dotted ellipse for that actor. Each actor takes responsibility for carrying out their internal activities during the interaction. For example, the “Get Treated” goal is refined into activities to get an authorization from the IC followed by getting a prescription from the MP. The latter is further refined into activities for setting up an appointment followed by visiting the MP and then receiving a prescription from the MP.
The business goals of participants may dictate some ordering of activities. For example, during analysis the MP realized the need to manage the office schedule; hence, the MP requires every patient to set up an appointment before they visit. Physical activities may also impose ordering: for example, the MP has to examine the patient before prescribing treatment.

4.3 Behavioral Specification: Formal Tropos
Behavioral obligations of participants can be captured in formal annotations used by the formal counterpart of Tropos, Formal Tropos (FT). Each activity, goal, resource, and dependency in the model is represented as an FT class, of which many instances may be created during an “execution” of the model. An execution of an FT model corresponds to a possible progression of the interaction. Model execution is useful for verifying that an interaction will proceed as designed. A partial FT specification for the “MakeAppointment” activity and the “Appointment” dependency classes is shown in figure 6, parts of which can be deduced by applying some heuristics [5].
Each class has attributes that define associations with other instances in the model. For example, the “Appointment” class has a “makeApp” attribute that references the associated instance of the “MakeAppointment” class.
Valid progressions of the interaction are specified by constraining the lifecycle of model elements using temporal logic. Creation and Fulfillment conditions define when an instance of a class is created (instantiated) and when it becomes fulfilled.
### 4.3.1 Creation
Creation of a goal or a dependency is interpreted as the moment at which the actor begins to desire the goal or need the dependency to be fulfilled. For example, an “Appointment” dependency will be created if there is an instance of “MakeAppointment” activity that needs to be fulfilled. For an activity, creation is the moment at which the actor has to start performing it. Note how FT specifies that “MakeAppointment” is created when its “super” activity, “Obtain Prescription”, needs to be fulfilled thereby bridging two levels of abstraction. We use Cr(X) to denote the creation event of X.
### 4.3.2 Fulfillment
The fulfillment condition marks the end of the lifecycle of an instance. It should hold whenever a goal is achieved, an activity is completed, or a resource is made available. For example, the “MakeAppointment” activity is fulfilled when the associated “Appointment” dependency has been fulfilled (i.e. the appointment confirmation was received by the patient), whereas an instance of “Appointment” is fulfilled when the MP has completed the activity of scheduling an appointment. We use Fi(X) to denote the fulfillment event of X.
<table>
<thead>
<tr><th colspan="2">Dependency Appointment</th></tr>
</thead>
<tbody>
<tr><td>Depender</td><td>Patient</td></tr>
<tr><td>Dependee</td><td>MP</td></tr>
<tr><td>Attribute</td><td>makeApp : MakeAppointment</td></tr>
<tr><td>Creation condition</td><td>¬Fulfilled(makeApp)</td></tr>
<tr><td>Fulfillment condition</td><td>∃ schedApp : ScheduleAppointment (schedApp.actor = dependee ∧ Fulfilled(schedApp))</td></tr>
</tbody>
</table>
<table>
<thead>
<tr><th colspan="2">Activity MakeAppointment</th></tr>
</thead>
<tbody>
<tr><td>Actor</td><td>Patient</td></tr>
<tr><td>Creation condition</td><td>¬Fulfilled(super)</td></tr>
<tr><td>Fulfillment condition</td><td>∃ a : Appointment (a.depender = actor ∧ a.makeApp = self ∧ Fulfilled(a))</td></tr>
</tbody>
</table>
Fig. 6. FT specification of “Appointment” and “MakeAppointment”.
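To make these lifecycle semantics concrete, the following Python sketch (our illustration, not the authors' tooling; FT conditions are normally temporal-logic formulas checked by a model checker) encodes the Creation and Fulfillment conditions of figure 6 as plain predicates over instances and walks through one admissible trace. The class and attribute names follow figure 6; the trace itself is hypothetical.

```python
# A minimal sketch (not the authors' tooling) of the lifecycle semantics in
# figure 6.  Instances carry created/fulfilled flags and the Creation and
# Fulfillment conditions are encoded as plain Python predicates.
from dataclasses import dataclass


@dataclass
class Instance:
    name: str
    created: bool = False
    fulfilled: bool = False


@dataclass
class MakeAppointment(Instance):      # activity of the patient
    actor: str = "Patient"


@dataclass
class ScheduleAppointment(Instance):  # activity of the MP
    actor: str = "MP"


@dataclass
class Appointment(Instance):          # dependency from patient to MP
    depender: str = "Patient"
    dependee: str = "MP"
    makeApp: MakeAppointment = None


def appointment_creation(app):
    # Creation condition of "Appointment": the associated MakeAppointment
    # activity is not yet fulfilled.
    return app.makeApp is not None and not app.makeApp.fulfilled


def appointment_fulfillment(app, schedule_activities):
    # Fulfillment condition of "Appointment": a ScheduleAppointment performed
    # by the dependee (the MP) has been fulfilled.
    return any(sa.actor == app.dependee and sa.fulfilled
               for sa in schedule_activities)


def make_appointment_fulfillment(ma, appointments):
    # Fulfillment condition of "MakeAppointment": an Appointment whose
    # depender is this actor and whose makeApp attribute refers back to this
    # activity has been fulfilled.
    return any(a.depender == ma.actor and a.makeApp is ma and a.fulfilled
               for a in appointments)


if __name__ == "__main__":
    ma = MakeAppointment("makeApp", created=True)             # Cr(MakeAppointment)
    app = Appointment("appointment", makeApp=ma)
    assert appointment_creation(app)                          # Cr(Appointment)
    app.created = True
    sa = ScheduleAppointment("schedApp", created=True, fulfilled=True)
    assert appointment_fulfillment(app, [sa])                 # Fi(Appointment)
    app.fulfilled = True
    assert make_appointment_fulfillment(ma, [app])            # Fi(MakeAppointment)
    print("the trace respects the conditions of figure 6")
```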
### 5 Traceability and Impact Analysis
Our goal here is twofold: first, to facilitate collaboration between participants in finding a customization on which they all agree; and second, to systematically determine the messaging specification resulting from customizing the requirements models.
5.1 Impact Analysis: Bridging Local and Global Views
To allow participants to assess the suitability of a customization (from their point of view) we must be able to determine the effect of a change in the choreography on any participant’s process. Conversely, we need to determine the impact of changes in any of the participant’s local model on the choreography so that other participants get to assess suggested customizations to the choreography from their point of view.
We employ dependencies to link GA and AD models. GA models explicate which specific activities are at both ends of each dependency, thereby linking the local view of each participant with the global view of the interaction. FT precisely relates the lifecycle of dependencies to that of the activities at both ends of a dependency. For example, in figure 6, note how the state of the “Appointment” dependency determines the state of the “MakeAppointment” activity. The patient cannot make progress on their internal process flow unless the “Appointment” dependency is fulfilled. On the other hand, the “Appointment” dependency is only fulfilled when the MP has completed the “ScheduleAppointment” activity.
5.2 Traceability: Bridging Requirements to Messaging
Using FT to relate the lifecycle of activities to their “super” activity enables us to bridge requirements models to messaging specification. We exploit this traceability mechanism to show how dependencies drive the interaction thereby outlining an abstract view of the choreography [6]. For example, “Appointment” dependency indicates that the patient depends on the MP for obtaining an appointment, which implies that both actors need to interact to fulfill the dependency.
We have exploited these semantics to automate the generation of choreographed messaging from requirements models [7]. First, we infer the set of choreographed events from the creation/fulfillment events of activities and dependencies. Then, we use the semantics of refinement, dependencies, and precedence between activities to derive a partial ordering relation over these events. Finally, from the ordering relation, we generate a choreography description that satisfies the requirements [7]. Even though GA modeling details the activities of the interaction, it provides an important degree of flexibility: it defers the choice of the medium through which activities are carried out. For example, the choreography designer may choose to include the “Prescription” in choreographed messaging or have it fulfilled otherwise, e.g. via paper documents or fax. We take advantage of this by including all activities, including physical activities, in the customization process.
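The sketch below illustrates, under simplifying assumptions, the derivation step just described: Cr/Fi events are generated for activities and dependencies, ordering constraints drawn from refinement, dependency, and precedence semantics are collected, and a topological sort yields one admissible linearization. The concrete event names and constraints are hand-written for the appointment fragment of the example; the tool described in [7] derives them from full Tropos/FT models.

```python
# A hypothetical sketch of the derivation step: generate Cr/Fi events for
# activities and dependencies, collect ordering constraints, and produce one
# admissible linearization with a topological sort.
from graphlib import TopologicalSorter   # Python 3.9+


def cr(x):
    return f"Cr({x})"


def fi(x):
    return f"Fi({x})"


activities = ["MakeAppointment", "ScheduleAppointment"]
dependencies = ["Appointment"]

# Every activity and dependency contributes a creation and a fulfillment event.
events = [e for x in activities + dependencies for e in (cr(x), fi(x))]

# Ordering constraints (predecessor, successor) reflecting the dependency
# semantics of section 5.1: the dependency is created once the depender's
# activity has started and fulfilled only after the dependee's activity ends.
before = [
    (cr("MakeAppointment"), cr("Appointment")),
    (cr("Appointment"), cr("ScheduleAppointment")),
    (fi("ScheduleAppointment"), fi("Appointment")),
    (fi("Appointment"), fi("MakeAppointment")),
]

# TopologicalSorter expects a mapping from each node to its predecessors.
graph = {e: set() for e in events}
for pred, succ in before:
    graph[succ].add(pred)

print(list(TopologicalSorter(graph).static_order()))
```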
6 Choreography Customization Process
Bridging requirements to choreography allows us to perform required customizations to requirements models then derive the customized messaging. On the other hand, bridging the local and global views helps ensure that customizations to a choreography description do not violate the goals of any participant. Thus, our proposed customization process covers the 4 quadrants of figure 3.
The driver behind choreography customization is to satisfy an emergent business need. Several customization alternatives that satisfy this need may exist. Our process enables participants to collaborate on finding an alternative acceptable to all of them. Each participant gets to evaluate the suitability of alternatives from their local point of view as well as suggest other alternatives.
An advantage of our process is that it has no fixed starting point. Customization may start in any of the four quadrants of figure 3 and move between them. Consider the following example manifestation of the process:
1. Participant P1 identifies an emergent business need.
2. P1 considers a change in their GA model (which is in Q2) to fulfill that need.
3. To determine the effect of the suggested change on the global view, we use dependencies to relate P1's GA model to the AD model (moving from Q2 to Q1).
4. The change in the AD model may imply (again Q1 to Q2) changes to the GA model of another participant, P2.
5. P2 evaluates suggested change from their point of view (Q2 again – but for P2).
6. P2 deems the suggested change unacceptable and suggests an alternative way for fulfilling P1’s need.
7. The effect of the alternative on the AD model is worked out (Q2 to Q1).
8. A change in the AD model implies a change in the GA model of P1 (Q1 to Q2).
9. P1 agrees to the suggested alternative.
10. The choreographed messaging is then derived from the customized AD and GA models [7] (moving from Q1 to Q3).
Each step of the process involves one of the following:
1. **Switch Views.** To assume one of the four views of figure 3, our customization framework allows moving between its four quadrants as follows:
- Q1-Q3: Choreographed messaging constraints obtained from AD models as per [7].
- Q1-Q2: Ends of every dependency appearing in the AD model are activities appearing in a GA model, as in section 5.
- Q2-Q4: Ordering of messages sent and received by one participant is constrained by refinement and precedence between the activities of that participant as per [7].
- Q3-Q4: Messages sent/received by every participant appear in the choreographed messaging specification, for example as in [9] and [10].
2. **Evaluate Alternative.** Each participant needs to ensure that a suggested customization is acceptable from their local point of view. When a change is suggested to their GA model (e.g. to reflect a change in the AD model), a participant can verify that the customized model still achieves their business goals. A systematic way to evaluate a GA model is by executing it using a simulator [5] and checking whether every possible execution state is acceptable. If the participant deems one of the states unacceptable, they can then suggest an alternative customization.
3. **Suggest Alternative.** To help a participant suggest an alternative customization, we provide systematic ways for finding alternatives for certain classes of customizations. For example, by bridging requirements to messaging as in section 5, we can automatically enumerate all possible alternatives for a customization that requires adding an event to the choreography along with an ordering constraint [6].
4. **Perform Customization.** Customizations that we tackle here are those that result from incremental, rather than radical, changes to requirements. Section 7 shows examples of adding a dependency, an activity, and a precedence constraint.
5. **Agree on an Alternative.** The customization process concludes when none of the participants objects to the candidate customization alternative. However, there is no guarantee that a solution agreeable to all participants will be found. If a point is reached where at least one of the participants objects to the last remaining candidate solution, the requested customization may be deemed unreasonable. An alternative may then be sought at a higher level requirements model, e.g. as in [3] and [8].
7 Validation
We now revisit the medical example to demonstrate our customization framework, starting the process from the customization to messaging originally suggested by the IC:
**Starting from the initial suggestion by the IC**
1. The IC suggests a customization where they get a message asking them to verify a patient’s coverage prior to receiving a bill (Q4 for IC).
2. This translates (Q4-Q3) to adding a “verify coverage” message that precedes the billing message in the customized choreography description.
3. Consequently (Q3-Q4 for the MP), the MP has to send a “verify coverage” message before sending the billing message (Q3).
4. The “verify coverage” request-response messages imply (Q3-Q1) an added organizational dependency.
**Adding the “Verification” dependency and required activities**
5. The “Verification” dependency is then added to the AD model (Q1).
6. To initiate the fulfillment of the dependency (Q1-Q2) the MP has to perform a “Verify Coverage” activity (Q2 for MP).
7. The new activity is added to the GA model of MP. From the original requirement imposed by the IC, the activity has to precede “Collect Payment”.
8. The first candidate solution that satisfies the new imposed constraint is to have the new activity immediately precede “Collect Payment”.
9. The MP analyzes the suggested solution through simulation (Q2). The MP determines that the solution allows a state where a prescription has already been sent to a patient whose insurance information has not been verified. This state is deemed undesirable because if the coverage is not eventually verified, the MP will not get paid.
10. To find an alternative point for performing the “Verify Coverage” activity, the MP explores other alternatives [6]. Rather than directly preceding the billing activity, “Verify Coverage” can be made to precede any other activity that transitively precedes the billing activity.
11. One such alternative is to have the “Verify Coverage” activity precede “Issue Prescription”. But again, an execution of the model (Q2) deems this unacceptable as it allows a state where a doctor wastes his time examining the patient only to find later that she is not covered by the IC.
12. Continuing in the same manner, the MP finds the first viable solution, which is to have “Verify Coverage” precede “Examine Patient”.
**Adding the “Coverage” dependency and required activities**
13. The MP adds a “Get Coverage Info” activity (Q2) which entails (Q1-Q2) adding a “Coverage Info” dependency (Q1). The MP requests that the patient provides coverage information prior to the examination.
14. The patient adds a sub-activity, “Provide Coverage”, to “Obtain Prescription”. The new activity is assigned to fulfill “Coverage Info” dependency (Q1-Q2).
15. The earliest point at which “Get Coverage Info” can be performed is right before Cr(Examine Patient) and right after Fi(Visit). This implies that the patient will physically carry the coverage information to the MP office.
16. The patient finds this option undesirable as an execution of the model (Q2 for patient) shows it allows states where the patient goes through the trouble of visiting the MP but does not get examined, e.g. if verification fails due to some system outage.
17. Continuing as specified in [6], a viable solution is found where verification is made to precede the Fi(Appointment). Thus, the patient suggests providing coverage information prior to getting the appointment confirmation.
**Agreeing on a customization and concluding the process**
18. To add “Get Coverage Info” right before Fi(Appointment) the MP makes it a sub-activity of “Schedule Appointment”.
19. The MP agrees to the patient’s suggestion.
20. All participants agree to the suggested solution.
21. Having agreed on a customization, the choreography messaging is then derived automatically from the Tropos models.
Figure 7 summarizes the customizations made to the Tropos models. Feeding the customized Tropos models to our choreography derivation tool [7] yields the ACDL description shown in figure 8. Note that a design decision was made to realize “Prescription” as a messaging, rather than physical, activity.

8 Related Work
Most of the research on choreography has focused on representation [11], generating process skeletons [12], and verifying the compliance of the collective behavior of a set of processes with a choreography description [13]. While highly-dynamic service interactions have been a long-sought goal [14], choreography customization is an emerging area [15] with little support for business-level reasoning [4].
Although our work shares the spirit of attempts to integrate commitments with Tropos [16], [17], our structured customization process and automated derivation set our approach apart, especially since it is not clear in [17] how activities can be related to messaging. The Amoeba methodology [18] for evolving cross-organizational interaction is promising, although it does not adequately distinguish between the local and global views of the interaction, thereby obscuring the needs of each participant.
Most of the work addressing customization of service interactions focused on adapting orchestrations [19] [20] rather than choreography. More importantly, with the exception of [21], the business needs driving the interaction are not addressed.
Representing organizational requirements for distributed actors is well established [22], as is evolution in agent-oriented systems [23]. However, neither has yet been applied to choreographed service interactions in a way that explicates the multiple views on the interaction. Our work is consistent with the dichotomy given in [24], although that work does not address customization. Otherwise, relating viewpoints in service interactions has been established only at the messaging level [9]. Attempts to relate choreography to business rules have also only addressed operational aspects [25].
Finally, although UML activity diagrams [26] are widely used to represent choreographed interactions, the formality and the levels of abstractions of Tropos [3] make it superior for analyzing business goals and reasoning about their satisfaction.
9 Conclusions and Further Work
Ever-changing business needs call for customizable choreography descriptions. Conventional CDLs are not well-suited for customization as they embody little of the domain knowledge required to reason about participants’ goals. In particular, the business goals of participants and strategic dependencies motivating the interaction are not explicitly represented. We proposed representing choreographed interactions at the level of organizational requirements. Tropos models embody knowledge about the goals of the participants, the dependencies driving the interaction, and all activities performed during the interaction including physical activities not represented in conventional CDLs.
We proposed a framework that enables participants to collaborate on customizing the choreography (global view) while at the same time ensuring their individual business needs (local view) are satisfied. We utilized the formality of FT to analyze the impact of choreography customization on each participant’s processes. We provided systematic ways for finding customization alternatives and evaluating them.
Once participants have agreed on an alternative, we use our automated technique to derive the customized messaging specification from Tropos models. Using an example, we demonstrated how our framework exploits domain knowledge embodied in requirements models to decide how the required customization is to be performed.
The generated ACDL is a skeleton that needs to be refined in a design phase, e.g. by specifying message data types. In particular, ACDL employs request-response messaging, whereas more complex patterns may realistically be needed. We will exploit FT to infer more detailed messaging patterns, such as repetition and branching. Furthermore, we plan to formalize the data flow aspects of our analysis.
References
[7] A. Mahfouz, L. Barroca, R. Laney, and B. Nuseibeh, "From Organizational Requirements to Service Choreography," accepted for publication in SEASS'09, co-located with ICWS'09, July 6-10, Los Angeles, USA.
---
Abstract
This document describes the processor-specific definitions for ELF for the Application Binary Interface (ABI) for the ARM architecture.
Keywords
Object files, file formats, linking, EABI, ELF
Licence
1. Subject to the provisions of clause 2, ARM hereby grants to LICENSEE a perpetual, non-exclusive, nontransferable, royalty free, worldwide licence to use and copy this ABI Specification solely for the purpose of developing, having developed, manufacturing, having manufactured, offering to sell, selling, supplying or otherwise distributing products which comply with this ABI Specification. All other rights are reserved to ARM or its licensors.
2. THIS ABI SPECIFICATION IS PROVIDED “AS IS” WITH NO WARRANTIES EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO ANY WARRANTY OF SATISFACTORY QUALITY, MERCHANTABILITY, NONINFRINGEMENT OR FITNESS FOR A PARTICULAR PURPOSE.
Proprietary notice
ARM and Thumb are registered trademarks of ARM Limited. The ARM logo is a trademark of ARM Limited. All other products or services mentioned herein may be trademarks of their respective owners.
## Contents
1 ABOUT THIS DOCUMENT 4
1.1 Change control 4
1.1.1 Current status and anticipated changes 4
1.1.2 Change history 4
1.2 References 4
1.3 Terms and abbreviations 5
1.4 About the licence to use this specification 5
1.5 Acknowledgements 5
2 SCOPE 6
3 INTRODUCTION 7
3.1 Platform Standards 7
4 OBJECT FILES 8
4.1 Introduction 8
4.2 ELF Header 8
4.2.1 ELF Identification 9
4.3 Sections 9
4.3.1 Special Section Indexes 9
4.3.2 Section Types 9
4.3.3 Section Attribute Flags 9
4.3.4 Special Sections 9
4.3.5 Section Alignment 10
4.4 String Table 10
4.5 Symbol Table 10
4.5.1 Weak Symbols 10
4.5.1.1 Weak References 10
4.5.1.2 Weak Definitions 11
4.5.2 Symbol Types 11
4.5.3 Symbol Values 11
4.5.4 Symbol names 11
4.5.5 Sub-class and super-class symbols [optional] 12
4.5.6 Mapping symbols 12
4.5.6.1 Section-relative mapping symbols 12
4.5.6.2 Absolute mapping symbols 13
4.6 Relocation 13
4.6.1 Relocation codes 13
4.6.1.1 Mandatory relocation types 13
4.6.1.2 Platform specific relocation types 16
4.6.1.3 Private relocation types 16
4.6.1.4 Unallocated relocation types 16
4.6.2 Idempotency 16
5 PROGRAM LOADING AND DYNAMIC LINKING 17
5.1 Introduction 17
5.2 Program Header 17
5.3 Program Loading 17
5.4 Dynamic Linking 17
1 ABOUT THIS DOCUMENT
1.1 Change control
1.1.1 Current status and anticipated changes
This document supersedes ARM ELF, Document Number SWS ESPC 0003 B-02.
This DRAFT specification can be changed or updated by ARM without notice. Issue and version number will change on republication. The material contained herein is believed to be accurate, but is known to be incomplete. Anticipated changes include:
- Typographical corrections.
- Clarifications.
- Outstanding defect reports.
- Addition of detail and further relocation types to §4.6, Relocation.
- Completion and correction of sections flagged by yellow highlight.
- Completion of skeleton §5, Program Loading and Dynamic Linking.
1.1.2 Change history
<table>
<thead>
<tr>
<th>Issue</th>
<th>Date</th>
<th>By</th>
<th>Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.2</td>
<td>31st October 2003</td>
<td>Lee Smith</td>
<td>First public release.</td>
</tr>
<tr>
<td>0.3</td>
<td>1st December 2003</td>
<td>Richard Earnshaw</td>
<td>Second public release.</td>
</tr>
</tbody>
</table>
1.2 References
This document refers to, or is referred to by, the following documents.
<table>
<thead>
<tr>
<th>Ref</th>
<th>Title</th>
</tr>
</thead>
<tbody>
<tr><td>AAELF</td><td>ELF for the ARM Architecture (this document).</td></tr>
<tr><td>AAPCS</td><td>Procedure Call Standard for the ARM Architecture.</td></tr>
<tr><td>BSABI</td><td>ABI for the ARM Architecture (Base Standard).</td></tr>
<tr><td>EHABI</td><td>Exception Handling ABI for the ARM Architecture.</td></tr>
</tbody>
</table>
1.3 Terms and abbreviations
This document uses the following terms and abbreviations.
<table>
<thead>
<tr>
<th>Term</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABI</td>
<td>Application Binary Interface: 1. The specifications to which an executable must conform in order to execute in a specific execution environment. For example, the Linux ABI for the ARM Architecture. 2. A particular aspect of the specifications to which independently produced relocatable files must conform in order to be statically linkable and executable. For example, the C++ ABI for the ARM Architecture, the Run-time ABI for the ARM Architecture, the C Library ABI for the ARM Architecture.</td>
</tr>
<tr>
<td>AEABI</td>
<td>EABI (see below) for the ARM Architecture, this [E]ABI.</td>
</tr>
<tr>
<td>ARM-based</td>
<td>… based on the ARM architecture …</td>
</tr>
<tr>
<td>EABI</td>
<td>An ABI suited to the needs of embedded, and deeply embedded (sometimes called free standing), applications.</td>
</tr>
<tr>
<td>ELF</td>
<td>Executable and Linking Format</td>
</tr>
<tr>
<td>OS</td>
<td>Operating System</td>
</tr>
</tbody>
</table>
1.4 About the licence to use this specification
Use of these ABI for the ARM Architecture specifications published by ARM is governed by the simple licence agreement shown on the cover page of this document, and on the cover page of each major component document. Without formalities or payment, you are licensed to use any IP rights ARM might hold in these ABI specifications for the purpose of producing products that comply with these ABI specifications.
Because these specifications may be updated by ARM without notice, we prefer that these specifications should not be copied, but that third parties should refer directly to them, in the same way that we refer directly to the specifications underpinning this ABI, such as the specifications of ELF, DWARF, and the generic C++ ABI.
1.5 Acknowledgements
This specification could not have been developed without contributions from, and the active support of, the following organizations. In alphabetical order: ARM, Intel, Metrowerks, Montavista, Nexus Electronics, PalmSource, Symbian, and Wind River.
2 SCOPE
This specification provides the processor-specific definitions required by ELF [SCO-ELF] for ARM-based systems. The ELF specification is part of the larger System V ABI specification, where it forms chapters 4 and 5. However, it can be used in isolation as a generic object and executable format.
Sections 4 and 5 of this document are structured to correspond to chapters 4 and 5 of the ELF specification. Specifically:
- Section 4 covers object files and relocations
- Section 5 covers program loading and dynamic linking.
There are several drafts of the ELF specification on the SCO web site. This specification is based on the April 2001 draft, which was the most recent stable draft at the time this specification was developed.
3 INTRODUCTION
This section is a placeholder for additional material…
3.1 Platform Standards
4 OBJECT FILES
4.1 Introduction
4.2 ELF Header
The ELF header provides a number of fields that assist in interpretation of the file. Most of these are specified in the base standard. The following fields have ARM-specific meanings.
*e_type*
There are currently no ARM-specific object file types. All values between ET_LOPROC and ET_HIPROC are reserved to ARM.
*e_machine*
An object file conforming to this specification must have the value EM_ARM (40, 0x28).
*e_entry*
The base ELF specification requires this field to be non-zero if an application has an entry point. Some applications may require an entry point of zero (for example, via the reset vector); a platform standard may specify that an executable image always has an entry point, in which case e_entry always specifies the entry point, even if zero.
*e_flags*
The processor-specific flags are shown in Table 4-1, ARM-specific e_flags. Unallocated bits, and bits allocated in previous versions of this specification, are reserved to ARM.
Table 4-1, ARM-specific e_flags
<table>
<thead>
<tr>
<th>Value</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>EF_ARM_EABIMASK (0xFF000000)</td>
<td>This masks an 8-bit version number, the version of the ABI to which this ELF file conforms. This ABI is version 4 (the current version value is 0x04000000). A value of 0 denotes unknown conformance.</td>
</tr>
<tr>
<td>EF_ARM_BE8 (0x00800000)</td>
<td>The ELF file contains BE-8 code, suitable for execution on an ARM Architecture v6 processor. This flag will normally only be set on an Executable file.</td>
</tr>
<tr>
<td>EF_ARM_LE8 (0x00400000)</td>
<td>The ELF file contains LE-8 code, suitable for execution on an ARM Architecture v6 processor. This flag will normally only be set on an Executable file, and only when the ELF file is itself in big-endian format (e_ident[EI_DATA] = ELFDATA2MSB).</td>
</tr>
</tbody>
</table>
XXX More information from V6BE.txt
4.2.1 ELF Identification
The 16-byte ELF identification (e_ident) provides information on how to interpret the file itself. The following values shall be used on ARM systems:
**EI_CLASS**
An ARM ELF file shall contain ELFCLASS32 objects.
**EI_DATA**
This field may be either ELFDATA2LSB or ELFDATA2MSB. The choice will be governed by the default data order in the execution environment. On ARM Architecture v6 it is possible to execute programs that are in the “opposite endianness”; objects with this requirement will be marked with either EF_ARM_BE8 or EF_ARM_LE8 in the e_flags field.
**EI_OSABI**
This field shall be zero unless the file uses objects that have flags which have OS-specific meanings (for example, it makes use of a section index in the range SHN_LOOS through SHN_HIOS). There are currently no processor-specific values for this field and all such values are reserved to ARM.
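The following Python sketch (not part of this specification) shows how a consumer might apply the header checks above: verify EI_CLASS and EI_DATA, confirm that e_machine is EM_ARM (40), and extract the EABI version from e_flags using EF_ARM_EABIMASK. The file path in the usage comment is hypothetical.

```python
# A minimal sketch of the ARM-specific ELF header checks described in
# sections 4.2 and 4.2.1.  Only the fields up to e_flags are read.
import struct

EM_ARM = 40
EF_ARM_EABIMASK = 0xFF000000
ELFCLASS32, ELFDATA2LSB, ELFDATA2MSB = 1, 1, 2


def check_arm_elf_header(path):
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        if ident[4] != ELFCLASS32:
            raise ValueError("EI_CLASS must be ELFCLASS32 for ARM ELF")
        if ident[5] == ELFDATA2LSB:
            endian = "<"
        elif ident[5] == ELFDATA2MSB:
            endian = ">"
        else:
            raise ValueError("unknown EI_DATA value")
        # e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags
        e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags = \
            struct.unpack(endian + "HHIIIII", f.read(24))
        if e_machine != EM_ARM:
            raise ValueError("e_machine is not EM_ARM")
        eabi_version = (e_flags & EF_ARM_EABIMASK) >> 24
        return {"endian": endian, "entry": e_entry,
                "eabi_version": eabi_version}   # 0 means unknown conformance


# Example (hypothetical path):
# print(check_arm_elf_header("app.elf"))
```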
4.3 Sections
4.3.1 Special Section Indexes
There are no processor-specific special section indexes defined. All processor-specific values are reserved to ARM.
4.3.2 Section Types
The defined processor-specific section types are listed in Table 4-2, Processor specific section types. All other processor-specific values are reserved to ARM.
**Table 4-2, Processor specific section types**
<table>
<thead>
<tr>
<th>Name</th>
<th>Value</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>SHT_ARM_EXIDX</td>
<td>0x70000001</td>
<td>Exception Index table</td>
</tr>
</tbody>
</table>
Pointers in sections of types SHT_INIT_ARRAY, SHT_PREINIT_ARRAY and SHT_FINI_ARRAY shall be expressed relative to the address of the pointer.
SHT_ARM_EXIDX marks a section that contains index information for exception unwinding. See EHABI for details.
4.3.3 Section Attribute Flags
There are no processor-specific section attribute flags defined. All processor-specific values are reserved to ARM.
4.3.4 Special Sections
Table 4-3, ARM special sections lists the special sections that are defined.
Table 4-3, ARM special sections
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td>.ARM.exidx</td>
<td>SHT_ARM_EXIDX</td>
<td>SHF_ALLOC + SHF_LINK_ORDER</td>
</tr>
<tr>
<td>.ARM.extab</td>
<td>SHT_PROGBITS</td>
<td>SHF_ALLOC</td>
</tr>
</tbody>
</table>
.ARM.exidx names a section that contains index entries for section unwinding. See EHABI for details.
.ARM.extab names a section that contains exception unwinding information. See EHABI for details.
Additional special sections may be required by some platform standards.
4.3.5 Section Alignment
There is no minimum alignment required for a section. However, sections containing Thumb code must be at least 16-bit aligned and sections containing ARM code must be at least 32-bit aligned.
Platform standards may impose a limit on the alignment that they can guarantee to provide (normally the page size).
4.4 String Table
There are no processor-specific extensions to the string table.
4.5 Symbol Table
There are no processor-specific symbol types or symbol bindings. All processor-specific values are reserved to ARM.
4.5.1 Weak Symbols
There are two forms of weak symbol:
- A weak reference: denoted by st_shndx = SHN_UNDEF, ELF32_ST_BIND() = STB_WEAK.
- A weak definition: denoted by st_shndx != SHN_UNDEF, ELF32_ST_BIND() = STB_WEAK.
4.5.1.1 Weak References
Libraries are not searched to resolve weak references. It is not an error for a weak reference to remain unsatisfied.
During linking, the value of an undefined weak reference is:
- Zero if the relocation type is absolute
- The address of the place if the relocation type is pc-relative
- The nominal base address if the relocation type is base-relative.
See §4.6 Relocation for further details.
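A hedged sketch of this rule follows; the grouping of concrete R_ARM_* codes into absolute, pc-relative, and base-relative classes is left as a parameter because it is relocation-type specific.

```python
# A sketch of the rule above: the value substituted for an unsatisfied weak
# reference depends only on the class of relocation being applied.
def undefined_weak_value(reloc_class, place_address, nominal_base):
    """Return the symbol value S to use for an unsatisfied weak reference."""
    if reloc_class == "absolute":       # e.g. S + A forms
        return 0
    if reloc_class == "pc-relative":    # e.g. S - P + A forms: result is A
        return place_address
    if reloc_class == "base-relative":  # e.g. S - B + A forms: result is A
        return nominal_base
    raise ValueError("unknown relocation class")
```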
4.5.1.2 Weak Definitions
A weak definition does not change the rules by which object files are selected from libraries. However, if a link set contains both a weak definition and a non-weak definition, the non-weak definition will always be used.
4.5.2 Symbol Types
All code symbols exported from an object file (symbols with binding STB_GLOBAL) shall have type STT_FUNC.
All extern data objects shall have type STT_OBJECT. No STB_GLOBAL data symbol shall have type STT_FUNC.
The type of an undefined symbol shall be STT_NOTYPE or the type of its expected definition.
The type of any other symbol defined in an executable section can be STT_NOTYPE. The linker is only required to provide interworking support for symbols of type STT_FUNC (interworking for untyped symbols must be encoded directly in the object file).
4.5.3 Symbol Values
In addition to the normal rules for symbol values the following rules shall also apply to symbols of type STT_FUNC:
- If the symbol addresses an ARM instruction, its value is the address of the instruction (in a relocatable object, the offset of the instruction from the start of the section containing it).
- If the symbol addresses a Thumb instruction, its value is the address of the instruction with bit zero set (in a relocatable object, the section offset with bit zero set).
- For the purposes of relocation the value used shall be the address of the instruction (st_value & ~1).
[aside — this allows a linker to distinguish ARM and Thumb code symbols without having to refer to the map. An ARM symbol will always have an even value, while a Thumb symbol will always have an odd value. However, a linker should strip the discriminating bit from the value before using it for relocation.]
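A minimal sketch of this convention, showing how a consumer might separate the Thumb/ARM discriminator from the address used for relocation:

```python
# Bit zero of st_value distinguishes Thumb from ARM code symbols and is
# stripped before the value is used in relocation arithmetic.
def decode_func_symbol(st_value):
    is_thumb = bool(st_value & 1)
    address_for_relocation = st_value & ~1
    return address_for_relocation, is_thumb


assert decode_func_symbol(0x8001) == (0x8000, True)    # Thumb function
assert decode_func_symbol(0x8000) == (0x8000, False)   # ARM function
```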
4.5.4 Symbol names
A symbol that names a C or assembly language entity should have the name of that entity. For example, a C function called calculate generates a symbol called calculate (not _calculate).
All symbol names containing a dollar character (‘$’) are reserved to ARM.
Symbol names are case sensitive and are matched exactly by linkers.
Multiple conventions exist for the names of compiler temporary symbols (for example, ARMCC uses Lxxx.yyy, while GNU uses .Lxxx). More generally, any symbol with binding STB_LOCAL and type STT_NOTYPE may be removed from an object and replaced with an offset from another symbol in the same section under the following conditions:
- The replacement symbol is not of type STT_FUNC.
- All relocations referring to the symbol can accommodate the adjustment in the addend field (it is permitted to convert a REL type relocation to a RELA type relocation).
- The symbol is not described by the debug information.
- The symbol is not a mapping symbol.
- The resulting object, or image, is not required to preserve accurate symbol information to permit decompilation or other post-linking optimization techniques.
No tool is required to perform the above transformations; an object consumer must be prepared to do this itself if it might find the additional symbols confusing.
4.5.5 Sub-class and super-class symbols [optional]
A symbol $Sub$name is the sub-class version of name. A symbol $Super$name is the super-class version of name. In the presence of a definition of both name and $Sub$name:
- A reference to name resolves to the definition of $Sub$name.
- A reference to $Super$name resolves to the definition of name.
It is an error to refer to $Sub$name, or to define $Super$name, or to use $Sub$ or $Super$ recursively.
A platform standard may mandate support of sub- and super-class symbols.
There are outstanding defects for sub- and super-class symbols DE-316140.
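The resolution rule can be sketched as follows (an illustration only; the symbol spellings follow this section and error cases are omitted):

```python
# A hypothetical sketch of $Sub$/$Super$ resolution against a set of defined
# symbol names.
def resolve(reference, defined):
    if reference.startswith("$Super$"):
        # $Super$name resolves to the original definition of name.
        return reference[len("$Super$"):]
    sub = "$Sub$" + reference
    # A plain reference to name resolves to $Sub$name when both are defined.
    if sub in defined and reference in defined:
        return sub
    return reference


defined = {"calculate", "$Sub$calculate"}
assert resolve("calculate", defined) == "$Sub$calculate"
assert resolve("$Super$calculate", defined) == "calculate"
```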
4.5.6 Mapping symbols
A section of an ELF file can contain a mixture of ARM code, Thumb code and data.
There are inline transitions between code and data at literal pool boundaries. There can also be inline transitions between ARM code and Thumb code, for example in ARM-Thumb inter-working veneers.
Linkers, and potentially other tools, need to map images correctly (for example, to support byte swapping to produce a BE-8 image from a BE-32 object file). To support this, a number of symbols, termed mapping symbols appear in the symbol table to denote the start of a sequence of bytes of the appropriate type. All mapping symbols have type STT_NOTYPE and binding STB_LOCAL.
The mapping symbols are defined in Table 4-4, Mapping symbols. It is an error for a relocation to reference a mapping symbol. Two forms of mapping symbol are supported:
- a short form, that uses a dollar character and a single letter denoting the class. This form can be used when an object producer creates mapping symbols automatically, and minimizes symbol table space
- a longer form, where the short form is extended with a period and then any sequence of characters that are legal for a symbol. This form can be used when assembler files have to be annotated manually and the assembler does not support multiple definitions of symbols.
Table 4-4, Mapping symbols
<table>
<thead>
<tr>
<th>Name</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>$a, $a.<any...></td>
<td>Start of a sequence of ARM instructions</td>
</tr>
<tr>
<td>$d, $d.<any...></td>
<td>Start of a sequence of data items (for example, a literal pool)</td>
</tr>
<tr>
<td>$t, $t.<any...></td>
<td>Start of a sequence of Thumb instructions</td>
</tr>
</tbody>
</table>
4.5.6.1 Section-relative mapping symbols
Mapping symbols defined in a section define a sequence of half-open address intervals that cover the address range of the section. Each interval starts at the address defined by the mapping symbol, and continues up to, but not including, the address defined by the next (in address order) mapping symbol or the end of the section. A section must have a mapping symbol defined at the beginning of the section; however, if the section contains only data then the mapping symbol may be omitted.
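The following sketch shows how a consumer might turn the mapping symbols of one section into such half-open intervals; the input format (a list of address and name pairs plus the section end) is an assumption of the sketch.

```python
# Classify mapping symbols by name and build half-open [start, end) intervals
# covering a section, as described above.
def mapping_kind(name):
    for prefix, kind in (("$a", "arm"), ("$t", "thumb"), ("$d", "data")):
        if name == prefix or name.startswith(prefix + "."):
            return kind
    return None


def mapping_intervals(mapping_syms, section_end):
    """mapping_syms: list of (address, name) pairs for one section."""
    syms = sorted((addr, mapping_kind(name)) for addr, name in mapping_syms)
    intervals = []
    for i, (start, kind) in enumerate(syms):
        end = syms[i + 1][0] if i + 1 < len(syms) else section_end
        intervals.append((start, end, kind))
    return intervals


# Hypothetical section: ARM code, a literal pool, then a Thumb veneer.
print(mapping_intervals([(0x0, "$a"), (0x20, "$d"), (0x28, "$t.veneer")], 0x40))
```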
4.5.6.2 Absolute mapping symbols
Mapping symbols are no longer required for the absolute section. The equivalent information is now conveyed by the type of the absolute symbol.
4.6 Relocation
Relocation information is used by linkers in order to bind symbols and addresses that could not be determined when the initial object was generated.
4.6.1 Relocation codes
The relocation codes for ARM are divided into four categories:
- Mandatory relocations that must be supported by all static linkers
- Platform-specific relocations that are required for specific virtual platforms
- Private relocations that are guaranteed never to be allocated in future revisions of this specification, but which must never be used in portable object files.
- Unallocated relocations that are reserved for use in future revisions of this specification.
4.6.1.1 Mandatory relocation types
Table 4-5, Mandatory relocation types lists the relocation types that must be supported by all linkers. The table shows:
- The type which is stored in the ELF32_R_TYPE component of the r_info field.
- The name of the relocation type.
- The type of place that can be relocated by this relocation. For instructions this is sub-divided into ARM and Thumb instructions and then the type of underlying instruction is further described. From this information it is possible to determine:
- The initial addend, for a REL type relocation
- The appropriate limits for overflow checking
- Any further modifications that may be necessary when writing out the relocated value.
- The size and alignment of the place being relocated (in bytes) and the type of overflow checking that must be performed: Signed, Unsigned or None.
- The computation that must be performed in order to determine the relocation result. The following nomenclature is used
- S denotes the value of symbol referenced in ELF32_R_SYM component of the r_info field.
- A denotes the initial addend. For a RELA type relocation the value is used unmodified. For a REL type relocation the value must be extracted from the place in a manner that is determined by the type of the place.
- P denotes the address of the place being relocated. It is the sum of the r_offset field and the base address of the section being relocated (note that all relocations involving P are of the form S – P, where the symbol referenced is in the same consolidated output section as P, so it is not necessary to know the absolute address of the section being relocated).
- B is the *nominal base address* used for accessing objects in the read-write data areas.
- E is the *nominal base address* used for accessing objects in the executable and read-only areas.
The precise definition of a *nominal base address* is platform defined, but it must be possible for the application to retrieve the value at run time by one of the following methods:
- A pre-determined value
- A value in a known register
- A suitable symbol
- A library call
The platform documentation must describe the appropriate model for each of B and E (they need not be the same).
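The computations listed in Table 4-5 can be sketched directly in terms of S, A, P, B, and E. The sketch below covers a few representative codes and omits the type-specific steps of extracting the initial addend from the place (for REL relocations) and re-encoding the result into the instruction or data word.

```python
# A hedged sketch of the Table 4-5 computations, expressed in terms of the
# S, A, P, B and E values defined above.  Only the arithmetic is shown.
def relocate(code, S, A, P=0, B=0, E=0):
    computations = {
        "R_ARM_ABS32":      S + A,
        "R_ARM_REL32":      S - P + A,
        "R_ARM_PC24":       S - P + A,
        "R_ARM_SBREL32":    S - B + A,
        "R_ARM_ROSEGREL32": S - E + A,
    }
    return computations[code]


# Example: a 32-bit absolute word referring to a symbol at 0x8000, addend 4.
assert relocate("R_ARM_ABS32", S=0x8000, A=4) == 0x8004
```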
### Table 4-5, Mandatory relocation types
<table>
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th>Place</th>
<th>Size/Alignment/Overflow</th>
<th>Computation</th>
</tr>
</thead>
<tbody>
<tr><td>0</td><td>R_ARM_NONE</td><td>None</td><td>0/1/n</td><td>No relocation. Encodes dependencies between sections.</td></tr>
<tr><td>1</td><td>R_ARM_PC24</td><td>ARM B/BL/BLX</td><td>4/4/s</td><td>S – P + A</td></tr>
<tr><td>2</td><td>R_ARM_ABS32</td><td>Data</td><td>4/1/n</td><td>S + A</td></tr>
<tr><td>3</td><td>R_ARM_REL32</td><td>Data</td><td>4/1/n</td><td>S – P + A</td></tr>
<tr><td>4</td><td>R_ARM_PC13</td><td>ARM LDR r, [pc,...]</td><td>4/4/s</td><td>S – P + A</td></tr>
<tr><td>5</td><td>R_ARM_ABS16</td><td>Data</td><td>2/1/u</td><td>S + A</td></tr>
<tr><td>6</td><td>R_ARM_ABS12</td><td>ARM LDR/STR</td><td>4/4/s</td><td>S + A</td></tr>
<tr><td>7</td><td>R_ARM_THM_ABS5</td><td>Thumb LDR/STR</td><td>2/2/u</td><td>S + A</td></tr>
<tr><td>8</td><td>R_ARM_ABS8</td><td>Data</td><td>1/1/u</td><td>S + A</td></tr>
<tr><td>9</td><td>R_ARM_SBREL32</td><td>Data</td><td>4/1/n</td><td>S – B + A</td></tr>
<tr><td>10</td><td>R_ARM_THM_PC22</td><td>Thumb BL/BLX pair</td><td>4/2/s</td><td>S – P + A</td></tr>
<tr><td>11</td><td>R_ARM_THM_PC8</td><td>Thumb LDR r, [pc,...]</td><td>2/2/u</td><td>S – P + A</td></tr>
<tr><td>12</td><td>Reserved</td><td></td><td></td><td></td></tr>
<tr><td>13</td><td>R_ARM_SWI24</td><td>ARM SWI</td><td>4/4/u</td><td>S + A</td></tr>
<tr><td>14</td><td>R_ARM_THM_SWI8</td><td>Thumb SWI</td><td>2/2/u</td><td>S + A</td></tr>
<tr><td>15</td><td>R_ARM_XPC25</td><td></td><td></td><td>Obsolete. Use R_ARM_PC24.</td></tr>
<tr><td>16</td><td>R_ARM_THM_XPC22</td><td></td><td></td><td>Obsolete. Use R_ARM_THM_PC22.</td></tr>
</tbody>
</table>
Table 4-5, Mandatory relocation types (continued)
<table>
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th>Place</th>
<th>Size/Alignment/Overflow</th>
<th>Computation</th>
</tr>
</thead>
<tbody>
<tr><td>32</td><td>R_ARM_ALU_PCREL_7_0</td><td>ARM ADD/SUB</td><td>4/4/n</td><td>(S – P + A) & 0x000000FF</td></tr>
<tr><td>33</td><td>R_ARM_ALU_PCREL_15_8</td><td>ARM ADD/SUB</td><td>4/4/n</td><td>(S – P + A) & 0x0000FF00</td></tr>
<tr><td>34</td><td>R_ARM_ALU_PCREL_23_15</td><td>ARM ADD/SUB</td><td>4/4/n</td><td>(S – P + A) & 0x00FF0000</td></tr>
<tr><td>35</td><td>R_ARM_LDR_SBREL_11_0</td><td>ARM LDR/STR</td><td>4/4/n</td><td>(S – B + A) & 0x00000FFF</td></tr>
<tr><td>36</td><td>R_ARM_ALU_SBREL_19_12</td><td>ARM ADD/SUB</td><td>4/4/n</td><td>(S – B + A) & 0x000FF000</td></tr>
<tr><td>37</td><td>R_ARM_ALU_SBREL_27_20</td><td>ARM ADD/SUB</td><td>4/4/n</td><td>(S – B + A) & 0x0FF00000</td></tr>
<tr><td>38</td><td>R_ARM_RELABS32</td><td>Data</td><td>4/1/n</td><td>S + A or S – P + A</td></tr>
<tr><td>39</td><td>R_ARM_ROSEGREL32</td><td>Data</td><td>4/1/n</td><td>S – E + A</td></tr>
<tr><td>40</td><td>R_ARM_V4BX</td><td>ARM BX r</td><td>4/4/n</td><td>None. Used to mark BX instructions in ARMv4T code.</td></tr>
<tr><td>41</td><td>R_ARM_STKCHK</td><td>ARM ??</td><td>4/2/s</td><td>Reserved for stack-limit checking</td></tr>
<tr><td>42</td><td>R_ARM_THM_STKCHK</td><td>Thumb ??</td><td>4/2/s</td><td>Reserved for stack-limit checking</td></tr>
<tr><td>43-52</td><td>Reserved for Thumb-2</td><td></td><td></td><td></td></tr>
</tbody>
</table>
R_ARM_NONE records that the section containing the place to be relocated depends on the section defining the symbol mentioned in the relocation directive in a way otherwise invisible to the static linker. The effect is to prevent removal of sections that might otherwise appear to be unused.
R_ARM_PC24 is used to relocate an ARM B or BL instruction (and on ARMv5 an ARM BLX instruction). Bits 0-23 encode a signed offset, in units of 4-byte instructions (thus 24 bits encode a branch offset of +/- 2^25 bytes). For a BLX instruction bit 24 additionally encodes the appropriate half-word address of the destination and there is an implicit transition to Thumb state. A static linker may convert a BL to a BLX instruction (or vice-versa) if generating an image for ARMv5 or later. If it is unable to do this (as is the case for B, or BL<cond>, or on ARMv4T) then it must generate a suitable sequence of instructions that will perform the transition to the target. The instruction sequence may make use of the intra-procedure scratch register (IP) and does not need to preserve its value. The relocation must then be recalculated using the address of the sequence instead of S. Compensation for the PC bias (8 bytes) must be factored into the relocation expression by the object producer.
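A hedged sketch of applying a REL-type R_ARM_PC24 to a B or BL instruction follows: the initial addend is recovered from the signed 24-bit field, the S - P + A computation is applied, the result is range-checked, and the field is rewritten. BLX handling, state changes, and veneer generation are omitted, and the example instruction and addresses are hypothetical.

```python
# Apply a REL-type R_ARM_PC24 relocation to a B/BL instruction word.
def sign_extend(value, bits):
    mask = 1 << (bits - 1)
    return (value ^ mask) - mask


def apply_r_arm_pc24(insn, S, P):
    A = sign_extend(insn & 0x00FFFFFF, 24) << 2   # initial addend, in bytes
    X = S - P + A                                  # relocation computation
    if X % 4 != 0 or not (-(1 << 25) <= X < (1 << 25)):
        raise OverflowError("branch target out of range; veneer required")
    return (insn & 0xFF000000) | ((X >> 2) & 0x00FFFFFF)


# Hypothetical example: a BL whose stored addend is -8 (the PC bias), placed
# at 0x1000 and targeting a function at 0x2000.
patched = apply_r_arm_pc24(0xEBFFFFFE, S=0x2000, P=0x1000)
assert patched & 0x00FFFFFF == ((0x2000 - 0x1000 - 8) >> 2) & 0xFFFFFF
```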
R_ARM_PC13 is used to relocate an ARM LDR instruction where the base register for the address is PC. Bits 0-11 encode an unsigned offset in bytes and bit 23 encodes an inverted sign bit from a 13-bit sign-magnitude representation. Compensation for the PC bias (8 bytes) must be factored into the relocation expression by the object producer.
R_ARM_THM_PC22 is used to relocate Thumb BL (and on ARMv5 Thumb BLX) instructions. It is the Thumb equivalent of R_ARM_PC24 and the same rules on conversion apply. Bits 0-10 of the first half-word encode the most significant bits of the branch offset, bits 0-10 of the second half-word encode the least significant bits, and the offset is in units of half-words. Thus 22 bits encode a branch offset of +/- 2^22 bytes. Compensation for the PC bias (4 bytes) must be factored into the relocation expression by the object producer.
R_ARM_V4BX records the location of an ARMv4T BX instruction. This enables a static linker to generate ARMv4-compatible images from ARMv4T objects that contain only ARM code by converting the instruction to MOV PC, r, where r is the register used in the BX instruction. See [AAPCS] for details. The symbol is unused and may even be unnamed.
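A small sketch of this rewrite, assuming an unconditional BX and the standard ARM encodings for BX Rm and MOV PC, Rm:

```python
# Rewrite a flagged "BX Rm" (0x_12FF10 | Rm) as "MOV PC, Rm" (0x_1A0F000 | Rm),
# preserving the condition field and the register operand.
def v4bx_to_mov_pc(insn):
    assert insn & 0x0FFFFFF0 == 0x012FFF10, "not a BX instruction"
    rm = insn & 0xF
    cond = insn & 0xF0000000
    return cond | 0x01A0F000 | rm          # MOV<cond> PC, Rm


assert v4bx_to_mov_pc(0xE12FFF13) == 0xE1A0F003   # BX r3 -> MOV PC, r3
```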
4.6.1.2 Platform specific relocation types
Add these (particularly SVr4 types).
4.6.1.3 Private relocation types
Relocation types 112-127 are reserved for private experiments. These values will never be allocated by future revisions of this specification. They must not be used in portable object files.
4.6.1.4 Unallocated relocation types
All unallocated relocation types are reserved for use by future revisions of this specification.
4.6.2 Idempotency
All RELA type relocations are idempotent. They may be reapplied to the place and the result will be the same. This allows a static linker to preserve full relocation information for an image by converting all REL type relocations into RELA type relocations.
Note: A REL type relocation can never be idempotent because the act of applying the relocation destroys the original addend.
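A brief sketch of the REL to RELA conversion mentioned above; the extract_addend and clear_addend helpers stand in for the per-relocation-type field handling and are assumptions of this sketch, as are the example r_offset/r_info values.

```python
# Lift the addend out of the place into the relocation record, after which
# reapplying the relocation is idempotent.
def rel_to_rela(rel, place_value, extract_addend, clear_addend):
    r_offset, r_info = rel
    addend = extract_addend(place_value)       # A recovered from the place
    return (r_offset, r_info, addend), clear_addend(place_value)


def word32_addend(word):
    # For a Data/32-bit place the whole word is the (signed) initial addend.
    return word - (1 << 32) if word & 0x80000000 else word


rela, new_place = rel_to_rela((0x10, 0x0202), 0xFFFFFFF8,
                              extract_addend=word32_addend,
                              clear_addend=lambda word: 0)
assert rela == (0x10, 0x0202, -8) and new_place == 0
```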
5 PROGRAM LOADING AND DYNAMIC LINKING
This section will be added in a future draft.
5.1 Introduction
5.2 Program Header
5.3 Program Loading
5.4 Dynamic Linking
---
EMC IT’s JOURNEY TO THE PRIVATE CLOUD: APPLICATIONS AND CLOUD EXPERIENCE
A series exploring how EMC IT is architecting for the future and our progress toward offering IT as a Service to the business
Abstract
This white paper focuses on EMC IT’s applications and cloud experience to enable a seamless transition to the private cloud. EMC IT’s vision is to provide IT as a Service in a self-service mode to EMC business units. It also examines EMC IT’s approach in offering Platform as a Service and Software as a Service.
December 2010
# Table of Contents
- Executive summary
- Introduction
- Audience
- Terminology
- Background
- EMC IT’s strategy of applications and cloud experience
- Platforms as a Service
- Database Platforms as a Service
- Application Platforms as a Service
- Software as a Service
- Conclusion
- References
Executive summary
As a large, globally dispersed business, EMC relies on fast turnaround, consistent high performance, and rapid scalability for its IT requirements—regardless of work location or complexity of the IT infrastructure. While point and custom solutions can address business needs, they typically result in higher license costs, inefficient utilization, and increased total cost of ownership (TCO). Tasked with delivering enhanced efficiency and cost savings to the company, EMC® IT has embraced virtualization and cloud computing.
By virtualizing its environment, EMC IT is facilitating an on-demand platform where IT resources can be made available from a virtual pool either within EMC or from any globally located partner data center.
By leveraging cloud computing’s multi-tenancy and elasticity, EMC IT has begun to offer constructed IT solutions as on-demand, scalable services that provide high availability, self-service provisioning, metered usage, and chargeback.
Through these technologies, EMC IT envisions a cloud-based architecture that will reduce TCO through consolidation, while offering a higher ROI, improved effectiveness, and better service. It will also increase EMC’s agility.
This white paper is one in a series describing EMC IT’s initiative toward a private, cloud-based IT infrastructure. To learn more on the background of this initiative, read the white paper EMC IT’s Journey to the Private Cloud: A Practitioner’s Guide.
Introduction
This white paper details EMC IT’s applications and cloud experiences, and focuses on the company’s investment in providing IT as a Service (ITaaS) through the private cloud. It covers the following sections:
- **Background**—Explains why EMC IT embarked on ITaaS.
- **EMC IT’s strategy of applications and cloud experience**—Describes the three types of ITaaS services that EMC IT provides to business units.
- **Platforms as a Service**—Encompasses the objectives and principles of offering Database Platforms as a Service, focusing on Oracle as a Service, Microsoft SQL Server as a Service, and Greenplum® as a Service. Application Platforms as a Service offers brief insight into providing applications tailored to a cloud-based operating environment, along with the platforms needed to support them.
- **Software as a Service**—Describes the objectives and advantages of offering Software as a Service, including a glimpse of business intelligence (BI), enterprise resource planning (ERP), and customer relationship management (CRM) as a service.
Audience
This white paper is intended for IT program managers, IT architects, and IT management.
Terminology
- **Force.com**: This is a leading cloud platform for business applications that offers developers a platform to create rich, collaborative, custom cloud applications.
- **Oracle Real Application Clusters (RAC)**: Oracle Corporation’s software for clustering and high availability in Oracle database environments.
- **SecurID**: RSA® SecurID® performs two-factor authentication for a user accessing a network resource.
- **SpringSource**: VMware® SpringSource® offers a comprehensive suite of products for powering the entire *build, run, manage* enterprise Java application lifecycle and breaking down the barriers between application development and operations.
- **VMware ESX**: VMware ESX® is an enterprise-level virtualization product.
Background
As a leading global enterprise, EMC requires the infrastructure agility and dynamic scalability to meet changing application and business needs. Like many companies, EMC is faced with increasing application complexity, which increases the time and cost to provision infrastructure, platforms, and applications.
Although a large number of point solutions exist, and custom solutions can be developed, EMC wanted to reduce complexity and optimize its IT infrastructure wherever possible.
In addition to building and deploying applications in the cloud, EMC IT’s developers needed to adapt applications to run in a cloud-based operating environment, while providing the security necessary to protect information, rapidly recover from security events, and address compliance and regulatory requirements.
Hence, EMC IT chose cloud computing as the ideal solution to address its challenges and drive business transformation. The goal is to transition the company away from the traditional silo-based environment to a cloud that offers efficiency, flexibility, and scalability.
Offering IT as a Service (ITaaS) encourages cost savings, reduces energy consumption through shared resourcing, and enables a rapid and agile deployment of customer environments or applications. Additionally, ITaaS offers many other benefits including:
- **Agility**—Masking underlying infrastructure complexities, ITaaS enables business users to browse and select relevant services, and IT personnel to quickly and easily provision, configure, and monitor virtual applications, databases, and platforms. It also helps deliver a 50 percent reduction in software platform provisioning time.
- **Architect for the future**—With an ITaaS foundation in place, EMC IT can seamlessly provision for the future with infrastructure, platforms, and applications that scale up and out to meet fluctuating demands.
- **Cost savings**—ITaaS will also help EMC IT reduce real estate, energy, and maintenance costs. By deploying a shared database infrastructure, EMC projected it could save as much as $7 million over five years. By deploying information lifecycle management (ILM) it could save another $3 million over three years.
**EMC IT’s strategy of applications and cloud experience**
By virtualizing its entire infrastructure, EMC IT will be able to allocate IT resources (infrastructure, platforms, and applications) on demand from a virtual pool of components that can be dispersed within EMC or across partner data centers located anywhere in the world. This enables EMC IT to allocate or move resources in response to changing business requirements, as well as to increase efficiency and utilization.
EMC IT is working to provide its business units with three types of ITaaS services, including:
- **Infrastructure as a Service (IaaS):** EMC IT will offer compute, storage, backup and recovery, and networks, individually or as an integrated service.
- **Platforms as a Service (PaaS):** The PaaS initiative includes providing databases and application platforms such as development tools, runtime environments, application frameworks, ILM, and enterprise content management (ECM) as services. These offerings will be tailored to a cloud-based operating environment, founded on the principles of simplicity and elasticity, to ensure self-service and efficient use of IT resources. They will be offered on a number of platforms including SpringSource, and Force.com for application development.
- **Software as a Service (SaaS):** EMC IT will offer widely used applications to business units including BI, ERP, CRM, and master data management. By consolidating and standardizing its infrastructure, streamlining services to internal departments, and providing a more efficient working model, EMC IT will deliver enterprise applications to business units with a high degree of agility. Additionally, decreased provisioning time will give EMC IT a way to reduce costs.
EMC IT’s vision of ITaaS is to deliver all IT components, from infrastructure to enterprise applications, as a service to EMC business units.
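To make the three-tier breakdown above concrete, the sketch below models a minimal self-service catalog along these lines. It is an illustrative assumption only; the class, tier, and offering names are hypothetical and do not describe EMC IT’s actual tooling.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceOffering:
    """One entry in a hypothetical ITaaS self-service catalog."""
    name: str
    tier: str          # "IaaS", "PaaS", or "SaaS"
    description: str

# Illustrative catalog mirroring the three ITaaS tiers described above.
CATALOG: List[ServiceOffering] = [
    ServiceOffering("Compute", "IaaS", "Virtual machines from the shared pool"),
    ServiceOffering("Backup and recovery", "IaaS", "Integrated data protection"),
    ServiceOffering("Oracle Database", "PaaS", "Tiered, consolidated database instances"),
    ServiceOffering("SpringSource runtime", "PaaS", "Application platform for cloud apps"),
    ServiceOffering("Business Intelligence", "SaaS", "Shared BI reporting and analytics"),
]

def offerings_for_tier(tier: str) -> List[ServiceOffering]:
    """Return the catalog entries a business unit could self-provision for a tier."""
    return [o for o in CATALOG if o.tier == tier]

if __name__ == "__main__":
    for offering in offerings_for_tier("PaaS"):
        print(f"{offering.name}: {offering.description}")
```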
**Platforms as a Service**
EMC IT has begun to provide two principal categories of platforms as a service:
- **Database platforms**, including:
- Oracle Database as a Service
- SQL Server as a Service
- Greenplum as a Service
- **Application platforms**, including:
- Application Development as a Service
- Enterprise Content Management as a Service
- Information Lifecycle Management as a Service
- Security Platform as a Service
- Integration as a Service
Database Platforms as a Service
Database as a Service offers business units several benefits including reduced TCO, improved service levels, more efficient management, easier administration, and much stronger compliance. EMC IT’s design principles in setting up its Database as a Service include:
- **Database consolidation**—Disparate databases were consolidated into tiered clusters based on business criticality, required availability, and I/O profile.
- **Information optimization**—Using effective information monitoring tools, EMC reduced duplicate data to optimize the databases.
- **Standardization**—By standardizing hardware and database footprints, EMC achieved consistency, easier management, lower costs, and better performance.
- **Compliance**—EMC embraces common management, administration, and compliance-related policies and procedures.
To provide Database as a Service, EMC IT adopted both the grid-based and the virtualization-based approach toward database virtualization and consolidation. EMC has two principal database platforms, Oracle Database and SQL Server, along with an emerging database platform in Greenplum.
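A minimal sketch of the tiering principle described above, assuming hypothetical tier names and thresholds for business criticality, required availability, and I/O profile:

```python
from dataclasses import dataclass

@dataclass
class DatabaseProfile:
    """Attributes used to place a database in a consolidation tier (illustrative)."""
    name: str
    business_critical: bool
    required_availability: float  # e.g. 0.999 for "three nines"
    io_ops_per_sec: int

def assign_tier(db: DatabaseProfile) -> str:
    """Map a database to a hypothetical consolidation tier.

    Mission-critical, high-availability databases land on the clustered
    (grid-based) tier; everything else is a candidate for a virtualized tier.
    """
    if db.business_critical and db.required_availability >= 0.999:
        return "tier-1-clustered-grid"
    if db.io_ops_per_sec > 10_000:
        return "tier-2-dedicated-virtual"
    return "tier-3-shared-virtual"

print(assign_tier(DatabaseProfile("orders", True, 0.9999, 25_000)))   # tier-1-clustered-grid
print(assign_tier(DatabaseProfile("reporting", False, 0.99, 2_000)))  # tier-3-shared-virtual
```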
**Oracle Database as a Service**—EMC IT created a foundation that offers Oracle Databases as a Service by tiering databases on a consolidated and optimized infrastructure based on the criticality and importance of applications to the EMC enterprise. All mission-critical applications have been consolidated on an eight-node RAC architecture by leveraging Oracle 11g.
EMC IT also virtualized a number of production and non-production databases. The Oracle consolidation and virtualization efforts have helped EMC IT reduce database servers from 55 to four, and databases from 51 to six. This has enabled EMC IT to ensure enough system capacity to run 55 applications within the consolidated Oracle-based grid environment.
Figure 2 illustrates EMC IT’s tiered and consolidated Oracle Database architecture.
**Figure 2. EMC IT’s Oracle consolidation reference architecture**
EMC IT realized several benefits from implementing Oracle Database as a Service, including more than $2.5 million in overall cost avoidance and labor cost reduction. The company also achieved an additional savings of over $1.4 million in cost avoidance related to server replacement costs and decreased need for new capacity additions.
EMC IT has also realized a number of operational and service advantages. For example, implementing Oracle Database as a Service has significantly increased the speed at which EMC IT can provision services to internal departments and business units. Additionally, EMC IT increased its transparency by providing scalable database services against standardized, published service levels. Database as a Service also reduces internal project lifecycles and gives businesses the advantage of a faster turnaround. Another benefit is that it helped EMC IT ensure high availability while reducing data discrepancies, support, and run costs.
**SQL Server as a Service**—To offer SQL Server as a Service on demand, EMC IT has adopted the dual approaches of grid-based consolidation and database virtualization.
The consolidation of the SQL databases enabled EMC IT to offer a more efficient service, including the ability to support mission-critical applications for the business and rapid application development with the integrated Microsoft platform.
EMC IT is pursuing a SQL Server consolidation and virtualization initiative based on the principles of tiered storage, where the clustering of databases is performed in relation to the importance of the information to the business.
In the first phase of this initiative, EMC IT migrated all mission-critical and business-critical applications to a consolidated cluster-based platform, guaranteeing high availability and reducing downtime. Starting with medium-critical business-supporting applications, EMC IT is currently moving more SQL applications to a virtualized platform. The end goal is to have all SQL databases on a consolidated and virtualized platform, providing EMC IT with the ability to offer SQL Server as a Service to business units.
EMC IT has experienced a number of benefits by consolidating its SQL Server infrastructure. While SQL databases grew more than 30 percent in the past three years, EMC did not have to increase support staff. EMC IT was also able to reduce its software licensing costs to Enterprise Editions, and significantly lower its database storage requirements. Through the use of compression, the SQL Server 2008 environment could potentially yield 50 percent (approximately $1 million) in savings in overall labor and infrastructure costs.
**Greenplum Database as a Service**—EMC IT is starting to use Greenplum, a parallel database explicitly meant for large-scale analytical processing, as its next-generation analytical database, with the ability to partition and provide sandbox instances for use by business units.
**Application Platforms as a Service**
EMC IT is also providing its IT workforce with tools to design and build applications tailored to a cloud-based operating environment. This is being achieved by using EMC and partner technologies to provide a platform for developing secure next-generation applications. Applications built on this platform are optimized for virtual, self-managed operating environments. EMC IT’s objectives for this initiative include:
- Leveraging the power of the next-generation cloud platform for application development
- Reducing the footprint of physical machines and simplifying system architectures needed to run and manage business-critical next-generation applications
- Reducing development time and time-to-market for applications by enabling development teams to use rapid and flexible development methodologies
EMC IT’s cloud-based application platforms are being designed for simplicity and flexibility to ensure self-service and efficient use of IT resources. The guiding tenets in building this platform are:
- **Lightweight framework**—The platform must provide interfaces and frameworks that support lightweight, reusable, agile, and aspect-oriented programming.
- **Agile development**—The application development must be optimized with testing and production platforms that scale up or down and shift loads physically and geographically.
- **Service-based**—Most applications need to be redesigned as a service using multi-tenant and usage-based costing methods that can be self-managed and provided on-demand.
- **Efficiency**—The platforms that host the applications must improve the efficiency of system management through efficient programming methodologies.
EMC IT is working toward offering a number of Platforms as a Service including VMforce.com, SpringSource, and Microsoft .NET for application development, to make runtime environments more lightweight and simplify application programming. Other platforms provided to users as a service include application development, ECM, ILM, information security, and IT integration.
**Application Development as a Service**—To effectively leverage the advantages of the private cloud, applications need to be built and deployed into the cloud. EMC IT is working on various methods in which application developers leverage application development platforms to easily build and deploy applications into the cloud. EMC IT is also building these platforms to help business units benefit from private and public cloud services.
**Enterprise Content Management as a Service**—EMC maintains and manages large amounts of unstructured data in various formats including images, documents, audio, and video that must be classified and stored, and that also allows for easy and rapid access by business units. Traditionally, this has been satisfied by siloed content management platforms that were provisioned separately by business units. However, this approach does not facilitate optimal storage utilization or effective or easy access and retrieval across business units.
To address this, EMC IT built a consolidated, scalable platform for hosting unstructured content using tiered storage and centralized management, which supports more efficient provisioning and reduces management costs. The company is currently working on offering this platform with chargeback based on usage and governance frameworks.
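The paper does not detail the chargeback mechanics, but a metered-usage charge for such a shared content platform could be computed along the lines of the hedged sketch below; the per-tier rates are invented for illustration.

```python
from typing import Dict

# Hypothetical metered-usage chargeback for a shared content platform.
# Rates per GB-month are assumptions for illustration, not EMC figures.
RATE_PER_GB_MONTH = {
    "high-performance": 0.50,
    "standard": 0.20,
    "archive": 0.05,
}

def monthly_chargeback(usage_gb_by_tier: Dict[str, float]) -> float:
    """Sum the charge across storage tiers for one business unit for one month."""
    return sum(RATE_PER_GB_MONTH[tier] * gb for tier, gb in usage_gb_by_tier.items())

# A business unit storing 200 GB on standard and 1,000 GB on archive storage:
print(f"${monthly_chargeback({'standard': 200, 'archive': 1000}):.2f}")  # $90.00
```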
**Information Lifecycle Management as a Service**—EMC IT is focusing on providing an efficient and cost-effective data storage platform by reducing infrastructure, resource, and maintenance costs. EMC IT’s ILM platform enables end-to-end information optimization throughout the lifecycle of the data, ensuring the right level of performance for applications at the lowest cost. It also enables EMC IT to retire read-only or unused applications, and to mask and subset data in non-production environments to further optimize and secure information.
To accomplish all of this, EMC IT has deployed the data ILM service.
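To illustrate the masking and subsetting mentioned above, here is a minimal, hedged sketch of preparing production-like data for a non-production copy; the field names and masking rules are assumptions, not EMC IT’s actual ILM service.

```python
import hashlib
import random

def mask_record(record: dict) -> dict:
    """Mask directly identifying fields before data leaves production (illustrative rules)."""
    masked = dict(record)
    masked["email"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12] + "@example.com"
    masked["name"] = "user_" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    return masked

def subset(records: list, fraction: float = 0.1, seed: int = 42) -> list:
    """Keep a reproducible sample of records to shrink non-production copies."""
    rng = random.Random(seed)
    return [r for r in records if rng.random() < fraction]

production = [{"name": "Alice Example", "email": "alice@corp.example", "balance": 120.0}]
non_production = [mask_record(r) for r in subset(production, fraction=1.0)]
print(non_production)
```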
**Security Platform as a Service**—Recognizing the importance of security on the journey to the private cloud, EMC IT developed a number of solutions for implementing information security. For example, EMC IT developed a Secure Managed Infrastructure (SMI) to help administrators securely administer and monitor public networks while keeping them separated from the corporate network; a Governance, Risk, and Compliance (GRC) framework built using EMC’s proven Archer technology to drive policy adherence and govern the network infrastructure; and the Critical Incident Response Center, a converged security operations platform that protects and monitors EMC’s critical IT infrastructure.
Based on the success of these initiatives, EMC IT is now working on providing Information Security as a Service to its business units utilizing leading RSA technologies such as Data Loss Prevention (DLP), RSA Envision®, and SecurID. Additionally, EMC IT is building comprehensive platforms that allow for common identity management and audit transactions that occur in a private cloud-based environment.
EMC IT will also integrate these platforms with governance, risk, and compliance engines that proactively develop and manage information security policies and ensure compliance with legal and regulatory requirements. In the near future, EMC IT plans to provide these integrated security and governance platforms as services to EMC business units, which will be able to implement custom security policies.
Integration as a Service—EMC IT is working on methods of integrating multiple data sources across business units to leverage this data seamlessly for business purposes. This includes end-to-end solutions that can transform data between several sources to meet the specific needs of business units. To provide these integration services, EMC IT included service-oriented architecture/web services, enterprise messaging, and extract, transform, and load (ETL). EMC IT is also developing methodologies to construct information from dispersed sets of data and formats, which are critical in the private cloud. EMC will leverage this expertise in orchestrating end-to-end business processes across cloud service providers and EMC IT’s internal infrastructure.
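As a minimal sketch of the extract, transform, and load pattern named above (the source format, field names, and target format are assumptions for illustration):

```python
import csv
import io
import json
from typing import Dict, List

def extract(csv_text: str) -> List[Dict[str, str]]:
    """Extract: read rows from a CSV source exported by one business unit."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: List[Dict[str, str]]) -> List[Dict[str, object]]:
    """Transform: normalise field names and types into a common schema."""
    return [{"customer_id": int(r["CustID"]), "region": r["Region"].strip().upper()} for r in rows]

def load(rows: List[Dict[str, object]]) -> str:
    """Load: serialise to the (hypothetical) target format consumed downstream."""
    return json.dumps(rows, indent=2)

source = "CustID,Region\n101, emea \n102,amer\n"
print(load(transform(extract(source))))
```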
Benefits of providing Application Platforms as a Service
By offering Application Platforms as a Service, EMC IT has realized many benefits:
- **Improved efficiency**—EMC IT delivers high-quality application infrastructure on demand with minimum time and effort.
- **Agility**—Applications are built on common platforms and databases are consolidated, so EMC IT can quickly and efficiently adapt to new technologies and best practices.
- **Simplicity**—This helps reduce the complexity and redundancy of systems.
- **Availability**—The robust Application Platforms as a Service solution is architected to enable high performance and zero application downtime.
- **Scalability**—The service provides a high degree of scalability and effective dynamic application capacity management.
Software as a Service
EMC IT is also providing commonly used Software as a Service to business units, including services such as BI, ERP, CRM, and master data management. This will enable EMC IT to:
- Unify business definitions and provide a consistent online experience to geographically dispersed users
- Implement consistent application security policies by connecting business applications through a single virtual directory service
- Consolidate process and integration logic outside of individual applications and interfaces
Software as a Service enables EMC IT to streamline the services it delivers to internal departments and to deliver them more efficiently. Its standardized and consolidated infrastructure also enables EMC IT to deliver enterprise applications to business units with a high degree of agility, while reducing provisioning time and costs. EMC IT began its journey toward Software as a Service by offering BI, ERP, and CRM application services to business units.
**Business Intelligence as a Service**—To reduce the TCO of business intelligence, EMC IT brought its several existing business intelligence solutions together under a unified architecture for a Business Intelligence as a Service offering. This consolidation has reduced the number of source feeds and removed data and hardware redundancies, eliminating the risk of data discrepancies from multiple code bases.

Additionally, the unified architecture has delivered significant performance gains, including a 180 percent improvement in batch job performance and a threefold reduction in the storage footprint. EMC IT will continue to unify and expand the architecture to enable self-service BI sandbox offerings to the business units. EMC’s Greenplum massively scalable analytical database is a key design point in offering BI as a service.
**ERP and CRM as a Service**—EMC IT is also exploring ERP as a Service to reduce the overall investment and time required to provision new ERP modules, and increase the return on ERP investments for customers. To accomplish this, EMC deployed global instances of its ERP and CRM environments in a scalable, shared model leveraging the work done in Platforms as a Service and Infrastructure as a Service.
The consolidated ERP infrastructure is an asset for new mergers and acquisitions because it creates a smooth and problem-free integration of organizations that join EMC. It will also help in working more effectively with suppliers, vendors, and partners.
Conclusion
By leveraging the strengths of cloud computing, such as multi-tenancy for business units and elasticity, EMC IT will offer its internal customers on-demand, scalable applications as a service, and achieve higher efficiency and availability, better service, self-service provisioning, chargeback, and metered usage.
Although significant progress has been achieved to date, EMC IT is still on the journey and focused on addressing automation, policy, and governance. Additionally, EMC IT continues to make progress in self-provisioning and metered usage to implement a chargeback policy, along with efforts toward providing platforms that accelerate the development of applications that can run seamlessly anywhere in the private or public cloud. EMC IT is also working on an architecture that will reduce TCO through consolidation.
A cloud-based solution offers the best balance of all these end objectives: lower TCO, higher ROI, enhanced efficiency, and better service—all while increasing the agility and ability of the enterprise. As EMC IT continues its transition to the private cloud, it is working to equip its IT employees with a number of skills across domains to support the successful delivery of ITaaS.
References
The following resources provide additional, relevant information. You can access these documents and sites at www.EMC.com or by contacting an EMC representative:
- EMC IT’s Journey to the Private Cloud: A Practitioner’s Guide
- EMC IT web page at www.EMC.com/EMCIT
- Storage Best Practices for SharePoint and SQL Server recorded webcast
Abstract
The analysis of the combined results from three independent industry focused case studies, undertaken in the area of distributed software development over a period of eight years, has resulted in the identification of ten key factors. These ten factors have been utilised as the basis for the development of the GSD Implementation Model. The objective of the creation and presentation of this model is to provide a practical and systematic approach to address the key activities, infrastructure and support which are required to facilitate effective distributed software development. This approach is inspired by the IDEAL model and divided into five specific phases which are classified as Initiating, Provisioning, Establishing, Managing and Leveraging. The goal of the Initiating phase is to clearly determine why, if and how the distributed development strategy is to be selected and undertaken. The implementation of the Provisioning phase is to ensure that the required infrastructure, processes and support to facilitate successful distributed software development are identified and put in place. The focus of the Establishing phase is to ensure that the development teams are effectively established. The managing phase addresses the day to day requirements of operating efficiently in a distributed environment. The Leveraging phase concentrates on the need to ensure that the structures and procedures are in place so that lessons learned can be documented and leveraged in existing and future projects.
Keywords
1 Introduction
In today’s highly integrated international markets software development is considered a globally sourced commodity [1]. The sustained popularity of this strategy is ascribed to organisations endeavouring to gain and maintain competitive advantage from the globalization of software development [2]. The potential for achieving this advantage is attributed to the benefits provided by labour arbitrage, which offers the opportunity for reduced development costs [3]. This continues to be facilitated by the availability of well educated and technically competent software engineers in low cost centres in Eastern Europe, Latin America, India and the Far East [4, 5]. It is a commonly held belief that these savings can be coupled with the opportunity for round the clock development facilitated by the temporal difference between remote development locations. The logic underpinning this approach is that these two factors can facilitate competitive pricing and reduce time to market, thus enabling companies to compete more effectively by gaining, expanding or maintaining their market share [6].
As many organisations who have implemented a Global Software Development (GSD) strategy have discovered, due to the level of complexity involved in software development, outsourcing to other organisations or offshoring to remote divisions is not a straightforward task [3, 6-8]. Some of the difficulties encountered include such factors as the problem of understanding requirements, testing of systems and the coordination of these types of projects [7]. These difficulties are further compounded by cultural and language differences, lack of communication, geographical and temporal distance from team members and the customer, different process maturity levels, development and testing tools, standards, technical ability and experience. As a result the management of globally distributed software development projects has been recognised as a difficult and complex task [9].
Given all these circumstances, it is not surprising that offshoring and outsourcing software development has proved a complex endeavour and should never be embarked on lightly or without due consideration. A major problem which has emerged in this area is that too often the implementation of an outsourcing or offshoring strategy has been seen as simply the replication of those strategies which are implemented for collocated software development. This short-sighted approach has led to serious problems and numerous failures [2, 7]. It is in this context, and with the objective of helping to address the issues outlined above, that the authors have undertaken to develop the GSD Implementation Model.
2 Three Independent Case Studies
The findings presented in this paper are based on the results from three independent case studies which the authors have undertaken over an eight year period in the area of distributed software development. The first case study was carried out in an Irish company called Irish Computing Solutions (a pseudonym) who implemented a strategy to expand the organisation’s market share by the establishment of local offsite virtual software development teams. Prior to implementing this policy the company operated collocated teams based in the capital (Dublin) who worked exclusively on the development of financial and telecommunications software. In addition the organisation had a software development centre located 150 miles from Dublin. This centre was involved in general application development and maintenance and had lower labour costs than the capital. The objective was to leverage staff at both locations and capitalize on the cost advantage which this strategy offered. A group of twelve offsite engineers was selected and provided with basic training in the technology and process required. Two virtual teams were established, each consisting of six offsite engineers who were partnered with three experienced onsite engineers based in Dublin. Considerable effort was put into providing the communication infrastructure, process and support for both virtual teams. A key objective of this approach was that the onsite engineers would mentor the inexperienced offsite staff and provide effective knowledge transfer. The operation of these teams and their subsequent failure provided the basis for this case study [10].
The second case study focused on what is termed offshore / nearshore software development [1]. The concept of offshore / nearshore is derived from the fact that the research centred on a partnership between a large US based financial organisation Stock Exchange Trading Inc. and an Irish division of a US multinational company Software Future Technologies (both pseudonyms). The US and Irish based sites were geographically distant, but they were considered linguistically and culturally nearshore [1, 11]. This partnership ultimately resulted in the establishment of virtual teams to develop and maintain bespoke financial software. Stock Exchange Trading Inc. was the senior partner in this relationship and had an ongoing requirement for the development and maintenance of this type of software. An unanticipated and urgent requirement arose for the development of new software during the initial stage of establishing the virtual teams. To address this need 70 percent of the Irish team members moved to the US, as a temporary measure for a period of one year to work on collocated teams with their Stock Exchange Trading colleagues. This proved to be a very effective strategy and both groups operated very successfully while collocated within what were to eventually become their virtual teams. It was only when the Irish team members returned to Ireland and the virtual teams were established that serious problems arose. These problems and issues and their ultimate solution have been articulated in detail in [10, 12, 13].
The third case study centred on offshore virtual team software testing and was undertaken in the Irish division of a large US multinational called Computing World International (a pseudonym) who had been operating in Ireland for over twenty years. The Irish division had been very successful and had expanded considerably over that time. During that period a large percentage of the projects undertaken had been offshored from their US parent; therefore, the Irish staff and management were very experienced in having projects offshored to them.
Two years prior to undertaking this case study the organisation’s corporate strategy changed. At that time they initiated a policy of establishing virtual testing teams with the objective of leveraging the technical ability of their Irish staff with the competitive salary levels of their Malaysian test engineers. When this research commenced four virtual testing teams were in operation between the Irish and Malaysian divisions. Some teams were established for over a year and a half while others had only been in operation for a number of months.
This case study focused on two embedded units of analysis. One was a virtual testing team with members located in Ireland and Malaysia which had been in operation for a period of eighteen months. The second was a virtual team with a similar makeup, but had been established for just over six months. The different aspects and findings from this study have been outlined in detail and published in [10, 13-15].
2.1 Research Methodologies
The research methodology employed in the first and second case studies was the action research five-phase cyclical process based approach as defined by Susman and Evered [16] and Baskerville [17]. Action research entails the analysis of the direct intervention of the researcher. This methodology was selected as the most appropriate for both case studies as one of the authors held a management role in the respective organisations researched. The objective in both situations was to leverage the research opportunities which this provided while maintaining the required level of objectivity of both researchers. The third case study required a different approach and research methodology. When this study was undertaken both authors were full-time researchers and were offered the opportunity to undertake extensive on-site research. The objective was therefore to maximize the level of access this opportunity provided. After due consideration this resulted in the selection and implementation of a Yin [18] based embedded case study which incorporated a Strauss and Corbin grounded theory [19] approach to data gathering and analysis.
3 The Development of the GSD Implementation Model
Based on the analysis of the combined results from the three case studies [10, 12-15] ten key factors were identified. It was determined these factors were directly relevant and needed to be specifically addressed in order to establish and facilitate the operation of globally distributed virtual teams. These factors are summarised as follows:
1. Understand why, at what cost and risk a distributed strategy is undertaken
2. The Provision of effective infrastructure, process and documentation
3. The requirement to effectively establish the teams
4. Implement an efficient distributed team project management strategy
5. Ensure the development of common goals, objectives and rewards
6. The need for the clear definition of roles and responsibilities
7. Address issues related to culture, communication, motivation and fear
8. Ensure provision of adequate training and knowledge transfer
9. Facilitate and monitor the operation of collaborative and supportive teams
10. Document and leverage lessons learned
3.1 Foundation of the Model
Reviewing the ten key factors identified by this research, it was considered valuable to examine how they could be utilised to develop a strategy for the establishment, operation and effective management of virtual software teams. It was realised they also had relevance and implications for GSD in general. To address both of these issues a model was developed which highlighted the key areas which needed to be considered and addressed to facilitate successful virtual team operation and globally distributed software development.
When developing this model it was recognised that it required to be clear so that it could be easily understood and implemented, to be practical so that it would be used and to be comprehensive to address the numerous relevant factors and issues which impact on GSD. It was also required to incorporate an element which facilitated recording relevant experience and knowledge gained while establishing and operating the GSD teams. This could then be leveraged to improve existing operations and assist with the implementation of GSD strategies in the future.
It was in this context that the IDEAL℠ model [20] was researched and identified as an appropriate basis for the development of the *GSD Implementation Model*. The original focus and application of the IDEAL℠ model is in the area of Software Process Improvement (SPI). The authors had utilised it as an effective tool in previous research, where its adaptability was successfully leveraged to achieve SPI [21]. Its wider applicability and potential for use outside this specific SPI area has been recognised by the Software Engineering Institute (SEI). It is acknowledged that the model can provide an effective and disciplined approach for the adoption of new software engineering processes, methods and tools. In these circumstances it can also be utilised for establishing the foundation for, and the maintenance of, a long-term improvement strategy [22].

It was recognised that the IDEAL℠ model presented a structure which could be amended to directly address all the relevant requirements and areas of concern which impact on the establishment and operation of GSD teams. It provided a simple but comprehensive framework on which the *GSD Implementation Model* could be based. It also offered a straightforward, practical and extensive approach. Based on all these factors it was considered suitable and was adapted to the specific requirements of the GSD environment. What was proposed was not to mirror the IDEAL℠ model in every aspect, but to utilise its relevant constituent parts and overall approach. Therefore the development of the GSD Implementation Model was based on the basic structure of the IDEAL℠ model, which was expanded and modified to meet the specific requirements and needs of operating in the globally distributed software development environment.
4 The GSD Implementation Model
The ten key factors which our research identified were divided into five distinct phases, which were to be undertaken sequentially. The model as a whole was designed for iterative execution (see figure 1). The five phases are as follows:
Initiating – Determine why, if and how the GSD approach is to be implemented
Provisioning – Ensure provision of effective infrastructure, process and documentation
Establishing – The requirement to effectively establish the GSD teams
Managing – Implementation of an efficient GSD project management strategy
Leveraging – Document and leverage lessons learned for existing and future projects
℠ IDEAL is a service mark of Carnegie Mellon University.
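To summarise the model’s structure in one place, the hedged sketch below encodes the five phases and our reading of which of the ten key factors each phase addresses (based on Sections 4.1–4.5); the mapping is an interpretation, not a table from the paper.

```python
# A hedged sketch of the GSD Implementation Model's structure. The factor
# numbers refer to the ten key factors listed in Section 3; mapping them to
# phases follows our reading of Sections 4.1-4.5 and is not a table from the paper.
PHASES = {
    "Initiating":   [1],                 # why, at what cost and risk
    "Provisioning": [2],                 # infrastructure, process, documentation
    "Establishing": [3],                 # effectively establish the teams
    "Managing":     [4, 5, 6, 7, 8, 9],  # day-to-day distributed project management
    "Leveraging":   [10],                # document and leverage lessons learned
}

def run_iteration(project: str) -> None:
    """Walk the five phases sequentially; the model as a whole is iterated per project."""
    for phase, factors in PHASES.items():
        print(f"{project}: {phase} phase addresses factor(s) {factors}")

run_iteration("pilot-gsd-project")
```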
4.1 Initiating

There is a requirement for organisations considering outsourcing or offshoring part or all of their software development activities to clearly define and articulate their rationale for selecting and implementing such an approach. In some cases justification is simply the result of a perceived cost advantage of implementing a GSD strategy at a corporate level, or the fact that competitors are doing it. In a number of situations this type of rationale has proved very short sighted and led to serious problems. In these circumstances it is important that organisations recognise that the reality can be quite different. GSD projects can and have ended up costing as much or more than if they were collocated. They can also negatively impact on the delivery and quality of the software artefacts produced and the morale and motivation of existing staff [10, 13, 15].
In addition, risk is a key factor which needs to be specifically addressed in the GSD environment: while risk management should be incorporated into all well planned software projects [23, 24], globally distributed development projects carry additional high risk exposure [25]. These risks include delay or failure due to linguistic and cultural differences, motivation issues and temporal distance. All these issues need to be recognised and understood prior to embarking on or implementing such an approach [2, 26]. This can only take place when time is spent gathering and evaluating information on exactly what is involved and on the positive and negative factors which are inherent to operating in a GSD environment.
If it is decided this is the strategy the organisation wishes to implement, the real potential costs and risks involved need to be accurately assessed. Based on these realistic projections the objectives of the strategy should be determined and directly linked to the short and long-term goals of the organisation. Senior management support is key to the success of any distributed software development strategy. Therefore, they must be provided with all the information necessary to allow them to have realistic expectations as to what can be actually achieved. Once the decision to implement this approach has been agreed the most appropriate GSD strategy should be selected.
4.2 Provisioning
Having selected a GSD strategy the infrastructure to support its implementation needs to be determined and put in place. In this context existing tools and processes need to be reviewed, adjusted and augmented. In some low cost locations the availability of a dependable electrical supply and alternative power source need to be considered and addressed. Of equal importance is the availability of an adequate telecommunications infrastructure. Once basic infrastructure has been established across the relevant sites common or compatible tools need to be identified and sourced. This is required to ensure the interoperability of cross-site operations and artefacts. In this context an essential aspect of GSD is the selection and implementation of an effective configuration management system [7]. Due consideration also needs to be given to the selection of appropriate communication tools which are essential when operating in what can largely be an asynchronous environment [3, 12, 14].
Once adequate infrastructure is in place the identification and adoption of a common and effective GSD process needs to be considered [7]. Organisations must reassess and modify their existing processes for use in a distributed environment [26]. This includes the need for more formal methods of collaboration and communication given the loss of informal communication methods [27]. In the GSD situation there is a clear need for a well-defined jointly formulated and documented process to be put in place [21].
4.3 Establishing
The next step is to effectively establish the teams. Team members should be recruited internally and externally based on the technical needs of the project. Provision should be made for technical, cultural and communications training specific to the needs of the GSD environment [15]. The foundation for effective knowledge transfer between team members, regardless of location, should be put in place. This includes leveraging all visits between team sites to develop relationships. A priority from the initial stage is the establishment of a one-team vision and a cooperative approach between team members regardless of location. This has to be actively fostered, developed and monitored [14].
4.4 Managing
There is the need for the development and implementation of an efficient GSD project management strategy which incorporates and addresses the specific requirements of operating in a distributed environment [14, 15]. In this context there is a need to facilitate and ensure the development of common goals, objectives and rewards. This is achieved by specifically addressing the issues, factors and variables that GSD teams are exposed to [3, 7]. There is also the requirement for roles and responsibilities to be clearly defined and articulated to all managers and team members. This is achieved through the use of a common vocabulary which unambiguously outlines this information.
There is also a requirement to address issues which are specifically related to culture, communication, motivation and fear [10]. This is achieved by understanding these issues and ensuring they are monitored and that timely and corrective action is taken to address any problems which arise due to any of these areas. Of equal importance is to monitor the effectiveness of technical training and knowledge transfer. When the requirement for additional training is identified it should be provided. If problems are identified with knowledge transfer they need to be investigated and specifically addressed. There should also be incentives to encourage staff to effectively transfer knowledge.
A cohesive team does not emerge of its own accord from a globally distributed, culturally, linguistically and technically diverse group of individuals, who are separated by geographical and temporal distance [7]. If it is to be put in place, it requires effort and goodwill on all sides. It can happen, but it must be planned, established, supported, monitored and actively developed. It can only take place with effective management where the positive aspects of the GSD environment are effectively leveraged and the negative factors and issues are addressed [12].
4.5 Leveraging
A key activity is leveraging the experience and knowledge gained by implementing a GSD strategy. This is best achieved by analysing and documenting the experience and knowledge gained. This should then be utilised to review what has been achieved and identify areas where further improvements can be made. This information should also be made available and used to directly assist with the management of other existing teams and the establishment and operation of new GSD projects.
5 Conclusion
The GSD Implementation Model provides an overview which is practical and comprehensive in its structured and iterative approach. Within its five phases it addresses the specific requirements of operating in a GSD environment. This is achieved by ensuring the rationale for undertaking this approach is clearly articulated and understood and that realistic objectives and goals are set. Senior management support is secured on achievable expectations based on the accurate evaluation of costs and risks. The required infrastructure, processes and supports are put in place to facilitate the operation of the GSD teams. Time and effort is put into effectively establishing and managing the teams. An effective project management strategy based on the needs of the GSD environment is implemented. Key to the long-term success of this approach is the documenting and leveraging of the experience gained implementing such a strategy. This model has been presented for evaluation to forty-five senior managers who had direct experience of implementing GSD strategies. Their response was very positive, and the consensus was that it was an excellent model to utilise when embarking on a GSD strategy as it highlighted the key areas which need to be specifically addressed.
6 Literature
NB This is a prepublication version of this paper
The handle http://hdl.handle.net/1887/22911 holds various files of this Leiden University dissertation.
**Author:** Haastregt, Sven Joseph Johannes van
**Title:** Estimation and optimization of the performance of polyhedral process networks
**Issue Date:** 2013-12-17
In Chapter 2, we introduced the Polyhedral Process Network model of computation and the PNGEN tool flow which automatically derives PPNs from sequential static affine nested loop programs written in C. We then introduced the ESPAM tool which employs the LAURA model to obtain synthesizable RTL implementations of PPNs. In this chapter, we focus on optimizing the RTL in the aforementioned tool flow. We first investigate shortcomings of the current state-of-the-art techniques and then propose extensions to facilitate more efficient RTL implementations.
3.1 Motivation & Contributions
When implementing industrially relevant applications, such as the sphere decoder application discussed in Chapter 6, and when applying transformations discussed in Chapter 5, we encountered four limitations of the LAURA model and the ESPAM tool. These limitations comprise characterization of functions, incorporation of novel front-end optimizations, handling of more complex domains, and handling out-of-order communication. In this chapter, we present solutions to these four limitations.
First, in the original work describing the LAURA model, only the delay metric of an IP core was considered [ZSKD03, NSD08a]. Such a simplified characterization does not suffice when integrating IP cores generated by HLS tools or when reasoning about system composition. In Section 3.2, we therefore present a more elaborate characterization of IP cores.
Second, the PN tool performs several optimizations that were not taken into account in the original LAURA model. In Sections 3.3 and 3.4, we show how the data reuse and sticky FIFO optimizations can be leveraged in the LAURA model to obtain more efficient RTL implementations.
Third, for complex iteration domains, the evaluation logic of a LAURA processor may become part of the critical path limiting the maximum achievable clock frequency of a system. As a result, the overall throughput of the system is limited. In Section 3.5, we investigate two different approaches to reduce the degradation of the maximum achievable clock frequency.
Fourth, applications with reordering communication could not be implemented using the ESPAM tool. Moreover, the known reordering buffer implementations suffered from read and write penalties compared to non-reordering buffers [ZTKD02]. In Section 3.6, we present a new reordering buffer design with single-cycle read and write latencies that has been integrated into ESPAM. The design enables effortless integration in ESPAM-generated MPSoCs with point-to-point communication. In Section 3.7, we summarize this chapter. The positions of the contributions to the LAURA model are indicated in Figure 3.1.
3.2 IP Core Characterization
The original LAURA model assumes that the IP core that is integrated into the execute unit comes from an external library. Such a library contains IP cores for different functions and possibly multiple IP cores for the same function that differ in performance and resource cost metrics. Being able to characterize an IP core in a concise way is important when considering performance estimations of PPNs in Chapter 4. To systematically distinguish between different IP cores which possibly implement the same function, we introduce the notion of a function implementation.
Definition 3.1 (Function Implementation).
A function implementation is a particular implementation of a process function \( F \). A function implementation is characterized by
• a latency $\Lambda_F$ and
• an initiation interval $II_F$,
where $\Lambda_F \in \mathbb{N}^+$ is the input-to-output delay in clock cycles, and $II_F \in \mathbb{N}^+$ is the initiation interval in clock cycles.
The delay $\Lambda_F$ represents the time between the start of a function execution and the moment at which all output has been produced. In Figure 3.2c, we show a time line of three sequential executions of a function implementation with $\Lambda_F = 6$.
The initiation interval $II$ represents the amount of time between successive starts of a function implementation. Figure 3.2a depicts a function implementation with $II_F = 1$, allowing an execution of the function to be started every clock cycle. As a result, different executions of the function overlap in a pipelined fashion. Figure 3.2b depicts a function implementation with $II_F = 4$, allowing an execution to be started only every four clock cycles. The amount of overlap between different executions is less than in the previous scenario. Figure 3.2c depicts a function implementation with $II_F = \Lambda_F = 6$, resulting in fully sequential executions of the function. This scenario resembles a non-pipelined function implementation. In this thesis, we set $II_F = \Lambda_F$ to model an implementation on a programmable processor on which no overlapped execution of function invocations occurs. A low $II$ implies that the function implementation can deliver a high throughput. However, a low $II$ reduces the opportunities for resource sharing inside a function implementation, resulting in a higher resource cost compared to function implementations with a higher $II$. As such, the $II$ is a key tool in trading off throughput and resource cost of the function implementation.
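As an illustration of how $\Lambda_F$ and $II_F$ interact, the following minimal C sketch (illustrative only, not part of the tool flow; all identifiers are ours) prints the start and completion cycles of three executions for the three scenarios of Figure 3.2.

```c
#include <stdio.h>

/* Cycle in which the k-th execution (k = 0, 1, ...) of a function
 * implementation starts and completes, given its initiation interval
 * II_F and latency Lambda_F (both in clock cycles). */
static unsigned start_cycle(unsigned k, unsigned II_F)
{
    return k * II_F;
}

static unsigned done_cycle(unsigned k, unsigned II_F, unsigned Lambda_F)
{
    return k * II_F + Lambda_F;
}

int main(void)
{
    const unsigned Lambda_F = 6;            /* input-to-output delay      */
    const unsigned IIs[] = { 1, 4, 6 };     /* scenarios of Figure 3.2a-c */

    for (unsigned s = 0; s < 3; s++) {
        unsigned II_F = IIs[s];
        printf("II_F = %u:\n", II_F);
        for (unsigned k = 0; k < 3; k++)
            printf("  execution %u: start %u, done %u\n",
                   k, start_cycle(k, II_F), done_cycle(k, II_F, Lambda_F));
    }
    return 0;
}
```

For $II_F = \Lambda_F = 6$ the sketch reports fully sequential executions, matching Figure 3.2c, whereas $II_F = 1$ gives the fully pipelined overlap of Figure 3.2a.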
### 3.2.1 IP Core Integration
The function implementations in the IP core library may originate from various sources. The corresponding IP cores may be implemented in RTL manually, or the RTL can be automatically derived from a high-level specification using HLS tools. We have successfully implemented IP cores generated by the PICO [Syn10], AutoESL [Xil11], and DWRV HLS tools [YBK+07]. The RTL generated by PICO and

AutoESL can be integrated in a straightforward way by connecting the clock, reset, enable, and data ports to the execute unit [HK09]. The RTL generated by DWRV assumes a shared memory model which is different from the distributed memory model employed in the PPN context. Therefore, integrating DWRV cores requires an additional wrapper which transfers data to and from a memory that connects to the DWRV core [NHS+11].
HLS tools such as PICO or AutoESL characterize a generated fixed-latency core by its latency $\Lambda$ and initiation interval $II$ [Fin10]. In the original LAURA model, only the latency was taken into account and the $II$ value was assumed to be one. To integrate a fixed-delay IP core characterized by $\Lambda$ and $II$ values, we have extended the LAURA model to take IP cores with $II > 1$ into account. Both $\Lambda$ and $II$ are incorporated in the control unit of the generated LAURA HDL. Using the delay value, the control unit enables the write unit at the appropriate times, that is, when valid data is produced by the execute unit. Using the $II$ value, the control unit enables the read unit only at valid $II$ boundaries.
Function implementations with a variable delay cannot be characterized accurately by a single number. Instead, a designer may choose to set $\Lambda$ to the average or worst-case delay value for performance analysis purposes. When integrating a variable-delay IP core, the values $\Lambda$ and $II$ are not taken into account in the LAURA HDL. Instead, the control unit requires the IP core to indicate when it is ready to accept or produce data.
### 3.3 Data Reuse
In applications such as filters, a variable or array element is often written once and subsequently read multiple times. For example, the array element $a[1]$ in Figure 3.3a is written once when $i = 1$ and read when $j = 1$ (for argument $a[j]$) and $j = 2$ (for argument $a[j-1]$). In a PPN derived from the C code, both reads of $a[1]$ are performed by the $\text{accum}$ process. For the relation from $\text{source}$ to $\text{accum}$, the compiler detects data reuse, which means the same token is read more than once from this relation.
A PPN derived from the C code using PNGEN is shown in Figure 3.3b. Channels $F1$ and $F3$ implement the data reuse channel pair for the relation from $\text{source}$ to $\text{accum}$. Channel $F1$ is a regular FIFO which transfers a token when $\text{accum}$ needs it for the first time. Channel $F3$ is a regular FIFO which propagates the token to subsequent iterations of $\text{accum}$.
```c
for (i=0; i<5; i++) {
    source(&a[i]);
}
for (j=1; j<5; j++) {
    accum(a[j], a[j-1], &b[j]);
}
for (k=1; k<5; k++) {
    sink(b[k]);
}
```
a) C code.
b) PPN with data reuse.
Figure 3.3: A program with data reuse.
Figure 3.4: Handling data reuse in a LAURA processor.
In Figure 3.4, we depict part of a LAURA processor for the $\text{accum}$ process of Figure 3.3c. Its read unit contains two multiplexers. The lower multiplexer passes tokens from FIFO $F1$ to the first input of the $\text{accum}$ IP core. The upper multiplexer selects between FIFO $F1$, which is read during the first iteration, and FIFO $F3$, which is read during subsequent iterations, and passes the token to the second argument of the IP core. The write unit contains a single demultiplexer which propagates the IP core output to FIFO $F2$. To handle the reuse, we extend the write unit with an additional output port connected to FIFO $F3$. This output port is driven by the first input of the IP core. A separate reuse evaluation logic block ensures that only tokens that need to be propagated to subsequent iterations are written to $F3$. The reuse evaluation logic block duplicates the expressions from the write unit's evaluation logic for the reuse ports to select the correct output port. Tokens that are reused in subsequent iterations can be written to $F3$ immediately after reading them, irrespective of the IP core latency. We therefore connect the counters of the read unit to the reuse evaluation logic block.
3.4 Sticky FIFOs
As an optimization of data reuse, PNGEN can classify a data reuse channel pair as a sticky FIFO. If the same token is transferred over a FIFO to multiple subsequent iterations of a process, then PNGEN classifies the FIFO as a sticky FIFO and removes the self-loop. During a regular read operation on a sticky FIFO, the receiving process stores the token in a register. Subsequent iterations that need the same token then read from the register instead of the FIFO. This reduces inter-process communication and the number of write operations the producing process has to perform.
We implement a sticky FIFO by replacing the read multiplexer of a function argument with a “sticky read multiplexer”. In Figure 3.5, we illustrate both types of read multiplexers. Figure 3.5a depicts the situation where all of the three input ports of the read multiplexer are connected to regular FIFOs. The read unit’s evaluation logic block drives the input_select port of the multiplexer. The output of the multiplexer is propagated to the execute unit. In the example of Figure 3.5a, we first read a token from port 2, then a token from port 3, and then four tokens from port 1, as indicated by the sequence below the input_select port.
Figure 3.5b depicts the situation where port 1 is connected to a sticky FIFO. The output of the multiplexer is both propagated to the execute unit and written into register $R$. The output of register $R$ is an additional input to the multiplexer. This additional input is selected when input_select is set to zero. This is illustrated by the sequence below the input_select port. We first read a token from port 2, then a token from port 3, and then a token from port 1. Then, input_select is set to zero which means we reuse the token read from port 1 that is still in $R$. As a result, the process writing to port 1 has to write the token only once.
Since the register is connected to the output of the multiplexer, it also stores tokens read from other ports, which can be connected to any type of channel. However, tokens from non-sticky FIFOs are never read from the register, since the semantics of a sticky FIFO ensure that a regular read access is always performed before the token in the register is reused. For the example of Figure 3.5b this means that a zero in the input_select sequence is always preceded by a one, potentially with more zeros in between. Therefore, we do not need a separate register for each sticky FIFO port, but use a single register connected to the multiplexer output.
a) Port 1 connects to a regular FIFO.
b) Port 1 connects to a sticky FIFO.
Figure 3.5: Read multiplexer architecture.
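The behaviour of the sticky read multiplexer of Figure 3.5b can be mimicked with the following cycle-level C sketch. It is a behavioural illustration under the assumptions stated in the comments, not the generated RTL; all identifiers are ours.

```c
#include <stdio.h>

#define NUM_PORTS 3

/* Cycle-level model of a sticky read multiplexer: input_select = 1..NUM_PORTS
 * selects a FIFO port and also latches the token in register R;
 * input_select = 0 replays the token held in R (sticky reuse). */
typedef struct {
    int R;   /* register at the multiplexer output */
} sticky_mux_t;

static int sticky_mux_read(sticky_mux_t *m, int input_select,
                           const int fifo_head[NUM_PORTS])
{
    int token;
    if (input_select == 0) {
        token = m->R;                        /* reuse token from register */
    } else {
        token = fifo_head[input_select - 1]; /* regular FIFO read         */
        m->R = token;                        /* always latch the token    */
    }
    return token;                            /* forwarded to execute unit */
}

int main(void)
{
    sticky_mux_t mux = { .R = 0 };
    int fifo_head[NUM_PORTS] = { 11, 22, 33 };      /* example tokens     */
    int input_select_seq[] = { 2, 3, 1, 0, 0, 1 };  /* cf. Figure 3.5b    */

    for (unsigned i = 0; i < 6; i++)
        printf("cycle %u: token %d\n", i,
               sticky_mux_read(&mux, input_select_seq[i], fifo_head));
    return 0;
}
```

Note that, as in the text, the register is updated on every regular read, but its content is only ever replayed for the sticky port, because a zero in input_select is always preceded by a read of that port.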
### 3.5 Evaluation Logic Optimizations
The main purpose of a LAURA processor is to route tokens from different process ports to the IP core during the appropriate process iterations. The evaluation logic blocks of a LAURA processor select the process ports that are accessed during a given iteration. The evaluation logic is driven by a set of cascaded counters that iterate through the points of the process iteration domain. At each iteration point, an expression is evaluated for each process port. When the expression evaluates to true, the port is accessed in the current iteration. The result of the evaluation is forwarded to the read multiplexer or write demultiplexer of the LAURA processor. In Figure 3.6, we illustrate the internal structure of the evaluation logic by considering the read unit’s evaluation logic of Figure 2.12 in more detail. Only one counter is present, because the domain of the process is one-dimensional. The evaluation logic contains an expression for each of the two input ports. Port 1 is accessed during the first five iteration points, as denoted by the bit string in the right part of Figure 3.6. Port 2 is accessed during the remaining five iteration points.
We have identified two problems with the evaluation logic of a LAURA processor. First, the evaluation logic may affect the maximum achievable clock frequency of a LAURA processor, as the expressions become part of the critical path. Second, expressions containing, for example, \( \text{max} \) or \( \text{div} \) operators are nontrivial to implement. These problems become apparent when considering the scheduling transformation discussed in Section 5.1.4, as illustrated in, for example, Figure 5.11.
We address the first problem by pipelining the evaluation logic, as discussed in Section 3.5.1. We address the second problem by implementing the evaluation logic using ROM tables, as discussed in Section 3.5.2.

3.5.1 Pipelined Evaluation Logic
To achieve a higher clock frequency, we break long combinational paths into shorter combinational paths that are connected by registers. In Figure 3.7, we illustrate this for the expression $i + j < 5$. Without pipelining, the maximum combinational path length is two, because the comparison is connected directly to the addition. In Figure 3.7b, we insert a register between the addition and the comparison. As a result, the maximum combinational path length is reduced to one and therefore the clock cycle period of this circuit can be decreased. However, the evaluation of the expression now takes two clock cycles. Only if subsequent evaluations can execute in an overlapped fashion can a throughput of one evaluation per clock cycle be sustained at a clock frequency that is higher than the original clock frequency.
The advantage of this solution is that the maximum clock frequency of a LAURA node can be increased at the expense of only a small number of registers. A disadvantage is that deciding the number and insertion points of the registers is a non-trivial task. Moreover, control dependencies inside the LAURA model and control dependencies between LAURA processors and other processing or communication components of a system do not allow for unlimited insertion of registers. We have found that pipelining the evaluation logic by one level is still possible.
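The effect of inserting one pipeline register between the adder and the comparator of Figure 3.7 can be illustrated with the following behavioural C sketch (illustrative only, not generated HDL; identifiers are ours): each result becomes available one cycle after its inputs, while a new evaluation can be started every cycle.

```c
#include <stdbool.h>
#include <stdio.h>

/* One pipeline register between the adder and the comparator, modelling
 * Figure 3.7b: stage 1 computes i + j, stage 2 compares the registered
 * sum with the constant 5. */
typedef struct {
    int  sum_reg;     /* register between addition and comparison */
    bool sum_valid;   /* whether sum_reg holds a value yet         */
} eval_pipe_t;

/* Advance the pipeline by one clock cycle: the result produced in this
 * cycle corresponds to the inputs fed in the previous cycle. */
static void eval_clock(eval_pipe_t *p, int i, int j,
                       bool *result, bool *result_valid)
{
    *result_valid = p->sum_valid;
    *result = p->sum_valid && (p->sum_reg < 5);  /* stage 2 */
    p->sum_reg = i + j;                          /* stage 1 */
    p->sum_valid = true;
}

int main(void)
{
    eval_pipe_t pipe = { 0, false };
    for (int i = 0; i < 4; i++) {
        bool r, v;
        eval_clock(&pipe, i, i, &r, &v);  /* feed (i, j) = (i, i) each cycle */
        if (v)
            printf("result for previous input: %d\n", r);
    }
    return 0;
}
```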
3.5.2 ROM-Based Evaluation Logic
To implement any non-parametric evaluation logic, we can always resort to a table-based implementation. We obtain this table by evaluating all expressions at compile time and storing the results in a Read-Only Memory (ROM). This technique has already been presented by Derrien et al. [DTZ+05], but was not available in the Daedalus design flow. Derrien et al. already found that ROM-based evaluation logic is more expensive in terms of resources than expression-based evaluation logic. When realizing designs, we therefore favor expression-based evaluation logic, and only use ROM-based evaluation logic when expression-based evaluation logic requires operators such as $\text{max}$ and $\text{div}$, as these operators are not trivial to implement in RTL. Within Daedalus, we can select per processor whether to use expression-based or ROM-based evaluation logic.
For each iteration in the process domain, the ROM contains a word that specifies which ports need to be accessed. In a straightforward implementation of ROM-based evaluation logic, all port selection signals for each iteration of the process domain are stored in a table $E$. For a read or write unit of a process $p$ connected to $n$ ports, such a table $E$ requires
$$n \cdot |D_p| \tag{3.1}$$
bits, where $|D_p|$ is the cardinality of $p$'s process domain. However, many streaming applications exhibit repeating patterns in the ports accessed during subsequent iterations. Like [DTZ+05], we compress such repetition by applying a run-length encoding on the ROM data. This requires an additional table $R$ containing the repetition count of each word in table $E$.
In Figure 3.8, we show the read unit’s evaluation logic of Figure 2.12 implemented using ROM containing run-length encoded port selection patterns. Contrary to Figure 3.6, the evaluation logic block now contains two ROMs instead of a set of expressions. The first ROM shown at the bottom of the evaluation logic block contains table $E$. A column in this ROM represents the ports that are selected during a set of subsequent iterations. For example, the first column contains the sequence $[1, 0]^T$, meaning the first port is selected while the second port is deselected. The second ROM shown at the top of the evaluation logic block contains table $R$. It specifies the amount of times each column in $E$ has to be repeated. In Figure 3.8, table $R$ contains $[4, 4]$, meaning that both columns in $E$ should be repeated four times. Thus, the first column is considered in total five times, and then the second column is considered five times. At run time, this results in port 1 being accessed five times, followed by port 2 being accessed five times, as illustrated by the bit strings at the right part of Figure 3.8.
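The decoding of the run-length encoded tables $E$ and $R$ of Figure 3.8 into per-iteration port-select bits can be sketched in C as follows (illustrative only; the table contents are taken from the example, the identifiers are ours).

```c
#include <stdio.h>

#define NUM_PORTS   2
#define NUM_ENTRIES 2

/* Run-length encoded evaluation logic of Figure 3.8:
 * E[k] holds the port-select bits of one pattern (one bit per port),
 * R[k] holds how often that pattern is repeated after its first use,
 * so pattern k is applied R[k] + 1 times in total. */
static const unsigned char E[NUM_ENTRIES][NUM_PORTS] = {
    { 1, 0 },   /* first five iterations: select port 1 */
    { 0, 1 },   /* last five iterations: select port 2  */
};
static const unsigned R[NUM_ENTRIES] = { 4, 4 };

int main(void)
{
    unsigned iteration = 0;
    for (unsigned k = 0; k < NUM_ENTRIES; k++) {
        for (unsigned rep = 0; rep <= R[k]; rep++, iteration++) {
            printf("iteration %u: port select =", iteration);
            for (unsigned p = 0; p < NUM_PORTS; p++)
                printf(" %u", E[k][p]);
            printf("\n");
        }
    }
    return 0;
}
```

Running the sketch produces the two bit strings of Figure 3.8: port 1 is selected during the first five iterations and port 2 during the remaining five.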
The resource cost of a compressed ROM-based evaluation logic block mainly depends on the sizes of tables $E$ and $R$. The size of $E$ depends on the number of entries and the number of ports. The size of $R$ depends on the number of entries and the number of bits required to store the largest repetition count occurring in $R$. This

Chapter 3. Synthesizing PPNs
| Process - Unit | $n$ | $\lvert D_p\rvert$ | $\lvert R\rvert$ | $\max(R)$ | Uncompressed ROM size (bits) | Compressed ROM size (bits) | Size change (%) |
|---------------|-----|--------------------|------------------|-----------|------------------------------|----------------------------|-----------------|
| zero-Wr | 2 | 28 | 13 | 5 | 56 | 65 | +16 |
| read-Wr | 2 | 147 | 42 | 5 | 294 | 210 | -29 |
| vectorize-Rd | 4 | 147 | 42 | 5 | 588 | 294 | -50 |
| vectorize-Wr | 3 | 147 | 42 | 5 | 441 | 252 | -50 |
| rotate-Rd | 5 | 441 | 231 | 4 | 2205 | 1848 | -16 |
| rotate-Wr | 4 | 441 | 212 | 4 | 1764 | 1484 | -16 |
| sink-Rd | 2 | 28 | 13 | 5 | 56 | 65 | +16 |

Table 3.1: Individual ROM sizes for QR decomposition with $K = 21, N = 7$.
yields a total ROM size of
$$|R| \cdot n + |R| \cdot w \tag{3.2}$$
bits, where $n$ is again the number of ports and $w = \lceil \log_2 \max(R) \rceil$. The size of a compressed evaluation logic block may be larger than the size of an uncompressed evaluation logic block in case
$$n \cdot |D_p| < |R| \cdot n + |R| \cdot w. \tag{3.3}$$
To assess whether this occurs in practice, we consider the QR decomposition application which exhibits complex port selection patterns that reduce compression effectiveness.
In Table 3.1, we show statistics for the individual ROMs of the five processes constituting a QR decomposition application. For example, the third row corresponds to the read unit of the vectorize process. An uncompressed ROM for the vectorize read unit requires $4 \cdot 147$ bits according to Equation (3.1). The compressed ROM requires $42 \cdot 4 + 42 \cdot \lceil \log_2(5) \rceil$ bits according to Equation (3.2). Applying the compression technique to the “zero” and “sink” processes results in ROM sizes that are larger than the sizes of their uncompressed counterparts. This can be attributed to the small domain sizes of these processes. Because each pattern is repeated at most twice, the overhead of table $R$ outweighs the benefits of a smaller number of entries in $E$.
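The numbers in this example can be reproduced by applying Equations (3.1) and (3.2) to the vectorize read unit of Table 3.1, as in the following C snippet (a sanity check only, not part of the tool flow).

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* vectorize read unit, Table 3.1 */
    const unsigned n    = 4;     /* number of ports                 */
    const unsigned Dp   = 147;   /* |D_p|, process domain size      */
    const unsigned Rlen = 42;    /* |R|, number of RLE entries      */
    const unsigned maxR = 5;     /* largest repetition count        */

    unsigned w = (unsigned)ceil(log2((double)maxR));  /* bits per entry of R */
    unsigned uncompressed = n * Dp;                   /* Equation (3.1)      */
    unsigned compressed   = Rlen * n + Rlen * w;      /* Equation (3.2)      */

    printf("uncompressed: %u bits, compressed: %u bits\n",
           uncompressed, compressed);                 /* 588 and 294 bits    */
    return 0;
}
```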
In Table 3.2, we show the total ROM size with and without compression for several instances of the QR decomposition application. In all cases except the first, the compression technique leads to a reduction of the memory cost. For larger values of the parameters $K$ and $N$, the iteration domain sizes of the processes increase. This results in a larger reduction, because the number of additional bits required to store higher repetition counts increases more slowly than the number of additional points in the iteration domain.

Table 3.2: Total ROM sizes for different QR decomposition instances.
The worst case for which run-length encoding does not yield any gains is when alternating between two ports. In such a case, the ROM size approaches \( n \cdot |D_p| \) bits. The cost of repetition count table \( R \) should be added to this, yielding a “compressed” ROM whose size may exceed the size of the uncompressed ROM. However, alternating port selection patterns can often be handled easily using LAURA’s conventional expression-based evaluation logic. Therefore, we do not need a ROM-based solution for such cases.
3.5.3 Related Work
For all case studies conducted in this dissertation (cf. Chapter 5), the evaluation logic could be implemented successfully in either a pipelined or a ROM-based fashion. However, for applications demanding a clock frequency close to the platform limits, neither a pipelined nor a ROM-based evaluation logic implementation may suffice. In particular, the application studied in Chapter 6 demands a high clock frequency of 225 MHz, which neither pipelined nor ROM-based evaluation logic can provide. In such a case, one may leverage existing work on control generation. However, this may require non-trivial integration efforts, because the architectures in which the related works are used differ from the LAURA architecture. We present three alternative works that may be considered when further improving the LAURA evaluation logic components.
The CLooGVHDL tool generates a VHDL controller which traverses the points of a set of polytopes according to a predefined order [DBC+07]. The controller consists of a set of communicating automata that iterate over the dimensions of the polytope.
By placing registers between the automata, the maximum achievable clock frequency can be increased. Parallel execution of multiple instances of statements was left as future work. This would be of interest to us, since such parallel execution occurs in the LAURA architecture.
PARO attempts to reduce the resource cost of control logic by identifying counters and control signals that can be shared across different processors [DHRT07]. This approach was shown to lower resource cost particularly for partitioned applications, since the different partitions still have parts in common. However, the efficacy of this is limited for PPNs implemented using LAURA processors because of the globally asynchronous nature of the PPN model. That is, although two processes may share the same process domain and thus have similar control logic, they do not necessarily traverse their domains at the same pace.
Another alternative for the evaluation logic components of a LAURA processor is to implement them using existing HLS tools such as AutoESL [Xil11] or SynphonyC [Syn10]. This has the advantage that a target clock frequency can be specified. The HLS tool then produces a pipelined controller that is optimized for the specified clock frequency. However, we found that in practice such tools have difficulties with the decoupled read and write units of a LAURA processor [HK09]. For example, stalling the generated controllers on a blocking read condition was not fully supported at the time of our investigation. When such implementation problems have been resolved by HLS tool vendors, using an HLS tool to generate the evaluation logic might be the most favorable alternative solution.
### 3.6 Out-of-Order Communication
Ideally, a producer process produces tokens in the same order as the consumer process consumes them. Such in-order communication allows the channel from producer to consumer to be realized using a relatively inexpensive FIFO buffer. However, the PPNs of some applications do not exhibit solely in-order communication, as explained in Section 2.3.1. On some channels, the order in which tokens are produced by the producer process may differ from the order in which tokens are consumed by the consumer process. Such communication is known as out-of-order communication. Out-of-order channels cannot be realized using FIFO buffers, because the token order needs to be taken into account to guarantee functional correctness. Instead, more sophisticated interconnects are required, such as reordering buffers. Reordering buffers store incoming tokens in order in a private memory and contain reordering logic which outputs the stored tokens in the order required by the consumer. Alternatively, circular buffers with overlapping windows can realize out-of-order communication [BBS09]. This solution requires modifications to the producer and consumer process synchronization primitives. The impact on performance and resource cost of these modifications, and the performance and resource cost of the buffer itself, is unclear, as no RTL implementation case study has been conducted yet.
```c
for (i=1; i<=4; i++) {
    for (j=1; j<=3; j++) {
        y[i] = F(y[i]);
    }
}
```
Figure 3.9: Two executions of a program with different communication behavior.
In Figure 3.9, we show an example C program and two valid executions of this program. In the first execution, shown in Figure 3.9b, we follow the execution order of the original program. That is, we first execute \((i, j) = (1, 1)\), followed by \((1, 2)\), and so on. The relative order of iteration executions is illustrated by the numbers inside the points of Figure 3.9b. Only when \(i = 4\) are tokens written to channel \(CH2\). Channel \(CH2\) receives tokens in the order \(y[1], y[2], y[3]\). Another valid execution, in which the inner loop is traversed in the reverse direction, is shown in Figure 3.9c. As a result, channel \(CH2\) receives tokens in the order \(y[3], y[2], y[1]\), which is different from the order shown in Figure 3.9b. If we assume that \(CH2\)'s consumer process is not modified, the tokens would arrive in reverse order if \(CH2\) were implemented using a FIFO buffer. To respect the correct token order, channel \(CH2\) has to be implemented using a reordering buffer.
Turjan et al. have proposed different realizations of reordering buffers, such as linear, pseudo-polynomial, and Content Addressable Memory (CAM) based implementations [TKD03]. The authors showed that these reordering buffer designs have a considerable negative impact on performance and resource usage. For example, read and write operations of a CAM implementation take four and two clock cycles [ZTKD02], respectively, while read and write operations on a regular FIFO take only one clock cycle.
To avoid counteracting the benefits of an application transformation because of possible reordering communication, we have developed a new reordering buffer [HK12]. The primary difference with previous work is that read and write operations now take
only one clock cycle. This means that replacing a FIFO buffer with a reordering buffer increases resource usage, but does not introduce additional delay cycles.
Our reordering buffer is composed of a Write Address Generator (WAG), a Read Address Generator (RAG), and a private memory. The memory is dual-ported, with one port being addressed by the WAG and the other port being addressed by the RAG. The WAG and RAG both contain a set of counters which iterate through the domains associated with the channel. These counters are used by the address generation logic to compute the next write and read addresses. To avoid delay cycles, the counters and address generation logic are implemented in a pipelined fashion. To minimize the latency of the address generation logic, we employ a linear addressing scheme. This addressing scheme is based on the conventional linearization of an $n$-dimensional array into a 1-dimensional array. As such, the resulting address expressions are linear polynomials that can be realized efficiently in hardware.
The interface of the reordering buffer resembles a point-to-point FIFO buffer interface. This allows straightforward integration of reordering buffers in ESPAM-generated PPN implementations. That is, when a transformation introduces out-of-order communication, we do not have to modify the interfaces of the processes involved in the out-of-order communication. The interface is depicted in Figure 3.10. The outgoing slave interface exposes an output data bus, an exist signal to indicate if a token is available, and a read signal to acknowledge a read operation. The incoming master interface exposes an input data bus, a full signal to block write operations when the buffer is not ready to accept them, and a write signal to acknowledge a write operation.
We illustrate the memory organization of our reordering buffer at the bottom part of Figure 3.10. In the bottom left, we show a producer domain consisting of four points \((0, 0), (0, 1), (0, 2),\) and \((1, 2)\). The producer produces four tokens in the order A, B, C, D. We store these tokens according to a linear addressing scheme at address
\[
\text{wAddr}(i_p, j_p) = i_p + 2 \cdot j_p. \tag{3.4}
\]
The slot for each token is shown in the memory of Figure 3.10. For example, token C is produced in iteration \((0, 2)\) and is therefore stored at address \(04\). Because of the linear addressing scheme, some addresses may remain unused for non-rectangular domains. In our example, this occurs for addresses \(01\) and \(03\). The consumer domain shown on the bottom right consumes the four tokens in the order C, D, B, A. To retrieve these tokens in the correct order from the memory, we compute
\[
\text{rAddr}(i_c, j_c) = \text{wAddr}(\text{M}_{p \rightarrow c}(i_c, j_c)) \tag{3.5}
\]
for each point in the consumer domain. That is, we first apply the channel relation \(\text{M}_{p \rightarrow c}\) as found by the PN compiler. This gives the point \((i_p, j_p)\) in the producer domain that corresponds to the point \((i_c, j_c)\) in the consumer domain. We then compute \(\text{wAddr}(i_p, j_p)\) to obtain the address from which the token should be read. For the example of Figure 3.10, PN finds the channel relation
\[
\text{M}_{p \rightarrow c}(i_c, j_c) = \begin{bmatrix} i_c \\ 2 - j_c \end{bmatrix}. \tag{3.6}
\]
Therefore, the read address function becomes
\[
\text{rAddr}(i_c, j_c) = i_c + 2 \cdot (2 - j_c). \tag{3.7}
\]
For token C, which is consumed in iteration \((0, 0)\), the \(\text{rAddr}\) function yields address \(04\) which is the same address that was computed by the WAG. However, a token may not have been written by the producer yet. For example, token C may not be available yet at address \(04\). Therefore, we introduce an additional valid bit for each memory location. The valid bit is set once a token has been written to its address. To comply with the blocking read semantics of the PPN model, the RAG blocks until the token corresponding to the current consumer iteration is written. In the memory of Figure 3.10, tokens A and B have been written, as indicated by the “\(V\)”s, whereas tokens C and D have not been written yet, as indicated by the “.”s.
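The address computations of Equations (3.4) to (3.7) for the example of Figure 3.10 can be reproduced with the following C sketch. It models only the address functions and the valid bits, not the WAG/RAG pipelines; the consumer iteration order used below is inferred from the stated consumption order C, D, B, A and the channel relation, since the consumer domain of Figure 3.10 is not reproduced here.

```c
#include <stdbool.h>
#include <stdio.h>

#define MEM_SIZE 6

/* Linearized write address for a producer iteration (i_p, j_p),
 * Equation (3.4). */
static unsigned wAddr(unsigned ip, unsigned jp) { return ip + 2 * jp; }

/* Read address for a consumer iteration (i_c, j_c): apply the channel
 * relation M_{p->c}(i_c, j_c) = (i_c, 2 - j_c) and reuse wAddr,
 * Equations (3.5) to (3.7). */
static unsigned rAddr(unsigned ic, unsigned jc) { return wAddr(ic, 2 - jc); }

int main(void)
{
    char mem[MEM_SIZE];
    bool valid[MEM_SIZE] = { false };

    /* Producer writes tokens A, B, C, D in iterations
     * (0,0), (0,1), (0,2), (1,2). */
    const unsigned prod[4][2] = { {0,0}, {0,1}, {0,2}, {1,2} };
    const char tok[4] = { 'A', 'B', 'C', 'D' };
    for (unsigned k = 0; k < 4; k++) {
        unsigned a = wAddr(prod[k][0], prod[k][1]);
        mem[a] = tok[k];
        valid[a] = true;                 /* valid bit set on write */
    }

    /* Consumer iterations chosen so that the read order is C, D, B, A.
     * A real RAG would block until valid[a] is set (blocking reads). */
    const unsigned cons[4][2] = { {0,0}, {1,0}, {0,1}, {0,2} };
    for (unsigned k = 0; k < 4; k++) {
        unsigned a = rAddr(cons[k][0], cons[k][1]);
        if (valid[a])
            printf("consumer iteration (%u,%u) reads '%c' from address %u\n",
                   cons[k][0], cons[k][1], mem[a], a);
    }
    return 0;
}
```

For consumer iteration (0,0) the sketch yields address 4 and token C, matching the example in the text; addresses 1 and 3 remain unused because the producer domain is not rectangular.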
3.7 Conclusion and Summary
To realize the complete forward synthesis flow from a C specification to an FPGA implementation (cf. Figure 1.3), we have presented four extensions to the LAURA methodology in this chapter. These extensions include a more flexible characterization of the performance and resource cost aspects of IP cores; support for novel optimizations of the PNGEN tool flow; architectural optimizations to improve the maximum clock frequency and to handle complex iteration domains; and a novel reordering buffer implementation that has a lower performance penalty than previous reordering buffer implementations. These extensions enable the Daedalus tool flow to support transformations and cope with industrially relevant applications, as we show in the next chapters.
---
D5b
SAL multimodal generation component optimised for real-time behaviour
Date: 24 September 2010
Dissemination level: Public
<table>
<thead>
<tr>
<th><strong>ICT project contract no.</strong></th>
<th>211486</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Project title</strong></td>
<td>SEMAINE Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression</td>
</tr>
<tr>
<td>Contractual date of delivery</td>
<td>31 August 2010</td>
</tr>
<tr>
<td><strong>Actual date of delivery</strong></td>
<td>24 September 2010</td>
</tr>
<tr>
<td><strong>Deliverable number</strong></td>
<td>D5b</td>
</tr>
<tr>
<td><strong>Deliverable title</strong></td>
<td>SAL multimodal generation component optimised for real-time behaviour</td>
</tr>
<tr>
<td><strong>Type</strong></td>
<td>Demonstrator</td>
</tr>
<tr>
<td><strong>Number of pages</strong></td>
<td>15</td>
</tr>
<tr>
<td><strong>WP contributing to the deliverable</strong></td>
<td>WP 5</td>
</tr>
<tr>
<td><strong>Responsible for task</strong></td>
<td>Catherine Pelachaud (<a href="mailto:[email protected]">[email protected]</a>)</td>
</tr>
<tr>
<td><strong>Author(s)</strong></td>
<td>Elisabetta Bevacqua, Satish Pammi, Catherine Pelachaud, Marc Schröder, Etienne de Sevin.</td>
</tr>
<tr>
<td><strong>EC Project Officer</strong></td>
<td>Philippe Gelin</td>
</tr>
</tbody>
</table>
Table of Contents
1 Executive Summary
2 Functionality of the components
3 Quality assessment
3.1 Perceptual evaluation
3.2 Generation performance
4 License and availability
References
1 Executive Summary
Sensitive Artificial Listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, intend to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality, and attempt to drag the user towards their dominant emotion, through a combination of verbal and non-verbal expression.
This report is part of the series of reports describing the implementation of SAL in system SEMAINE-3.0. The software described, and the full set of reports, can be downloaded from http://semaine.opendfki.de/wiki/SEMAINE-3.0/.
This report describes the progress made in the various modules of the multimodal agent architecture. We also present the evaluation studies we conducted.
2 Functionality of the components
This section describes the functionality of the components in the SAL system. The possibilities to configure and reuse the components as parts of a research toolbox will be published as deliverable D7e in December 2010.
Figure 1 shows the architecture of the SEMAINE system that generates the agent's behaviour and the final visual output (the modules described here are written in bold). The whole architecture follows the SAIBA standard (Vilhjalmsson et al., 2007); it is modular and distributed. Hereafter, we present each component, highlighting the changes and improvements made in the last year.
2.1. Listener Intent Planner
The Listener Intent Planner module is in charge of the computation of the agent's behaviours while being a listener when conversing with a user. This component is able to generate two types of backchannel: response and mimicry. Response backchannels express what the listener thinks about the speaker's speech; we use Allwood's and Poggi's taxonomies of communicative functions of backchannels (Allwood et al., 1993, Poggi, 2007): understanding and attitudinal reactions (liking, accepting, agreeing, believing, being interested). Mimicry backchannels derive from the imitation of some behaviours performed by the speaker, such as facial expressions, head movements and so on. Mimicry of behaviours may happen when the interactants are fully engaged in an interaction (Lakin et al., 2003). The Listener Intent Planner also decides when a backchannel should be triggered, according to the speaker's acoustic and visual behaviour (Maatman et al., 2005, Ward and Tsukahara, 2000). The module has been presented in detail in the previous Deliverables.
The Listener Intent Planner was integrated in the SEMAINE architecture by connecting it with the input analysis applications. The Listener Intent Planner is implemented in the ListenerIntentPlanner component in the SEMAINE framework. It receives information from the Topics semaine.data.state.agent, semaine.data.state.user.behaviour, semaine.data.state.dialog, and semaine.data.state.context. Mimicry/response backchannels are sent as FML to the Topic semaine.data.action.candidate.function and mimicry as BML to the Topic semaine.data.action.candidate.behaviour.
**Progress in the Listener Intent Planner.** To determine which communicative functions the agent will transmit through a response backchannel, the system needs to know what the agent *thinks* about the user's speech, for example whether it agrees with, refuses, or likes the content of the message. This information, stored in the agent's mental state, is now computed by the dialogue manager and sent on the Topic semaine.data.state.agent. Recently we upgraded the Listener Intent Planner to receive and store the modifications of the agent's mental state. Moreover, new communicative functions have been added to the agent's mental state to introduce the backchannel vocalizations studied in WP3 (see Deliverable X). Since the listener functions are not explicitly defined in the source code but specified in an XML-based language, the set of communicative functions has been easily extended by adding the new functions to the XML file. The module then automatically creates the necessary internal structures to manage each function.
### 2.2. Listener Action Selection
The Action Selection (de Sevin and Pelachaud, 2009) receives all the candidate actions coming from the action proposers (Listener Intent Planner and Utterance Action Proposer). These candidate actions can be backchannels and utterances in FML or BML. The Action Selection receives information about turn-taking, the user's interest level (which is a good indicator of the success of the interaction (Peters et al., 2005)), the name of the character, and the player callbacks from the SEMAINE architecture by subscribing to Topics. All this information is used to compute the selection. The module has been presented in detail in previous Deliverables.
The Action Selection has been implemented in the Action Selection component in the SEMAINE framework. It receives candidate FMLs from the Topic semaine.data.action.candidate.function and BMLs from semaine.data.action.candidate.behaviour coming from the Action Proposers. It also uses information from the Topics semaine.data.state.agent, semaine.data.state.user.behaviour, semaine.data.state.dialog, semaine.data.state.context and semaine.callback.output. Selected FMLs are sent to the Topic semaine.data.action.selected.function and selected BMLs to the Topic semaine.data.action.selected.behaviour.
**Progress in the Listener Action Selection.** Recently this module has been modified to take into account the agent's personality in the selection process. In the SEMAINE project, four SAL agents are designed, each with their own personality traits. Poppy is outgoing and cheerful; Spike is aggressive and argumentative; Prudence is reliable and pragmatic; and Obadiah is pessimistic and gloomy (see Figure 2). We can place the four SAL agents in Eysenck's two-dimensional representation of personality (see Figure 2), whose dimensions are extroversion and neuroticism (emotional stability) (Eysenck, 1991).
The four SAL agents are placed in the dimensional space defined by Eysenck's representation of personality. Values for the mimicry tendency and the BC frequency are defined according to their position in this space. For example, Obadiah, who is pessimistic, performs few backchannels (-0.75) and only occasional mimicry (-0.6). We obtain these values for the four personalities (see Table 1). These agent characteristics affect both the type and the frequency of backchannel signals.
**Backchannel types.** Backchannel selection is event-based and is done in real time. The algorithm follows a two-step process: it first determines when to do a backchannel (trigger phase) and then selects which signal to display (selection phase). Only one action can be displayed by the ECA at a time, while the Listener Action Selection continuously receives candidate backchannels. When the ECA is already displaying a backchannel, no choices are made. The action selection algorithm waits until the display of the current backchannel is over before selecting another one to be displayed. The candidate backchannels received during this time are queued and used during the next selection pass. The Listener Action Selection receives potential backchannel signals to display from the trigger module. These signals are characterized by a priority level. This value depends on the personality description of the agent, more particularly on its degree of neuroticism (Eysenck and Eysenck, 1978). A highly emotionally stable agent shows more mimicry behaviours (Chartrand et al., 2005), while a highly emotionally unstable agent shows more responsive behaviours (McCroskey et al., 2001, Noor and Evans, 2003). The priority associated with each backchannel action is computed according to its type (mimicry or responsive) and to the agent's personality (degree of neuroticism).
**Backchannel frequency.** Based on a theoretical model (Eysenck, 1991), we establish a correlation between the extroversion dimension and the frequency of backchannels (Borkenau and Liebler, 1992). From the analysis of the videos of the SEMAINE corpus collected at QUB, we computed the backchannel frequencies: Poppy's BC frequency is 20% higher than Spike's, Spike's BC frequency is 50% higher than Prudence's, and Prudence's BC frequency is 30% higher than Obadiah's. The value of the frequency is deduced from our model (see section 3.2). For example, the value for Poppy (extravert) is 0.95, which means that the large majority of backchannels will be displayed (La France et al., 2004). On the other hand, the value for Obadiah (introvert) is -0.75, which means only 25% of the backchannels will be displayed (Smith et al., 1975). When the Listener Action Selection receives a potential backchannel (mimicry or response backchannel), it calculates a probability in order to determine whether the backchannel will be displayed or not, based on the agent's degree of extroversion. If not, the backchannel is not queued by the Listener Action Selection.
The Action Selection algorithm has been evaluated in a perceptive study reported in the Evaluation Section.
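As an illustration only, the probabilistic filtering described above could be realized along the lines of the following C sketch. The per-agent display probabilities are the examples quoted in the text (roughly 0.95 for Poppy and 0.25 for Obadiah); how these probabilities are derived from the extroversion-based frequency values is defined by the model referred to above and is not reproduced here, and the actual SEMAINE implementation may differ.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Decide whether a candidate backchannel is kept, given the agent's
 * display probability. The probabilities used in main() are the examples
 * quoted in the text; the mapping from the extroversion-based frequency
 * values to these probabilities is not modelled here. */
static int keep_backchannel(double display_probability)
{
    double r = (double)rand() / (double)RAND_MAX;   /* uniform in [0, 1] */
    return r < display_probability;
}

int main(void)
{
    srand((unsigned)time(NULL));

    const char  *agents[] = { "Poppy", "Obadiah" };
    const double prob[]   = { 0.95, 0.25 };

    for (int a = 0; a < 2; a++) {
        int kept = 0, candidates = 1000;
        for (int i = 0; i < candidates; i++)
            kept += keep_backchannel(prob[a]);
        printf("%s: kept %d of %d candidate backchannels\n",
               agents[a], kept, candidates);
    }
    return 0;
}
```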

**Figure 2.** Eysenck's two dimensional representation and our hypothesis of its implication on mimicry and number of backchannels. Example of deduction for Obadiah.
**Table 1.** Setting of BC type and frequency for the four SAL agents.
<table>
<thead>
<tr>
<th></th>
<th>Obadiah (pessimistic)</th>
<th>Poppy (outgoing)</th>
<th>Prudence (reliable)</th>
<th>Spike (aggressive)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BC type (mimicry)</td>
<td>-0.6</td>
<td>0.25</td>
<td>0.90</td>
<td>-0.85</td>
</tr>
<tr>
<td>BC frequency</td>
<td>-0.85</td>
<td>0.95</td>
<td>-0.5</td>
<td>0.55</td>
</tr>
</tbody>
</table>
### 2.3. Behaviour Planner
The Behaviour Planner takes as input both the agent's communicative intentions specified by the FML-APML language and some of the agent's characteristics. The main task of this component is to select, for each communicative intention to transmit, the adequate set of behaviours to display according to the agent's characteristics described in its baseline, that is its modality preference and its behaviour expressivity (Mancini and Pelachaud, 2008). The module has been presented in detail in the previous Deliverables.
The Behaviour Planner has been implemented in the BehaviourPlanner component in the SEMAINE framework. It receives FMLs from the Topic semaine.data.action.selected.speechpreprocessed. It also uses information from the Topics semaine.data.state.agent and semaine.data.state.context. BMLs are sent to the Topic semaine.data.synthesis.plan.
**Progress in the Behaviour Planner.** All possible sets of behaviours for a given communicative intention are defined in a lexicon. Recently we defined a lexicon for each character to take into account their different personalities and thus their different ways of communicating. The four lexicons have been created by analysing the videos collected in the SEMAINE database. The new lexicons allow the system to generate more varied and appropriate behaviour for the agent according to its personality traits.
We also introduced new communicative functions and consequently extended the lexicons to contain the behaviour set for each new function. In particular, new functions have been introduced for the agent in the role of the listener, such as antagonism and solidarity (see XXX).
Backchannel vocalizations have also been added to the lexicon. In recent work we evaluated multimodal backchannel signals (that is, signals composed of visual and vocal cues) to understand how users interpret them and how we can use them to enrich the agent's behaviour while listening. To introduce these vocal signals in our system we added a new modality to the lexicon, the “speech” modality. This modality specifies, for each listener communicative function, which vocalizations can be emitted together with other visual signals to transmit the given function. Figure 3 shows an example of the behaviour set for the listener communicative function “agreement” for the agent Prudence. The fourth signal belongs to the speech modality and specifies all the vocalizations Prudence can emit to transmit agreement. The signals are interchangeable, with a certain probability, as specified in the alternative tags.
<behaviorset name="backchannel-agreement">
<signals>
<signal id="1" name="head=head_nod" modality="head">
<alternative name="head=head_nod1" probability="0.3"/>
</signal>
<signal id="2" name="faceexp=prudence-agreement" modality="face">
<alternative name="faceexp=close-eyes" probability="0.2"/>
<alternative name="mouth=little_smile_open" probability="0.2"/>
</signal>
<signal id="3" name="gaze=look_at" modality="gaze"/>
<signal id="4" name="text" modality="speech" content="yes" intonation="rising" voicequality="tense" meaning="agreeing">
<alternative name="text" content="yes" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.2"/>
<alternative name="text" content="tsyeah" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.1"/>
<alternative name="text" content="tsright" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.1"/>
<alternative name="text" content="right" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.1"/>
<alternative name="text" content="alright" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.1"/>
<alternative name="text" content="yeah" intonation="rising" voicequality="breathy" meaning="agreeing" probability="0.1"/>
<alternative name="text" content="yeah_that_s_true" intonation="rising" voicequality="modal" meaning="agreeing" probability="0.1"/>
</signal>
</signals>
<constraints>
<core>
<item id="1"/>
</core>
<rules>
<implication>
<ifpresent id="2"/>
<thenpresent id="3"/>
</implication>
</rules>
</constraints>
</behaviorset>
Figure 3. Example of behaviour set of the listener's communicative function for Prudence.
2.4. Behaviour Realiser
The Behaviour Realiser is implemented in the BehaviourRealizer component in the SEMAINE framework. It receives BMLs from the topics semaine.data.synthesis.plan.speechtimings and semaine.data.action.selected.behaviour. FAPs are sent to the Topic semaine.data.synthesis.lowlevel.video.FAP, BAPs to the Topic semaine.data.synthesis.lowlevel.video.BAP and commands to the Topic semaine.data.synthesis.lowlevel.video.command.
2.5. Secondary path: Prepare and trigger branch
A fundamental property of a real-time system is its speed of reaction. Due to the heavy amount of computation and physical limitations, our system is not always able to generate a response within an acceptable interval of time. In order to speed up the system, we duplicate both the Behaviour Planner and the Behaviour Realizer. Together they create a secondary path that works in parallel with the first one to generate suitable actions. These actions are stored in a queue and played when the agent is due to play an action but the first path has not yet finished generating the requested one.
The details of this mechanism are described in D1d, sections 2.3 and 2.4.
2.6. FAP-BAP Player
The FAP-BAP Player receives the animation and plays it in a graphic window. Facial and body configurations are described through FAP and BAP frames, respectively. The Player is based on the OGRE graphics engine, which can use either DirectX9 technology or OpenGL libraries. Within the SEMAINE project four different virtual agents can be displayed in the graphic window; the user can decide which agent she wants to interact with. These agents are loaded dynamically, which allows the system to switch easily from one character to another when needed. Since each head is quite heavy (12300 triangles per mesh on average), the four agents are loaded into memory when the system is launched. They are shown or hidden in the virtual world as needed. In this way the selected character can be displayed rapidly. The Player sends the animation directly to the agent that is actually displayed. Callbacks about FAPs, BAPs and audio are sent to the Topic semaine.callback.output. The FAP-BAP Player is implemented in the OgrePlayer component in the SEMAINE framework. It receives FAPs from the Topic semaine.data.synthesis.lowlevel.video.FAP, BAPs from the Topic semaine.data.synthesis.lowlevel.video.BAP and audio files from the Topic semaine.data.synthesis.lowlevel.video.audio. It also uses information from the Topic semaine.data.state.context.
Progress in the FAP-BAP Player. To improve the Player, two queues have been implemented. The first queue, called the “waiting queue”, is used to store data about animations that have been computed by the previous modules of the system. These animations are moved into the second queue, called the “ready queue”, when the minimum set of needed information (such as FAPs, BAPs, audio and start time) has been received. The animations in the ready queue are played when their start time corresponds to the clock of the system. The animations in the waiting queue are ordered according to a priority value and do not remain in the queue indefinitely. A lifetime is associated with each action, which is discarded as soon as its lifetime is exceeded.
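A minimal sketch of this two-queue mechanism is given below, assuming illustrative data structures and field names that are not taken from the actual Player code: an animation waits until its data is complete, is then moved to the ready queue, is played when the system clock reaches its start time, and is discarded if it exceeds its lifetime while waiting.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model (not the actual Player code) of the two queues:
 * an animation waits until its data is complete, is then promoted to the
 * ready queue, and is played when the clock reaches its start time;
 * stale waiting entries are discarded. */
typedef struct {
    int    id;
    bool   complete;     /* FAPs, BAPs, audio and start time received */
    double start_time;   /* in seconds                                */
    double life_time;    /* maximum time allowed in the waiting queue */
    double age;          /* time spent in the waiting queue           */
} animation_t;

static void step(animation_t *a, double dt, double clock,
                 bool *in_waiting, bool *in_ready)
{
    if (*in_waiting) {
        a->age += dt;
        if (a->age > a->life_time) {
            *in_waiting = false;                 /* discard stale entry  */
            printf("animation %d discarded\n", a->id);
        } else if (a->complete) {
            *in_waiting = false;                 /* promote to ready     */
            *in_ready = true;
        }
    }
    if (*in_ready && clock >= a->start_time) {
        *in_ready = false;
        printf("animation %d played at t=%.2f\n", a->id, clock);
    }
}

int main(void)
{
    animation_t anim = { 1, true, 0.5, 2.0, 0.0 };
    bool waiting = true, ready = false;
    for (double t = 0.0; t <= 1.0; t += 0.25)
        step(&anim, 0.25, t, &waiting, &ready);
    return 0;
}
```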
3 Quality assessment
This section describes an assessment of the quality on the technical and component level. An assessment of the psychological quality of interactions with the overall system will be published separately, as deliverable report D6d, in December 2010.
3.1 Perceptual evaluation
To assess the quality of our components we performed perceptual evaluations. For the Action Selection module we tested how backchannels influence the perception of the agent's personality. We evaluated our model with two hypotheses: backchannel frequency is linked with the extroversion dimension and backchannel type with the neuroticism dimension. The results show that the first hypothesis is partially verified for outgoing and reliable personalities. The second hypothesis is verified only for the pessimistic personality. For this hypothesis, the terms used for personality need to be clarified in order for the participants to understand them correctly. Although more evaluation tests are needed, the selection of backchannel type and frequency by the Listener Action Selection according to our model helps to express some of the personalities displayed by the ECA. Further information on this evaluation study and its results can be found in (de Sevin et al., 2010).
Another evaluation has been conducted in order to build the lexicons. This study aims at defining a set of multimodal backchannel signals and tests how they are interpreted by subjects when displayed context-free, that is, without knowing the discursive context of the speaker's speech. By multimodal signals we mean signals that contain both visual and vocal cues. Through the evaluation we also saw that the meaning conveyed by a multimodal backchannel cannot simply be inferred from the meanings of the individual visual and vocal cues that compose it, and that certain signals, such as those composed of visual and vocal cues with strongly opposite meanings, are highly dependent on the content of the speaker's speech. Details of this perceptual study are provided in (Bevacqua et al., 2010).
3.2 Generation performance
The speed improvement due to the secondary prepare-and-trigger branch for generating agent behaviour can be measured in the live system. Table 1 shows measures from a number of test runs of the full audio-visual live system in different test configurations.
<table>
<thead>
<tr>
<th>Test setup</th>
<th>Direct branch</th>
<th>Prepare-and-trigger branch</th>
</tr>
</thead>
<tbody>
<tr>
<td>Distributed system, no cache in MARY TTS</td>
<td>687 ms (n=82)</td>
<td>7 ms (n=16)</td>
</tr>
<tr>
<td>Distributed system, cache enabled in MARY TTS</td>
<td>201 ms (n=80)</td>
<td>5 ms (n=22)</td>
</tr>
<tr>
<td>Single PC, no cache in MARY TTS</td>
<td>846 ms (n=46)</td>
<td>7 ms (n=22)</td>
</tr>
<tr>
<td>Single PC, cache enabled in MARY TTS</td>
<td>407 ms (n=41)</td>
<td>18 ms (n=39)</td>
</tr>
</tbody>
</table>
*Table 1: Median time-to-animation in various system setups, in milliseconds.*
A number of things can be seen from Table 1. First, depending on the course of the dialogue, the number of utterances that are generated using the direct branch is several times larger than the number of utterances that are generated using the prepare-and-trigger branch. This is to some extent expected behaviour, because the user's behaviour may or may not be in line with the system's predictions, so that any prepared utterances may or may not be appropriate.
Second, the TTS cache in the MARY TTS system reduces the time in the direct branch to less than half. The remaining time is mainly spent preparing the visual animation.
Third, distributing the load across two machines (one running the dialogue, speech synthesis and speech input components, the other running video input and output) also helps speed up the generation of output. The main reason for this seems to be that all processes are rather resource intensive, so that the single PC is under heavy load.
Finally, and most importantly for this investigation, when the prepare-and-trigger branch can be used for generating a system output, this dramatically improves the time from the decision to generate a behaviour to its start. Latencies drop from roughly half a second to roughly 10 ms. This is indeed much more responsive.
In conclusion, the prepare-and-trigger architecture achieves its aim of improving the reaction speed of the system very well for those utterances that can be prepared ahead of time. However, such utterances still account for less than half of the total. It is worth investigating whether the number of utterances that can be prepared ahead of time can be increased further.
4 License and availability
MARY TTS 4.1.1 is available from http://mary.dfki.de. The TTS system is licensed under the Lesser GNU General Public License, LGPL, http://www.gnu.org/licenses/lgpl-3.0-standalone.html. The speech synthesis voices dfki-prudence, dfki-poppy, dfki-spike and dfki-obadiah can be installed using the MARY component installer which is part of the MARY TTS installation process. The voices are distributed under the terms of the Creative Commons Attribution-NoDerivatives license, http://mary.dfki.de/download/by-nd-3.0.html.
MARY TTS and the four synthesis voices are part of the SEMAINE 3.0 system release. Greta is available from http://www.tsi.enst.fr/~pelachau/Greta/. It is licensed under the GPL. Greta and the four facial models are part of the SEMAINE 3.0 system release.
References
Challenges of Acquisition Planning: Two Large System Acquisition Experiences
Onur Demirörs
Assoc.Prof.
[email protected]
Çiğdem Gencel
Dr.
[email protected]
Ayça Tarhan
[email protected]
1 Middle East Technical University, Informatics Institute
2 The Bilgi Group Software Research, Education, and Consultancy Ltd., Ankara, Turkey
Abstract
Acquisition of software-intensive systems demands significant work prior to establishing the contract. One of the significant challenges of the pre-contract phase is acquisition planning. Activities such as estimating the required effort and determining proper structures for the acquisition project require unique approaches. In this study, experiences gained from two large innovative military applications are presented.
Keywords: software acquisition, management, business process modeling, information system (IS)
1. Introduction
The software community has developed unique tools and techniques, such as size, effort, and cost estimation methods, to address the challenges of managing software development projects. These tools and techniques are utilized for software development phases starting with the software requirements specification. The phases prior to the software requirements specification, on the other hand, frequently rely on generic acquisition practices for management.
Pre-development phases include tasks such as requirements elicitation and cost estimation for software systems. These tasks are critical for the success of the project and require software engineering expertise. Moreover, they require significant effort and unprecedented organizational structures. Disappointingly, neither generic tools and techniques nor those developed for software project management can be utilized for many tasks of the pre-development phases. A good example is size estimation: to plan the pre-development phases we need to estimate size, yet the metrics we use relatively early in the life-cycle, such as Function Points, cannot be applied because the requirements are not determined yet.
During the last three years, we have implemented a business process model based approach together with a unique set of notations and tools to guide two large military acquisition projects. Our tasks involved business process modeling, requirements elicitation, size and cost estimation and preparation of technical documents for acquisition of Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) subsystems for Turkish Land Forces Command (TLFC). The outcomes of the projects formed the major parts of the Request for Proposals (RFP) currently issued by TLFC.
In this paper, we focus on the management challenges we faced and the lessons we learned while coordinating the efforts of the pre-development phases of the two military projects. Part 2 gives an overview of the literature on software-intensive system acquisition and management, and part 3 describes the cases. Part 4 explains the activities performed for managing the RFP preparation phase of the acquisition life cycle, including project organization, resources utilized, and planning and tracking practices. Finally, in part 5 we present the lessons learned from managing the projects.
2. Acquisition Planning
According to the Federal Acquisition Regulation (FAR), acquisition means “the acquiring by contract with appropriated funds of supplies or services (including construction) by and for the use of the agency through purchase or lease, whether the supplies or services are already in existence or must be created, developed, demonstrated, and evaluated”. Acquisition begins at the point when agency needs are established and includes the description of requirements to satisfy agency needs, solicitation and selection of sources, award of contracts, contract financing, contract performance, and contract administration.
There are several models and practices that guide the agencies in acquiring software-intensive systems. These include Software Acquisition Capability Maturity Model (SA-CMM), Capability Maturity Model Integration (CMMI)’s Supplier Agreement Management process area, and IEEE’s Recommended Practice for Software Acquisition.
The Software Acquisition Capability Maturity Model (SA-CMM) offers a framework for organizational improvement, and it focuses on building the acquisition process capability of an organization. It defines five levels of software acquisition process maturity. The first (“initial”) level holds no key process area. The second (“repeatable”) level focuses on basic project management and includes key process areas such as software acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support. The third (“defined”) level has a focus on process standardization, and includes key process areas such as process definition and maintenance, user requirements, project performance management, contract performance management, acquisition risk management, and training program management. The fourth (“quantitative”) level focuses on quantitative management, and holds key process areas such as quantitative process management, and quantitative acquisition management. The final and the fifth (“optimizing”) level focuses on continuous process improvement and includes key process areas such as continuous process improvement, and acquisition innovation management.
The SA-CMM’s Project Management key process area at Level-2 describes required commitment, abilities, and activities to perform acquisition management as well as practices for measurement and analysis and verifying implementation. It is intended to manage the activities of the project office and to support organizations to ensure a timely, efficient, and effective acquisition. Project Management involves planning, organizing, staffing, directing, and controlling project activities, such as determining project tasks, estimating effort and cost, scheduling activities and tasks, training, leading the assigned personnel, and accepting products. It begins when the project office is officially chartered and terminates when the acquisition is completed.
The CMMI’s Supplier Agreement Management process area is targeted to manage the acquisition of products from suppliers for whom there exists a formal agreement. It has a context for system and software acquisition, and remains more generic when compared with other models. It includes the practices for determining the type of acquisition that will be used for the products to be acquired, selecting suppliers, establishing and maintaining agreements with suppliers, executing the supplier agreement, accepting delivery of acquired products, and transitioning acquired products to the project.
The Software Engineering Institute (SEI)’s models and practices specified above describe what characteristics the acquisition process should possess, and do not mandate how the acquisition process should be implemented or who should perform an action. In other words, neither of them provides a guide to an acquisition life cycle. IEEE’s Recommended Practice for Software Acquisition offers a life cycle for a typical software acquisition process, which primarily includes planning, contracting, product implementation, product acceptance, and follow-on phases.
IEEE defines and relates one or more steps to each of these phases. The planning phase includes steps such as planning organizational strategy, implementing organization’s process, and determining the software requirements and is completed by releasing the RFP. The contracting phase follows steps such as identifying potential suppliers, preparing contract requirements, and evaluating proposals and selecting the supplier, after which the contract is signed. The product implementation phase includes a step for managing supplier performance, and the product acceptance phase includes the step of accepting the software product. Finally, the follow-on phase holds a step for using the software. It should be noted that these steps might overlap or occur in a different sequence, according to organizational needs. This recommended practice, however, does not provide guidance on the management of the acquisition process.
The models or the practices described above focus on the implementation and/or management of the acquisition process as a whole. But neither of them specifically focuses on the management of the planning phase of the acquisition, which is actually very critical. This is the initial phase in which the domain knowledge is gathered, the business processes are identified and enhanced, and system requirements are determined, which serve as a basis for the entire acquisition process.
3. Description of the Cases
We have implemented the planning phase of the acquisition life cycle in the context of acquiring two large innovative military applications for TLFC. TLFC is a part of the Turkish Armed Forces, and consists of 4 armies, 1 training-and-doctrine command, and 1 logistic command. The projects targeted RFP preparation for two different, but closely related C4ISR sub-systems for TLFC. METU Project Office, as depicted in Figure 2, counseled TLFC Project Office for preparing the technical contract of the system to be acquired.
Throughout the paper, we coded these projects as A and B. Due to security constraints names and descriptions of the organizational processes are not provided. We briefly define the characteristics of the projects to provide insight on the implementations of the acquisition planning process.
The domains of project-A and project-B were different but complementary with a high-degree of data exchange requirements. There are four other C4ISR subsystem projects that are planned to be integrated with these two. Therefore, not only the functional requirements of each subsystem domain, but also the integration requirements of project-A and project-B with these projects had to be considered.
Both projects required taking a system viewpoint to map domain requirements to hardware, software, and telecommunication components. The durations of projects A and B were 8 and 13 months, respectively. The number of staff involved was 11 persons for project-A and 9 persons for project-B. The total effort utilized by METU Project Office for the acquisition planning process was 18 person-months for project-A and 9 person-months for project-B. The effort utilized by TLFC Project Office is not included, as the collected data were not consistent. We estimated the sizes of the software components of projects A and B as 10,092 FP and 25,454 FP, respectively.
The processes we implemented for acquisition planning are shown in Figure 1, and summarized in the following paragraphs.
**Planning and Managing Acquisition Planning Project:** Part 4 explains the practices and details related to project planning and management.
**Eliciting System Requirements:** We performed a business process based requirements elicitation approach to define software-intensive system requirements. We determined user-level functional requirements for software components of the systems, non-functional system requirements, commercial off-the-shelf (COTS) product requirements, and hardware and telecommunication infrastructure requirements for both systems.
**Estimating Software Size, System Development Effort and Cost:** We estimated the sizes of the software components of the systems based on the functional software requirements elicited in the previous step and using Mark II Function Points Analysis method. Effort and cost for the system development were also estimated by using software size estimates.
**Preparing Statement of Work for System Development:** We described system and software development life cycles, which are to be applied by the supplier organizations, together with engineering process steps and their outputs. We used IEEE’s system and software engineering standards and recommended practices as a reference in describing the templates for process outputs.
**Preparing RFP:** We gathered system requirements, system development estimates, and the statement of work. Then we integrated these, in accordance with the acquisition planning process and the regulations of the TLFC, into an RFP. We included the acquisition schedule, management practices, and deliverables; quality requirements for system and software development and management processes; quality assurance requirements for the deliverables; and qualifications of the system and software development and management staff in the RFP to be issued for the system development.
In this study, we will discuss the details of the acquisition planning management activities of the pre-contract phase of two software-intensive systems.
4. Managing Acquisition Planning
The management activities were performed to plan and track the progress of the acquisition projects. In this section we summarize the management activities under project resources, project planning, and project tracking.
4.1. Project Resources
Both projects required various resources, including human resources, process modeling notations and tools, and domain-specific materials such as books and guidelines. The characteristics of the resources utilized in project-A and project-B and their organizations were similar, since not only was the purpose of both projects to acquire C4ISR subsystems, but the customer was also TLFC in both cases.
Human resources included the Project Coordination Committee, the staff of the METU and TLFC project offices, domain experts, and the current executives and representatives of the military units where the system would be used. The organizational charts of the projects are given in Figure 2.
Project Coordination Committee coordinated and monitored the tasks of the METU and TLFC project offices and was in coordination with the coordination committees of other C4ISR sub-system projects of TLFC in order to depict the system interfaces.
METU Project Office counseled TLFC for preparing the technical contract of the system to be acquired within the boundary of the project and included a project manager, and software and hardware/telecommunication analysis teams. Analysis teams modeled the business processes and specified the software, hardware, and telecommunication requirements.
TLFC Project Office executed the processes of the project and included a project manager, externally involved domain experts who have the domain knowledge, executives, and current representatives of the military units, who would use the system to be acquired.
---
**Figure 2 Organizational Charts of Projects**
In project-A, the project staff consisted of 7 part-time persons from METU Project Office, 4 graduate students of METU who have military background, and 5 part-time persons from TLFC Project Office. In addition, 2 domain experts, and 4 representatives of the organizational units, where the system would be used, joined the validation activities.
In project-B, the project staff consisted of 9 part-time persons from METU Project Office, and 9 part-time persons from TLFC Project Office, who are also domain experts. Not all the TLFC Project Office staff participated in the workgroup meetings at the same time; they participated in an interchangeable manner. In addition, 7 more domain experts, who are not members of TLFC Project Office, and 2 representatives of the organizational units where the system would be used joined the validation activities.
Other resources we utilized in the projects included process modeling notations and tools, and the domain books and documents.
We proposed that the candidate tool for business process modeling should support, at a minimum, definitions for process, process flow, input/output, input/output flow, role, and responsibility entities. Specifically in these projects, the Architecture of Integrated Information Systems (ARIS) concept and the ARIS toolset were used as the modeling tool. ARIS is frequently used by consultants and companies in creating, analyzing, and evaluating organizational processes for business process reengineering.
While modeling the business processes, Organizational Charts, Function Trees, Extended Event-Driven Process Chain (eEPC) diagrams, Access diagrams, and Network Topology diagrams were used as basic process modeling notations. The Organizational Chart reflects the organizational units (as task performers) and their interrelationships, depending on the selected structuring criteria. ‘Business processes’ consist of complex functions that are divided into sub-functions, and ‘basic functions’ represent the lowest level in semantic Function Tree diagrams. The procedure of a business process is described as a logic chain of events by means of an event-driven process chain. Events trigger functions and are the result of functions. By arranging a combination of events and functions in a sequence, eEPCs are created. In total, 210 distinct diagrams in project-A and 295 in project-B were created to model the existing business processes of different levels of organizational units using the eEPC notation.
In project-B, in order to generate user-level functional system requirements with the KAOS tool\(^{14}\), we derived and used a special notation while modeling target business processes. This notation differed from project-A's in that color codes and specific rules for the naming of functions, inputs and outputs were used.
Hardware components and their high-level relations for each organizational unit were modeled by using the ARIS Access Diagram notations. The assignment of software components to hardware components, and the domain users of these hardware components were also modeled by using the same type of diagram notations. ARIS Network Topology diagram notations were utilized to represent the system architecture.
Military books and instructions were among the basic resources, especially when domain experts had uncertainties and disagreements related to the concepts of the processes. Throughout the requirements elicitation process of project-A and project-B, 15 and 9 military books and guidelines related to the domain were utilized, respectively.
### 4.2. Project Planning
At the start of the project, a project management plan was prepared, in which the activities of the project and the responsibilities of the project staff for each activity were described. The top-level project activities included:
- Orientation of METU and TLFC project offices
- Determination of the content and boundaries of the projects
- Examining the resources (military books and instructions)
- Examining the master plan (only for project-A)
- Preparation of the project management plan
- Determination of the content of RFP document
- Analysis and modeling of current business processes
- Validation of current business process models
- Analysis and modeling of target system
- Analysis and modeling of target business processes
- Identifying system architecture
- Verification and validation of target system models
- System requirements definition
- System breakdown structure preparation
- User-level functional system requirements definition
- COTS requirements definition
- Non-functional system requirements definition
- Hardware and telecommunication infrastructure requirements definition
- System requirements integration
- Verification and validation of system requirements
- Size, effort, and cost estimation of the software system to be acquired
- Cost estimation of the system to be acquired
- Statement of work preparation for system development
- RFP document preparation
At the beginning of each project, we held an orientation meeting in order to get knowledge about the domain, detail the sub-activities of the top level project activities, and schedule them. TLFC Project Office together with METU Project Office attended the meeting. TLFC Project Office provided short presentations about the domain and answered the questions of METU Project Office. In this meeting, we identified the key stakeholders; the top executives, executives, domain experts, project managers, and end users, who would join the workgroups for analyzing and modeling the current and target business processes, and system requirements definition activities. The outcome of these activities was the initial management plan.
As the projects progressed, we updated this initial management plan based on the results of the tracking activities that are discussed in the next section. In order to reflect more realistic figures in the management plan, we continuously re-estimated effort throughout the project. For re-planning, we utilized the project effort metric database. Since the effort for different categories was being recorded and continuously updated in this database, the average productivity value of staff for modeling one business process converged to a more realistic value as the projects progressed. We used this productivity value and the number of lowest-level functions in each business process model to estimate the effort required per business process. In addition, we derived percentage effort coefficients relating “as-is” modeling effort to “to-be” modeling effort, verification and validation effort, and user-level functional system requirements definition effort.
Accordingly, we predicted the total effort required for project completion. Table 1 shows a sample of how we utilized these metrics to estimate the effort required for modeling the remaining processes and for defining the user-level functional system requirements.
**Table 1 Effort estimation during the projects (effort figures in person-hours)**
<table>
<thead>
<tr>
<th>Business process</th>
<th>Productivity*</th>
<th>Number of functions</th>
<th>As-is modeling</th>
<th>To-be modeling (as-is effort*0.4)</th>
<th>V & V (as-is effort*0.12)</th>
<th>Defining user-level functional system requirements (as-is effort*0.1)</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>18 functions/ 8 person-hour</td>
<td>932</td>
<td>414</td>
<td>166</td>
<td>50</td>
<td>41</td>
<td>671</td>
</tr>
</tbody>
</table>
*(Productivity=Average number of functions modeled/person-hour)*
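To make the use of these coefficients concrete, the figures in the table row can be reproduced as follows (all values in person-hours, rounded as in the table):

\[
\begin{aligned}
\text{as-is} &= 932 \ \text{functions} \times \frac{8 \ \text{person-hours}}{18 \ \text{functions}} \approx 414, \\
\text{to-be} &\approx 0.4 \times 414 \approx 166, \qquad \text{V\&V} \approx 0.12 \times 414 \approx 50, \\
\text{requirements} &\approx 0.1 \times 414 \approx 41, \qquad \text{total} \approx 414 + 166 + 50 + 41 = 671.
\end{aligned}
\]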
Sometimes, it was not possible to get sufficient information on each new process and to find the number of the lowest-level sub-processes. Therefore, we needed to estimate the required effort for such processes by analogy. The similarities and differences between the new and the completed processes were identified, and adjustments were made to relate the effort required for these processes.
**4.3. Project Tracking**
We utilized periodic meetings, progress reports and project effort metric database to track the performance of the projects.
During each of the projects, we held weekly meetings among project team members, monthly meetings with the Project Coordination Committee, and whenever needed, meetings with the other project’s coordination committees.
These periodic meetings helped to track the projects’ progress as well as enabling visibility of the projects to the stakeholders. For both projects, we held 7 coordination meetings with the Project Coordination Committee. In project-B, additionally, 3 more coordination meetings with the other project’s coordination committees were held.
For tracking purposes, we prepared 1 progress report for project-A and 5 progress reports for project-B. These reports were utilized during the periodic meetings among project team members and with the Project Coordination Committee to decide on the new activities or new resources needed. We reflected the effort utilized in the project management plan.
We also utilized our project effort metric database to track the performance of the projects. The database included *team member name, type of work, activity name, date, and effort* attributes. Each team member entered these attributes on a weekly basis. During the periodic meetings among project team members, the effort utilized was compared with the planned effort. The effort required for the remaining activities was estimated using the procedure discussed in the previous section. Accordingly, we updated the management plan whenever needed.
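As an illustration only (the projects used a simple metric database, not the code below; the type and function names are hypothetical), an effort record with these attributes and the derived productivity figure used for re-planning could be sketched as:

```cpp
#include <numeric>
#include <string>
#include <vector>

// Hypothetical record mirroring the database attributes described above.
struct EffortRecord {
    std::string member;      // team member name
    std::string workType;    // type of work (e.g. "as-is modeling")
    std::string activity;    // activity name (e.g. a business process)
    std::string week;        // date (recorded weekly)
    double personHours = 0;  // effort
};

// Average productivity = functions modeled per person-hour, as used in Table 1.
double productivity(const std::vector<EffortRecord>& modelingRecords, int functionsModeled) {
    double totalHours = std::accumulate(
        modelingRecords.begin(), modelingRecords.end(), 0.0,
        [](double sum, const EffortRecord& r) { return sum + r.personHours; });
    return totalHours > 0.0 ? functionsModeled / totalHours : 0.0;
}
```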
**5. Lessons Learned**
Modeling the existing business processes took significant time, almost half of the total project effort, but besides serving as a baseline for the requirements, it helped the stakeholders to identify the bottlenecks and the improvement points needed in the business processes. It also enabled us to create an early consensus with the representatives of the other C4ISR projects.
Identifying domain experts and reaching them as scheduled were among the most challenging tasks. Orientation meetings at the start of the projects, and regular progress meetings between METU and TLFC project offices enabled effective communication throughout the projects. These meetings provided the opportunity to discuss conflicting issues, to notify demands on resources, and to re-plan the work under progress.
Another challenging task of working with domain experts was their orientation for business process modeling. Domain experts needed assistance in thinking in terms of business processes, and in identifying and decomposing the key business processes using specific notations. Almost all domain knowledge was documented in books, instructions, guidelines, or reports. The existence of written resources helped in understanding the context of the domain in detail, and sped up the orientation of the consultants to the domain. However, identifying and modeling business processes from these resources was demanding for the TLFC Project Office, since in these resources the domain knowledge is captured in terms of business work products rather than business processes. In other words, there was confusion between preparing a domain document and executing the processes behind it. For example, documenting the sections of a domain report actually requires executing the steps of a business process that generates that report. This confusion between business work products and business processes slowed down the modeling from time to time, and frequently required elaboration of what business process modeling is.
In both projects, orientation was provided via meetings and frequent discussions. However, regular formal training sessions might have been better to save overall effort in such projects. Since TLFC Project Office had been staffed by domain experts, we needed to assist them on the details of business process modeling and system requirements elicitation throughout the projects. This assistance was one of the most important indicators of success, and was therefore provided with great care, so as to proceed within context without restricting the expectations of the domain.
During the requirements elicitation process, we generated functional requirements manually from the target business process models in project-A, and automatically from the target business process models using the KAOS Tool\(^{14}\) in project-B. For project-A, whose size was estimated to be 10,092 Mark II FP, the manual generation of the requirements document took 2 person-months. After the target business processes had been modeled using the notation's conventions at the required level of detail, the KAOS Tool generated the functional requirements of project-B, which was 25,454 Mark II FP in size, in 30 minutes. Thus, the planning of the target system modeling and functional requirements generation processes was made according to whether the requirements generation would be performed manually or automatically.
During both projects, we maintained a project effort metric database. The team members entered data related to the type of work, activity name, and effort attributes into this database. This helped to reflect more realistic figures in the management plan as the projects progressed, since we utilized this database to estimate the effort needed for modeling the remaining business processes.
The domains of project-A and project-B were different but complementary with a high-degree of data exchange requirements. In addition, project-B had numerous integration points with other C4ISR subsystem projects. Therefore, in project-B, we organized coordination meetings with other TLFC project committees, which helped us to solve many issues related to integration.
Currently, the systems for which we completed acquisition planning have entered the development phase. The TLFC has decided to integrate the development of projects A and B, since the corresponding systems are complementary and their requirements are easy to integrate because they were defined at similar levels of granularity. The TLFC has suspended the release of the RFP, and decided to complete the system and software requirements specification stages on its own, specifically by assigning responsibility to one of its departments that develops in-house systems and software. The development group has used the elicited system requirements as a basis for their planning as well as for requirements specification. They are currently generating use case specifications and scenarios based on the user-level functional system requirements.
Our observations show that pre-development processes need research studies for the development of systematic approaches\(^{15}\). We believe that extensive research on pre-development methodologies, including processes, notations, and heuristics, as well as tools and techniques to support such methodologies, is required. These methodologies should also systematically link the development phases with the work products of the pre-development phases. Specifically, we are currently working on refining the acquisition planning process we described based on the experiences we have gathered. We are also working on size estimation for pre-development and early development phases. We will improve the size metric for the pre-development phases – the number of lowest-level functions – by means of a more precise definition, and we will check its validity for different domains by means of case studies. We have also observed that the size estimation approach we have utilized for the development phase can be partially automated, and we are currently working on such an estimation tool.
6. References
7. CMMI Product Team, CMMI-SE/SW/IPPD, v.1.1, CMMI® for Systems Engineering, Software Engineering,
LAB. 2
GEOMETRIC TRANSFORMATIONS
1. Learning goals
2. Getting started
3. Euclidean transformations
4. Affine transformations
5. 3D transformations in OpenGL
6. Example
7. Programming exercises
8. Final remarks
Lab. 2
GEOMETRIC TRANSFORMATIONS
In this lecture, we are going to deal with geometric transformations in 2D as their generalization in 3D is straightforward. These geometric transformations are also called affine transformations.
1. Learning Goals
At the end of this chapter you should be able to:
1. Explain what transformations are and why we use them in computer graphics.
2. List the three main transformations we use in computer graphics and describe what each one does.
3. Understand how to rotate a point around an arbitrary point.
4. Understand what homogeneous coordinates are and why we use them in computer graphics.
5. Understand the importance of the order of operations in a matrix multiplication expression.
6. Understand what a CTM (Combined Transformation Matrix) is and understand what order the transformations must be in to achieve the desired CTM.
7. Be aware of the default facilities of OpenGL; for example, the default 2D domain in OpenGL is \([-1,1] \times [-1,1]\).
2. Getting Started
Geometric transformations are used to fulfill two main requirements in computer graphics:
1. To model and construct scenes.
2. To navigate our way around 2- and 3-dimensional space.
For example, when a street building has \(n\) identical windows, we proceed as follows:
1. To construct a single window by means of graphics primitives;
2. To replicate the window \(n\) times;
3. To put each window at a desired location using translations and rotations.
This shows that transformations such as translations and rotations can be used as scene modeling operations.
These transformations can also be used to move a bot or an avatar in the virtual environment of a First-Person Shooter (FPS) game.
3. **Euclidean Transformations**
There are two Euclidean transformations:
1. Translation
2. Rotation
### 3.1. Translation
Translation can be thought of as moving something. In translation, a point is moved a distance in a direction.
For example, when the point $A(x, y)$ is translated $dx$ units in the $x$ direction and $dy$ units in the $y$ direction, it becomes:
$$A'(x + dx, y + dy)$$
or, equivalently,
$$\begin{cases} x' = x + dx \\ y' = y + dy \end{cases}$$
Representing points as column matrices, we obtain
$$A = \begin{bmatrix} x \\ y \end{bmatrix}, \quad A' = \begin{bmatrix} x' \\ y' \end{bmatrix}, \quad \text{and} \quad T = \begin{bmatrix} dx \\ dy \end{bmatrix},$$
so that the translation can be expressed as follows:
$$A' = A + T$$
In general, translating an object means to translate its vertices (i.e. corners or endpoints) in such a manner that lines or polygons can then be drawn using the transformed vertices.
### 3.2. Rotation about the origin
Using polar coordinates $(r, \phi)$, a given point in the plane is given by the following equations:
$$\begin{cases} x = r \cos \phi \\ y = r \sin \phi \end{cases}$$
By default, rotating an object by the angle $\theta$ means rotating it around the origin by $\theta$. After rotating the previous point by the angle $\theta$ around the origin, we get the following transformed point:
\[
\begin{align*}
x' &= r \cos(\phi + \theta) \\
y' &= r \sin(\phi + \theta)
\end{align*}
\]
or
\[
\begin{align*}
x' &= r \cos \phi \cos \theta - r \sin \phi \sin \theta \\
y' &= r \cos \phi \sin \theta + r \sin \phi \cos \theta
\end{align*}
\]
that is
\[
\begin{align*}
x' &= x \cos \theta - y \sin \theta \\
y' &= x \sin \theta + y \cos \theta
\end{align*}
\]
In matrix notation, we then obtain
\[
\begin{bmatrix}
x' \\
y'
\end{bmatrix} = \begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta
\end{bmatrix} \begin{bmatrix}
x \\
y
\end{bmatrix}
\]
or
\[
A' = R \cdot A
\]
where $R$ is the 2x2 rotation matrix.
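As a quick sanity check of these equations (a standalone snippet, not part of the lab code), the rotation about the origin can be implemented directly:

```cpp
#include <cmath>
#include <cstdio>

struct Point2D { double x, y; };

const double kPi = 3.14159265358979323846;

// Rotate a point about the origin by theta radians, applying
// x' = x*cos(theta) - y*sin(theta) and y' = x*sin(theta) + y*cos(theta).
Point2D rotateAboutOrigin(Point2D p, double theta) {
    return { p.x * std::cos(theta) - p.y * std::sin(theta),
             p.x * std::sin(theta) + p.y * std::cos(theta) };
}

int main() {
    Point2D p = rotateAboutOrigin({1.0, 0.0}, kPi / 2.0);  // rotate (1, 0) by 90 degrees
    std::printf("(%.2f, %.2f)\n", p.x, p.y);               // prints (0.00, 1.00)
}
```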
### 3.3. Homogeneous Coordinates
We’ve seen the following matrix transformations:
**Translation:**
\[
A' = A + T
\]
**Rotation:**
\[
A' = R \cdot A
\]
Translation is achieved through matrix addition while rotation is achieved by matrix product. This means that we can combine any number of translation matrices through addition, and any number of rotation matrices through multiplication. However, we cannot combine translation and rotation matrices into a single matrix through the product operation. It would be very useful if we could do this because that would enable the composition of geometric transformations through a single matrix operation, say matrix product. Besides, it would be less computationally expensive, as explained below.
**Homogeneous Coordinates** are just a way to overcome this problem. With homogeneous coordinates, a series of geometric transformations can be applied in a sequence using the matrix product. The result is usually called the combined transformation matrix or CTM.
Therefore, translations and rotations expressed in homogeneous coordinates are given by:
**Translation:**
\[ A' = T \cdot A \]
**Rotation:**
\[ A' = R \cdot A \]
In homogeneous coordinates a point \( P(x, y) \) is represented by the homogeneous point \( P(X, Y, W) \) where:
\[ x = \frac{X}{W} \quad \text{and} \quad y = \frac{Y}{W}, \]
where \( W \) usually equals 1 in computer graphics for simplicity.
Using homogeneous coordinates, the Euclidean transformation matrices are expressed as 3x3 matrices as follows:
**Translation:**
\[
T(dx, dy) = \begin{bmatrix} 1 & 0 & dx \\ 0 & 1 & dy \\ 0 & 0 & 1 \end{bmatrix}
\]
**Rotation:**
\[
R(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
### 3.4. Rotation about an arbitrary point
The rotation matrix (above) works well if we intend to rotate a point around the origin. But, what about rotating the point \((x, y)\) around the arbitrary point \((x_a, y_a)\)?
The answer lies in the following procedure of three steps:
- Translate \((x_a, y_a)\) to the origin, i.e. translate by \(T(-x_a, -y_a)\).
- Perform the rotation \(R(\theta)\).
- Translate so that point at the origin returns to the original location, i.e., translate by \(T(x_a, y_a)\).
Therefore, to rotate an object made up of 5 vertices, each geometric transformation would need to be done 5 times. Overall, we have
\[
3 \text{ Transformations} \times 5 \text{ Vertices} = 15 \text{ calculations}
\]
which is computationally expensive.
The computational cost can be reduced using the CTM (Combined Transformation Matrix), i.e. by combining the transformations into a single CTM:
3 Transformations x Identity Matrix = 3 calculations
1 Transformation (CTM) x 5 Vertices = 5 calculations
so that the total number of calculations is equal to 8.
3.5. Order and composition of transformations
The order of geometric transformations of the CTM is relevant because the matrix product is not commutative. In fact,
Matrix product is associative:
When multiplying several matrices, the way we group the multiplications is not relevant, that is
\[ A \cdot B \cdot C = (A \cdot B) \cdot C = A \cdot (B \cdot C) \]
Matrix multiplication is not commutative:
When multiplying matrices together, the order in which we carry out the multiplications is relevant, that is
\[ A \cdot B \neq B \cdot A \]
The question is then how do we work out the order of our matrices when creating the CTM?
Turning back to the steps to rotate a point around the point \((x_a, y_a)\), let us rewrite the corresponding procedure:
- Translate \((x_a, y_a)\) to the origin, i.e. translate by \(T(-x_a, -y_a)\).
- Perform the rotation \(R(\theta)\).
- Translate so that point at the origin returns to the original location, i.e., translate by \(T(x_a, y_a)\).
Thus, the order of the CTM is:
\[ CTM = T(x_a, y_a) \cdot R(\theta) \cdot T(-x_a, -y_a) \]
When we multiply the CTM by the point \(P\) we have
\[ CTM \cdot P = T(x_a, y_a) \cdot R(\theta) \cdot T(-x_a, -y_a) \cdot P \]
An important fact to bear in mind is that the transformation closest to the point \(P\) in the expression is the first transformation to be applied to \(P\).
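A possible sketch of this CTM using the GLM library introduced in Section 5 (4x4 matrices, rotating about the z-axis so that the construction also covers the 2D case; in recent GLM versions the angle is given in radians):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build CTM = T(xa, ya) * R(theta) * T(-xa, -ya).
// The matrix closest to the point, T(-xa, -ya), is applied first.
glm::mat4 rotateAboutPoint(float xa, float ya, float thetaRadians) {
    glm::mat4 toOrigin   = glm::translate(glm::mat4(1.0f), glm::vec3(-xa, -ya, 0.0f));
    glm::mat4 rotation   = glm::rotate(glm::mat4(1.0f), thetaRadians, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 fromOrigin = glm::translate(glm::mat4(1.0f), glm::vec3(xa, ya, 0.0f));
    return fromOrigin * rotation * toOrigin;   // right-most factor acts first
}

// Usage: glm::vec4 p(x, y, 0.0f, 1.0f);
//        glm::vec4 rotated = rotateAboutPoint(xa, ya, theta) * p;
```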
4. Affine Transformations
Euclidean transformations preserve the distance between points, and because of that they are then called rigid transformations.
Affine transformations generalize Euclidean transformations in the sense that they do not preserve distance but parallelism instead. This means that two parallel lines remain parallel after applying an affine transformation. Because of this principal invariant, other properties are preserved. For example, an affine transformation also preserves collinearity (i.e., all points of a line remain on a line after transformation) and ratios of distances or proportions (e.g., the midpoint of a line segment remains the midpoint after transformation).
An affine transformation is also called an affinity. Examples of affine transformations are contraction, expansion, dilation, reflection, rotation, shear, similarity transformations, spiral similarities, and translation, as are their combinations. In general, an affine transformation is the result of a composition of rotations, translations, dilations, and shears.
As seen above, rotations and translations are Euclidean transformations. Let us then see the other two basic affine transformations.
Dilation or Scaling:
In scaling, we change the size of an object. Scaling makes an object bigger or smaller in the x and/or y direction.
Scaling a point \((x, y)\) by a factor \(s_x\) along the x axis and \(s_y\) along the y axis requires we multiply each coordinate by the corresponding scaling factor:
\[
\begin{align*}
x' &= s_x \cdot x \\
y' &= s_y \cdot y
\end{align*}
\]
or, using the matrix notation,
\[
\begin{bmatrix}
x' \\
y' \\
1
\end{bmatrix} =
\begin{bmatrix}
s_x & 0 & 0 \\
0 & s_y & 0 \\
0 & 0 & 1
\end{bmatrix} \cdot
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix}
\]
Shearing:
Shearing enjoys the property that all points along a given line \(l\) remain fixed, while other points are shifted parallel to \(l\) by a distance that is proportional to their perpendicular distance from \(l\). Note that shearing an object in the plane does not change its area at all. As a margin note, let us say that shearing can easily be generalized to three dimensions, where planes are translated instead of lines.
Shearing a point \((x, y)\) by a factor \(h_x\) along the x axis and \(h_y\) along the y axis is given by the following equations:
\[
\begin{align*}
x' &= x + h_x \cdot y \\
y' &= y + h_y \cdot x
\end{align*}
\]
or, using the matrix notation,
\[
\begin{bmatrix}
x' \\
y' \\
1
\end{bmatrix} =
\begin{bmatrix}
1 & h_x & 0 \\
h_y & 1 & 0 \\
0 & 0 & 1
\end{bmatrix} \cdot
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix}
\]
The effect of a shearing looks like “pushing” an object in a direction that is parallel to a coordinate axis in 2D (or coordinate plane in 3D). Note that we can do this only in the x-direction as follows
\[
\begin{align*}
x' &= x + h_x \cdot y \\
y' &= y
\end{align*}
\]
or in the y-direction
\[
\begin{align*}
x' &= x \\
y' &= y + h_y \cdot x
\end{align*}
\]
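GLM's core API has no dedicated 2D shear helper, so if one is needed the shear matrix above can be filled in directly; note that glm::mat4 is indexed column-first (m[column][row]), which is easy to get backwards (a small sketch, not part of the lab code):

```cpp
#include <glm/glm.hpp>

// Build the homogeneous shear matrix of this section:
//   x' = x + hx * y,   y' = y + hy * x
// glm matrices are column-major, so m[col][row] addresses the element in
// that row and column.
glm::mat4 shear2D(float hx, float hy) {
    glm::mat4 m(1.0f);   // start from the identity matrix
    m[1][0] = hx;        // row 0, column 1: contributes hx * y to x'
    m[0][1] = hy;        // row 1, column 0: contributes hy * x to y'
    return m;
}

// Usage: glm::vec4 p(x, y, 0.0f, 1.0f);
//        glm::vec4 sheared = shear2D(0.5f, 0.0f) * p;   // shear along x only
```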
5. 3D Transformations in OpenGL
In modern OpenGL, geometric transformations are defined using GLM (OpenGL Mathematics) and GLSL (OpenGL Shading Language) as follows:
\[
\begin{bmatrix}
a & b & c & d \\
e & f & h & i \\
j & k & l & m \\
n & o & p & q
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
z \\
w
\end{bmatrix}
= \begin{bmatrix}
ax + by + cz + dw \\
ex + fy + hz + iw \\
jx + ky + lz + mw \\
nx + oy + pz + qw
\end{bmatrix}
\]
In C++, with GLM:
```c++
glm::mat4 myMatrix;
glm::vec4 myVector;
// fill myMatrix and myVector somehow
glm::vec4 transformedVector = myMatrix * myVector; // Again, in this order! this is important.
```
In GLSL:
```glsl
mat4 myMatrix;
vec4 myVector;
// fill myMatrix and myVector somehow
vec4 transformedVector = myMatrix * myVector; // Yeah, it's pretty much the same as GLM
```
### Translation:
In GLM, the translation is defined as follows:
```c++
glm::mat4 glm::translate(
glm::mat4 const & m,
glm::vec3 const & translation);
```
which multiplies the matrix \( m \) by a 4x4 translation matrix created from a vector of 3 components.
Let us see an example:
\[
\begin{bmatrix}
1 & 0 & 0 & 10 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
10 \\
10 \\
10 \\
1
\end{bmatrix}
= \begin{bmatrix}
20 \\
10 \\
10 \\
1
\end{bmatrix}
\]
In C++, with GLM:
```cpp
#include <glm/gtx/transform.hpp> // after <glm/glm.hpp>
glm::mat4 myMatrix = glm::translate(glm::mat4(), glm::vec3(10.0f, 0.0f, 0.0f));
glm::vec4 myVector(10.0f, 10.0f, 10.0f, 0.0f);
glm::vec4 transformedVector = myMatrix * myVector;
// guess the result: since w = 0.0f the vector is treated as a direction, so the
// translation has no effect; with w = 1.0f it would give (20, 10, 10, 1) as above
```
In GLSL:
```glsl
vec4 transformedVector = myMatrix * myVector;
```
### Rotation:
In GLM, the rotation is defined as follows:
```cpp
glm::mat4 glm::rotate(
glm::mat4 const & m,
float angle,
glm::vec3 const & axis);
```
which multiplies the matrix m by a 4x4 rotation matrix created from an axis of 3 scalars and an angle (expressed in degrees in older GLM versions; recent GLM versions expect the angle in radians by default).
Let us see an example:
```cpp
// Use #include <glm/gtc/matrix_transform.hpp> and #include <glm/gtx/transform.hpp>
glm::vec3 myRotationAxis(1.0f, 0.0f, 0.0f);
glm::mat4 rot = glm::rotate(angle_in_degrees, myRotationAxis);
```
### Scaling:
In GLM, the scale is defined as follows:
```cpp
glm::mat4 glm::scale(
glm::mat4 const & m,
glm::vec3 const & factors);
```
which multiplies the matrix m by a 4x4 scaling matrix created from a vector of 3 scale factors.
Let us see an example:
```cpp
// Use #include <glm/gtc/matrix_transform.hpp> and #include <glm/gtx/transform.hpp>
glm::mat4 myScalingMatrix = glm::scale(glm::vec3(2.0f, 2.0f, 2.0f));
```
For more details on GLM matrices, the reader is referred to:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
Also, the GLM manual can be found at:
http://glm.g-truc.net/glm.pdf
6. Example
Let us see a program that draws a house. The house is then shifted in steps of 0.1 units along the x- and y-axes until it has moved 10 units along each axis.
The program is available at:
http://www.di.ubi.pt/~agomes/cg/praticas/movinghouse.zip
7. Programming Exercises
1. Re-write the house-building program implemented above so that all building blocks (body, roof, windows, and door) are constructed at the origin. Then use translations to place these building blocks at the desired locations. Each building block is constructed in a separate function. In the case of the window, we need to call its function twice because we assume the house has two windows.
2. Add toggle facilities to the previous program so that the house body is added/removed by pressing the ‘b’ key, the roof by pressing the ‘r’ key, the windows by pressing the ‘w’ key, and the door by pressing the ‘d’ key (see the keyboard-callback sketch after this list).
3. Let us now replicate the house twice. The first copy must be reduced to ¾ of the original size and placed side-by-side on the left of the original house. The second copy must be scaled up to 5/4 and placed side-by-side on the right of the original house.
4. Let us now add a bright sun to the scene. The sun can be generated using the program from Exercise 5 of practical P01. The user changes the position of the sun by pressing the ‘s’ key. The trajectory of the sun is a circular arc.
5. Based on the original code of movinghouse.zip, make the elements of the house (i.e., windows, door, and roof) move away from the body center.
6. Based on the original code of movinghouse.zip, make the elements of the house (i.e., windows, door, and roof) move away from or back toward the body center by translating them.
7. Based on the original code of movinghouse.zip, make the elements of the house (i.e., windows, door, and roof) rotate about the body center.
8. Based on the original code of movinghouse.zip, make the elements of the house (i.e., windows, door, and roof) rotate about the body center and move away from it simultaneously.
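For exercise 2, one possible shape of the toggle logic is sketched below. It assumes a GLFW-based window (the actual movinghouse.zip code may use a different windowing library), and the draw functions named in the comments are hypothetical placeholders for the building-block functions of exercise 1:
```cpp
#include <GLFW/glfw3.h>

// Visibility flags toggled from the keyboard.
static bool showBody = true, showRoof = true, showWindows = true, showDoor = true;

// GLFW key callback: flip the corresponding flag on each key press.
static void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (action != GLFW_PRESS) return;
    switch (key) {
        case GLFW_KEY_B: showBody    = !showBody;    break;
        case GLFW_KEY_R: showRoof    = !showRoof;    break;
        case GLFW_KEY_W: showWindows = !showWindows; break;
        case GLFW_KEY_D: showDoor    = !showDoor;    break;
    }
}

// Register the callback once after creating the window:
//   glfwSetKeyCallback(window, keyCallback);
// and in the render loop draw only the enabled building blocks, e.g.:
//   if (showBody)    drawBody();       // hypothetical building-block functions
//   if (showRoof)    drawRoof();
//   if (showWindows) { drawWindow(0); drawWindow(1); }
//   if (showDoor)    drawDoor();
```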
8. Final Remarks
1. Transformations are mathematical functions that allow us to model and to navigate within 2D and 3D spaces.
2. In computer graphics, we use three main transformations: translation, scaling and rotation.
3. Homogeneous coordinates allow us to treat translation, scaling and rotation in the same manner. Consequently, all affine transformations can be combined into a CTM, which substantially reduces the calculations that need to be made.
4. Matrix multiplication is associative but not commutative (see the example after this list).
5. A CTM may combine many transformations into a single matrix.
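To illustrate remark 4 with a small worked example of our own, compare translating the point \((1, 0)\) by \((1, 0)\) and then rotating it by \(90^\circ\) about the origin with performing the same two transformations in the opposite order (in homogeneous coordinates, the rightmost matrix is applied first):
\[
R(90^\circ)\,T(1,0)
\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix},
\qquad
T(1,0)\,R(90^\circ)
\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 0 & -1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\]
The two results, \((0, 2)\) and \((1, 1)\), differ, so the order in which transformations are multiplied into a CTM matters.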
A Meta-Level Design Science Process for Integrating Stakeholder Needs
Demonstrated for Smart City Services
Antti Knutas1, Zohreh Pourzolfaghar2 and Markus Helfert2
1School of Business and Management, Lappeenranta University of Technology, Lappeenranta, Finland
2Department of Computing, Dublin City University, Dublin, Ireland
Keywords: Smart City, Smart City Services, Service Design, Design Science, Grounded Theory.
Abstract: Currently there is an issue in the design process of smart city services, where citizens as the main stakeholders are not involved enough in requirements engineering. In this paper, we present a meta-level design science process, based on an extended version of design science research methodology, that can be used to create requirements engineering frameworks to inform smart city service requirements engineering processes. The introduced meta-level process is beneficial as it can be used to ensure that design guideline research processes are rigorous, just as design science process ensures scientific rigor in design research. Additionally, we present a previous case study and frame it using the new meta-level design science process.
1 INTRODUCTION
Smart cities are innovative cities which use ICT to improve the quality of life for citizens (Anthopoulos et al., 2016; Booch, 2010; Kondepudi and others, 2014). According to Ferguson (2004), services are the enablers in digital cities and are therefore responsible for improving the citizens’ quality of life. In other words, the services in smart cities need to respond to the needs of the citizens. In this regard, Pourzolfaghar and Helfert (2017) have defined the term ‘smart service’ for services which meet the smart city quality factors and respond to the concerns of smart city stakeholders.
Prior to the introduction of agile methods in the early 2000s, software developers used traditional approaches to develop software. In traditional development methods, such as the waterfall model, requirements are divided into two categories: user and system requirements, or functional and non-functional requirements. Functional requirements are statements of software features and depend on factors such as the expected users (Sommerville, 2011). Functional user requirements define the specific facilities to be provided by the software. According to Sommerville (2011), imprecision in requirements specifications is the cause of many software engineering problems. Smart cities should ensure quality factors as perceived by the end users, who are often the citizens, and ensure that they are included as stakeholders already in the requirements engineering phase of projects.
A review of the recent literature suggests that requirements engineering processes, as they exist in smart city service design today, need guidance. A requirements framework would enable service developers to define user requirements in line with citizens’ needs in smart cities. However, how can we ensure that such a requirements framework responds to the needs of the application domain, is valid, and is scientifically rigorous?
To address the issue, we
1. extend Ostrowski’s design science process for creating meta-level artefacts (2011, 2012), and
2. present how this new design science process can be applied to create requirements engineering frameworks.
Ostrowski et al. (2013) presented a method for creating abstract design knowledge with business process modelling, which is suitable for situations where it is possible to capture explicit organizational knowledge. In this paper, we present an alternative, grounded theory-based approach, which is more suitable for complex problems with human factors that are difficult to address with formal modelling.
The approach presented in this paper is better suited to social systems where the knowledge is tacit rather than formal, which is often the case in human societies.
To summarize the research problem, we want to investigate **how stakeholder needs can be better included in the smart city service design process.** To address this issue, we present a **design science research process for meta-level knowledge artefacts that can be used to design requirements engineering frameworks.**
The rest of the paper is structured as follows. In section two we review recent literature on smart city service design and requirements engineering. In section three we review design science and meta-level knowledge artefact creation. In section four we present a new framework for creating abstract design knowledge and an initial evaluation of the framework in the context of requirements engineering research. The paper ends with section five, conclusion.
## 2 OVERVIEW OF SMART CITY SERVICE DEVELOPMENT AND REQUIREMENTS FRAMEWORKS
### 2.1 Smart City Service Development
The general method for developing services in smart cities is the agile development method. This is due to the valuable capabilities of agile methods in terms of quick delivery, simplicity, flexibility, easy risk management, and less process time compared to traditional models (Shah, 2016). However, many researchers have recently disclosed challenges facing agile methods in terms of defining appropriate goals and considering stakeholder concerns. For instance, Kakarontzas et al. (2014) reported challenges related to planning projects, setting achievable and realistic goals and objectives, and taking the stakeholders’ concerns into account. Likewise, Dingsoyr and Moe (2013) presented challenges concerning project planning, the role of architecture, collaboration between developers and stakeholders, and constraints in contracts. Shah (2016) outlined the challenges facing agile methods as follows: 1) lack of high-quality interactions with stakeholders; 2) time and cost overruns due to evolving requirements; 3) lack of quality requirements in the initial stages, which is essential for success; 4) unrealistic expectations; and 5) no formal modelling of the requirements.
### 2.2 Requirements Engineering
Requirements engineering is one of the crucial stages in software design and development, as it addresses the critical problem of designing the right software for the right user (Aurum and Wohlin, 2005). It is concerned with the identification of goals for a proposed system, the operationalization and conversion of these goals into services and constraints, and the assignment of responsibilities for development. There are different levels of requirements, such as functional requirements, which specify what the system will do, and non-functional requirements, which guide solution design.
Stakeholders play an essential part in requirements engineering, as they represent all the involved parties and will in one way or another define the requirements (Aurum and Wohlin, 2005). Typical stakeholders are product managers, various types of users and the product’s software developers. However, requirements gathering is rarely trivial: requirements elicitation involves seeking, uncovering and elaborating requirements in a complex process, which can involve conflicts between stakeholders (Zowghi and Coulin, 2005). Conflicts between different stakeholder parties lead to a requirements prioritization process, where importance, risk, cost and other factors are used in deciding which features to include in the requirements (Berander and Andrews, 2005).
## 3 DESIGN SCIENCE RESEARCH APPROACH
The overall research approach for this paper is design science (Hevner et al., 2004), which is commonly used in the information system sciences to create artefacts in the form of instantiated systems or new design knowledge (Ostrowski et al., 2011). Hevner and Chatterjee (2010, p. 5) define Design Science Research (DSR) as follows:
“Design science research is a research paradigm in which a designer answers questions relevant to human problems via the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The designed artifacts are both useful and fundamental in understanding that problem.”
From the above, Hevner and Chatterjee (2010, p. 5) derive the first principle of DSR: “The fundamental principle of design science research is that knowledge and understanding of a design problem and its solution are required in the building and application of an artefact.” What essentially separates the design science research process from routine design practice is the creation of new knowledge (Hevner and Chatterjee, 2010). If the design process is rigorous, is based on existing theories and produces new scientific knowledge, then the process can be considered design science research.
The concept of an artefact is at the core of the design science research process. In a synthesis of the Sciences of the Artificial (Simon, 1996) and Developing a Discipline of the Design/Science/Research (Cross, 2001), Hevner et al. (2010) broadly define artefacts, which are the end goal of any design science research project, as follows: constructs (vocabulary and symbols), models (abstractions and representations), methods (algorithms and practices), instantiations (implemented and prototype systems), and better design theories.
The situations where DSR is well applicable are situations where humans and software systems intersect (Hevner and Chatterjee, 2010), like information systems or software engineering research. What makes information systems research unique is that it investigates the phenomenon where technological and social systems intersect (Lee, 2001), which requires a research methodology that takes both into account.
The original paper on design science by Hevner et al. (2004) does not present a model or process for performing design science research. However, a later paper (Hevner, 2007) refines the concept further and identifies the existence of three design science cycles that are present in all design research projects. These cycles are the Relevance Cycle, which connects the contextual environment to the research science project, the Rigor Cycle, which connects the design activities to the knowledge base of scientific foundations, and the Design Cycle which iteratively connects the core activities of building a design artefact and research.
Hevner’s three-cycle view clarified the elements of design science research, but it still does not provide a clear linear view of the design science research process. To provide a process model, Peffers et al. (2007) synthesized the design science research methodology based on the evolving body of knowledge on design science. The process contains six activities, which are summarized as follows: problem identification and motivation, defining the objectives for a solution, design and development, demonstration, evaluation, and communication.
### 3.1 Creating Meta-Level Knowledge Artefacts
In this section we describe the design knowledge framework for design science by Ostrowski et al. (2012), which underlines the division of design science research into an empirical part (a design practice) and a theoretical part (meta-design). These two parts exchange knowledge. The design knowledge framework presents a process for creating meta-knowledge artefacts, which consist of abstract design knowledge. These meta-artefacts in turn can be used in the creation of situational design knowledge, such as instantiations or IT systems.
Meta-design artefacts can be used as 1) a preparatory activity before situational design is started, 2) a continual activity partially integrated with the design practice, or 3) a concluding theoretical activity summarizing, evaluating and abstracting results directed at target groups outside the studied design and use practices (Goldkuhl and Lind, 2010; Ostrowski et al., 2012). Meta-design artefacts are based on data types as opposed to specific instances of data (Ostrowski and Helfert, 2012). These types of artefacts are general, or “unreal” according to Sun and Kantor (2006). However, meta-design produces a solid and generic background for design science activities to construct solutions for real environments, systems and people (Ostrowski and Helfert, 2012; Sun and Kantor, 2006).
In Figure 1 we extend Hevner’s three cycle view (2007) to include the split between abstract and situational knowledge. The original three cycle view included only the top half and considered only situational knowledge. The top level contains the environment and situational design, where design science could be applied to create requirements for one specific smart city service. The lower level contains the creation of abstract design knowledge, which informs and guides the creation of situational artefacts. Ostrowski et al. (2011, 2012) have earlier created a similar extension for the process model by Peffers et al. (2007), following the ideas by Goldkuhl and Lind (2010).
In our case, the meta-design context is fitting for the creation of requirements framework, because we create meta-level design knowledge (framework) that guides the creation of the situational design (set of requirements). Both levels of design, situational and abstract, produce a method as the artefact. However, the situational design produces a set of requirements and the abstract design part produces a requirements framework to guide the requirements process.
Ostrowski et al. (2011, 2012) further divide the meta-artefact design process into three steps that interact with each other: Modelling, literature review and engagement scholarship. We relate them to the cycle model as the (theoretically grounding) rigor cycle and the meta-relevance cycle, as seen in Figure 1.
In the abstract design knowledge phase, two sources of knowledge, literature and design experts, contribute to creating reference models for design (Ostrowski and Helfert, 2012). A literature review allows developing an initial scope and reviewing existing knowledge, and collaboration with practitioners ensures problem relevancy and provides current knowledge. These two information sources are combined into a reference model, which allows modelling and evaluation of solutions. This model is then compared to the existing body of knowledge in theoretical grounding in a rigor cycle, and presented to designers for the design practice phase in a meta-relevance cycle.
The knowledge exchanges presented in Figure 1 also form the three-part grounding process: theoretical, empirical and internal grounding (Goldkuhl and Lind, 2010). Theoretical and empirical grounding take place between the meta-artefact and the artefact design cycle, and internal grounding takes place within both artefact design cycles.
### 3.2 Evaluating and Grounding Meta-Level Knowledge
As with all design science research, the validity of the artefact is judged by its utility (Hevner et al., 2004). Therefore, the model resulting from meta-artefact design should be evaluated to establish its validity before applying it to the artefact design cycle.
There are two levels of evaluation in design science research: artificial and naturalistic (Venable, 2006). Artificial evaluation is contrived or non-real in some manner and may consist of simulations, field experiments or lab experiments. Naturalistic evaluation is a full evaluation of the situational artefact in its intended environment, the application domain. Naturalistic evaluation may consist of methods such as case studies, survey studies or action research.
Ostrowski et al. (2012) present that for abstract design knowledge artificial evaluation is more suitable and for situational design knowledge naturalistic evaluation is most suitable. They also present a process model where situational design knowledge is validated with naturalistic evaluation and abstract design knowledge is further validated by that after an empirical grounding process.
4 META-LEVEL DESIGN PROCESS FOR DESIGN SCIENCE RESEARCH
In this section, we present a new meta-level design process that builds on Ostrowski’s work (2011) and on earlier work on fitting the grounded theory research methodology into the design science research process (Gregory, 2011). Ostrowski’s framework is based on business process modelling and is suitable for design science cases where large organizations are central and the knowledge is explicit. We present an alternative aimed at situations with complex human factors and individuals as actors, such as citizens as stakeholders in service design processes, where the knowledge is tacit.
4.1 A Framework for Creating Meta-abstract Design Knowledge Artefacts
Ostrowski’s framework for creating meta-abstract design knowledge for information systems recommends three steps for creating models for information systems: 1) literature review, 2) collaboration with practitioners, and 3) creating an ontological model using one of the business modelling languages (Ostrowski and Helfert, 2012). It is aimed at process-oriented environments, such as large organizations or situations where business process modelling is appropriate (Ostrowski and Helfert, 2012).
In this section, we present an alternative process that uses grounded theory (Glaser and Strauss, 1967), as defined by Urquhart et al. (2012; 2010) for information systems research, to generate a design theory. In the process design we follow the line of research that discusses and evaluates the use of grounded theory in design theories (Adams and Courtney, 2004; Goldkuhl, 2004; Gregory, 2011; Holmström et al., 2009). This alternative approach is valuable for creating meta-level design knowledge for situations that involve complex human factors that are a challenge for formal models, or for situations where it is not initially clear who the actors are, what their relationships are, and what the exact nature of the issue is.
The objective of grounded theory is the discovery of a theoretically comprehensive explanation about phenomena, using techniques and analytical procedures that enable investigators to develop a theory that is significant, generalizable, reproducible and rigorous (Adams and Courtney, 2004). The aim of grounded theory is not only to describe a phenomenon, but also to provide an explanation of relevant conditions, how actors respond to the conditions and consequences of the actors’ actions (Kinnunen and Simon, 2010; Urquhart et al., 2010). For data analysis, it has a systematic set of procedures that support the development of theory that is inductively derived and continuously tested against empirical data through constant comparison (Strauss and Corbin, 1990).
Grounded theory has three levels of theory: 1) narrow concepts, 2) substantive theories, and 3) formal theories (Urquhart et al., 2010). Substantive theories are generated within a specific area of inquiry. The highest level of abstraction is a “formal theory”, which focuses on conceptual entities, such as organizational knowledge (Strauss, 1987). Our alternative process uses design science to 1) generate a situational grounded theory based on relevance cycle interactions, 2) use theoretical integration (Urquhart, 2007) in the rigor cycle to compare and extend the grounded theory to create a substantive theory, and 3) use the theory to create a model to assist in the situational design phase.
In Table 1 we present the three process steps, as synthesized from the guidelines by Urquhart et al. (2010; 2012) for information systems research and Gregory (2011) for the design science research methodology, and how they can be applied to requirements framework generation. Table 1 also includes a subset of Figure 1 and relates the three process steps to the extended three-cycle view of design science.
To summarize the process, in phase 1a the situation is investigated, and the phenomena and actors around the current situation are identified. This involves gathering source material from the actors, often interviews, and using the grounded theory methodology to code the results. In phase 1b the meta-application domain is engaged, with the researcher interacting with domain and design experts. This results in a situational theory of the issue and initial ideas for a solution. This situational theory is then scaled up in phase 2 by engaging current academic knowledge and using theoretical integration. Finally, in phase 3 the researcher creates a meta-level artefact based on the scaled-up theory that can be used to inform situational design processes.
<table>
<thead>
<tr>
<th>Design science activity</th>
<th>Grounded theory activity</th>
<th>Outcomes</th>
<th>As applied in requirements framework design</th>
</tr>
</thead>
<tbody>
<tr>
<td>1a. Relevance phase</td>
<td>Open and selective coding; initial theoretical coding</td>
<td>Identifying core phenomena, relationships and initial explanation. Concepts defined.</td>
<td>Discovering key concepts and issues in smart city service design and how the stakeholders in the application domain perceive it.</td>
</tr>
<tr>
<td>1b. Meta-relevance phase</td>
<td>Advanced theoretical coding; theoretical grounding.</td>
<td>Grounding the emergent theory in existing expert views. Initially grounded situational theory.</td>
<td>Grounding the initial solution concept in requirements engineering expert opinions. Having a concept for solution that is supported by practitioners.</td>
</tr>
<tr>
<td>2. (Meta-level) rigor phase</td>
<td>Theoretical grounding and scaling up. Relating the emergent theory to literature.</td>
<td>Rigorous situational theory.</td>
<td>An initial concept of a requirements framework. Supported by existing literature.</td>
</tr>
<tr>
<td>3. Meta-artefact design phase; empirical grounding</td>
<td>Constant comparison. Grounding the theory back to original data.</td>
<td>A prescriptive meta-level artefact that can guide artefact design. Design informed by the descriptive situational grounded theory.</td>
<td>Requirements engineering framework that has been created to address the issues in requirements design as explained in the situational theory. Supported by the (empirically and theoretically) grounded theory.</td>
</tr>
</tbody>
</table>
In the example presented in Table 1, this meta-artefact would be a requirements engineering framework that addresses the issues discovered in phases 1 and 2.
4.2 Evaluating the Meta-abstract Design Knowledge Framework with a Sample Case
In this section, we evaluate the utility of our meta-abstract design knowledge framework by presenting how it can guide and inform an ongoing design science meta-artefact design process. This is an initial form of artificial evaluation (Venable, 2006), which should establish a preliminary utility of the framework (Ostrowski et al., 2012; Pries-Heje et al., 2008), and thus its validity (Hevner et al., 2004).
The evaluation consists of framing an existing series of case studies by Pourzolfaghar et al., in which abstract design knowledge is created using the meta-abstract design knowledge framework. In this series of case studies, Pourzolfaghar et al. have discovered that citizens are currently not involved enough as stakeholders in smart city design processes (Pourzolfaghar et al., 2016; Pourzolfaghar and Helfert, 2017), even though they are most often the end users. This is a clear issue in software system design, because requirements elicitation from all stakeholders is a critical part of requirements engineering (Zowghi and Coulin, 2005).
So far, the research group has identified the issue and established a problem definition (Pourzolfaghar et al., 2016), and has created a taxonomy based on a literature review to inform smart city service developers (Pourzolfaghar and Helfert, 2017). The next step is to create a requirements engineering framework that would enable smart city requirements engineering processes to better consider citizens as stakeholders.
When one frames the entire process in the context of a design science process as presented in Figure 1 and Table 1, there are two levels. On the situational level, the application environment is 1) smart city service developers using a requirements design framework to create services, and 2) the service users as stakeholders. The knowledge base is the scientific body of knowledge on the topic. The meta level on the other hand consists of Pourzolfaghar’s research group, who are creating a requirement engineering framework to inform individual service requirements engineering processes. What we are presenting in this paper is a framework to describe and formalize meta-level design.
The research group is creating several meta-artefacts, of which the smart city service taxonomy was the first one (Pourzolfaghar and Helfert, 2017). The taxonomy is a meta-artefact, because it is not the result of smart city service design processes, but instead has been created to inform the design process.
In Table 2 we frame the research group’s design process as a meta-artefact design process with the new meta-abstract design knowledge framework and present a proposed plan for how they would proceed. The benefit of the framework in this case is ensuring that their framework is strongly grounded in actual citizen needs while enabling the theory to be scaled up to a more general level. Having a framework to support the meta-level design science research process also ensures that relevance, rigour and design grounding are all considered in the process.
After the creation of the research plan (summarized in Table 2), the members of the research group were interviewed, first individually and then as a group. The research group agreed that the plan is beneficial and could inform their meta-artefact design process. While not full proof of the framework’s validity, this can be considered a promising initial evaluation and suggests that the framework evaluation should proceed with further, empirical testing. The interview-based evaluation found the following benefits in the proposed approach.
- The main goal of the process is designing effective services that can improve citizens’ quality of life, so grounding the design theory back to the stakeholders is beneficial.
- The design process involves complex, human problems, as service design is a complex, human-centred issue. In this case the new, proposed framework is suitable.
- The research topic sits at the intersection of human-computer interaction studies with users and smart city services, business process modelling, and software development processes. A flexible model creation process allows addressing all of these issues.
5 CONCLUSIONS
In this paper, we presented an issue in stakeholder involvement in smart city service design and a new method that can be used to inform the design of meta-artefacts, such as requirements frameworks. This has the potential to improve the quality of services and design processes, not by directly addressing the issue, but by presenting a method for creating abstract design knowledge for design science research.
Table 2: Framing an existing case with the design science-based meta-artefact framework.
<table>
<thead>
<tr>
<th></th>
<th>1a. Relevance phase</th>
<th>1b. Meta-relevance phase</th>
<th>2. (Meta-level) rigor phase</th>
<th>3. Meta-artefact design phase; empirical grounding</th>
</tr>
</thead>
<tbody>
<tr>
<td>Grounded theory activity</td>
<td>Open and selective coding; initial theoretical coding</td>
<td>Advanced theoretical coding; theoretical grounding.</td>
<td>Theoretical grounding and scaling up. Relating the emergent theory to literature.</td>
<td>Constant comparison. Grounding the theory back to original data.</td>
</tr>
<tr>
<td>Case activities and outcomes</td>
<td>- Discovering what needs exist in regard to smart city services in the local context with interviews and the grounded theory coding-based analysis process. Interviewing both smart city service designers and stakeholders. Creating a simple, situational grounded theory model to describe what needs exist, how requirements engineering processes respond to current needs and what is missing.</td>
<td>- Grounding the local, situational model of smart city service requirements design in smart city service designer opinions. Creating a model of requirements engineering processes as part of theoretical coding. - Having a model of user needs in the environment that allows weighing the taxonomy.</td>
<td>- Scaling up the taxonomy and model of user needs by rigorously comparing it to existing scientific literature. - Seeing if the local situational theory matches existing scientific knowledge and identifying the novel contribution. - Scaling up the requirements engineering model</td>
<td>- A formal taxonomy that has been generated from local observations and then scaled up by literature review. - A situational model of user needs in smart city services that can be used to weigh the smart city service taxonomy and to inform smart city service design processes. - Validation: Grounding the scaled-up model by applying it in the original context.</td>
</tr>
</tbody>
</table>
We also extend the current state of the art in meta-artefact creation processes (Iivari, 2015; Ostrowski et al., 2011; Ostrowski and Helfert, 2013) with a grounded theory-based approach and present a novel process description for meta-abstract artefact creation. In this approach, the grounded theory research method can be used in conjunction with a design science meta-artefact creation process to create abstract design knowledge and situational design theories, providing an example of how to apply the combination of grounded theory and design science, as originally proposed by Gregory (2011).
The framework presented in this paper warrants future investigation and evaluation in order to establish its utility and thus its validity. As future work, the researchers will proceed by creating a requirements framework, informed by the meta-artefact creation process presented in this paper.
ACKNOWLEDGEMENTS
The first author gratefully acknowledges the funding by the Ulla Tuominen Foundation. This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero - the Irish Software Research Centre (www.lero.ie).
REFERENCES
Sun, Y., Kantor, P.B., 2006. Cross-Evaluation: A new model for information system evaluation. Journal of the American Society for Information Science and Technology 57, 614–628.
This document defines a protocol and URI scheme for user invitation in order to allow a third party to register on a server. The goal of this is to make onboarding for XMPP IM newcomers as easy as possible.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2018 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
**NOTE WELL:** This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at https://xmpp.org/about/xsf/ipr-policy or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
## Contents

1 Introduction
  1.1 User Invitation
  1.2 Account Creation
2 Requirements
3 Discovery
4 Glossary
5 Use Cases
  5.1 Creating a User Invitation
  5.2 Landing Page
  5.3 Redeeming a User Invitation
    5.3.1 Pre-Configured Account
    5.3.2 No Configured Account
  5.4 Initiating Account Creation
  5.5 Pre-Authenticated In-Band Registration
6 Business Rules
  6.1 Fallback to Client-Side PARS
  6.2 Account Creation
7 Implementation Notes
  7.1 XMPP Server Suggestion for Invitees
8 Accessibility Considerations
9 Internationalization Considerations
10 Security Considerations
11 IANA Considerations
12 XMPP Registrar Considerations
13 XML Schema
1 Introduction
Romeo is an active XMPP IM (Instant Messaging) user or the operator of an XMPP server. He convinces Juliet (who may not have an XMPP account yet) to install a client but she may still need to choose an XMPP server, create an account, and add Romeo as a contact. This specification defines two ways for Romeo to simplify this process for Juliet:
1.1 User Invitation
If Romeo is an XMPP user, he can create an out-of-band link (URI) which allows Juliet to:
1. Download an XMPP client (if needed).
2. Optionally register an account with Romeo’s server (if permitted by the server rules), or with a public server.
3. Establish a mutual presence subscription between Romeo and Juliet.
The process is designed to automatically skip steps that Juliet already completed, to make the overall experience as smooth as possible.
1.2 Account Creation
If Romeo is an administrator of an XMPP server, he can create an out-of-band link (URI) which allows Juliet to:
1. Download an XMPP client (if needed).
2. Register an account on Romeo’s server with a user name defined by Romeo and a password not known to Romeo.
3. Establish a mutual presence subscription between Romeo and Juliet.
2 Requirements
This specification makes use of XMPP URIs. The basic URI scheme for XMPP is defined in RFC 5122 \(^1\) and extended in XMPP URI Query Components (XEP-0147) \(^2\) and Pre-Authenticated Roster Subscription (XEP-0379) \(^3\). Furthermore, this specification heavily builds upon the blocks provided in XEP-0379 for the landing page and roster subscription.
To create out-of-band invitation links, Romeo’s server needs to implement the Ad-Hoc Commands (XEP-0050) commands specified in the following section, and his client must be able to execute them.
Furthermore, Romeo’s server SHOULD provide a HTTPS service hosting the landing page.
3 Discovery
Romeo can query his server for the availability of "User Invitation" and "Account Creation" commands:
Listing 1: Discover available ad-hoc commands
```
<iq type='get' from='[email protected]' to='example.com' id='disco'>
<query xmlns='http://jabber.org/protocol/disco#items'
node='http://jabber.org/protocol/commands'/>
</iq>
```
TODO: use appropriate node namespace.
Listing 2: Discovery result for available ad-hoc commands
```
<iq type='result' to='[email protected]' from='example.com' id='disco'>
<query xmlns='http://jabber.org/protocol/disco#items'
node='http://jabber.org/protocol/commands'>
<item jid='example.com'
node='urn:xmpp:invite#invite'
name='Invite user'/>
<item jid='example.com'
node='urn:xmpp:invite#create-account'
name='Create account'/>
</query>
</iq>
```
When performing the account creation, Juliet’s client needs to ensure that the server supports the extended IBR protocol with a <preauth> token: TODO
4 Glossary
OPTIONAL.
5 Use Cases
5.1 Creating a User Invitation
A user can execute the 'invite' command to obtain a new invitation link with a unique invitation token.
Listing 3: Execute user invitation command
```xml
<iq type='set' from='[email protected]' to='example.com' id='exec1'>
<command xmlns='http://jabber.org/protocol/commands' node='urn:xmpp:invite#invite' action='execute'/>
</iq>
```
Listing 4: User invitation finished
```xml
<iq type='result' to='[email protected]' from='example.com' id='exec1'>
<command xmlns='http://jabber.org/protocol/commands' node='urn:xmpp:invite#invite' status='completed'>
<x xmlns='jabber:x:data' type='result'>
<item>
<field var='uri'>
<value>xmpp:[email protected]?roster;preauth=TOKEN;ibr=y</value>
</field>
<field var='landing-url'>
<value>https://example.com/invite/#TOKEN</value>
</field>
<field var='expire'>
<value>2017-11-06T02:56:15Z</value>
</field>
</item>
</x>
</command>
</iq>
```
The token should be unique, sufficiently long and generated by a strong random number generator. A server MUST provide the `uri` field which contains an XMPP URI of the following format:
```
xmpp:[email protected]?roster;preauth=TOKEN;ibr=y
```
The `ibr` query component in the XMPP URI indicates that the invitee is allowed to create an account on Romeo’s server, using the 'preauth' token. If the server does not support or allow in-band registration for invited users, the server MUST omit the `ibr` query component.
Additionally, the server SHOULD provide the landing-url field, which contains an HTTPS URL of a web-based landing page as described in Pre-Authenticated Roster Subscription (XEP-0379) § 3.3. The URL format may differ from the example shown here depending on where the landing page is hosted.
If the server omits the landing-url field, Romeo’s client SHOULD generate an appropriate landing page URL hosted by the client developer or a trusted third party.
A server MAY provide an expire field containing the expiration date of the generated token. The expiration date MUST conform to the DateTime profile specified in XMPP Date and Time Profiles (XEP-0082). If the field is not provided, the token does not expire.
Romeo’s client should provide adequate means to export the landing page URL, possibly accompanied by a short description and the expiration information, so that Romeo can share it with Juliet by means other than XMPP, such as e-mail or a QR code.
5.2 Landing Page
The landing page that the generated URL points to should correspond to the format described in XEP-0379 §3.3, and it needs to convey the following information:
- A short text that this is an XMPP invitation from Romeo.
- A client recommendation (based on the detected web browser/OS) with download links.
- A prominent button that activates the encoded xmpp: link.
If the landing page is hosted by Romeo’s server, the server MAY display additional information based on the supplied TOKEN value, like the name of the inviter or validity information.
5.3 Redeeming a User Invitation
If Juliet does not have an XMPP client installed, she will not be able to open the xmpp: link from the invitation page. For this case, the landing page needs to indicate that a client must be installed first, and that the link will not work as intended without one. The automatic installation of an appropriate IM client when a user clicks on an xmpp: link is outside the scope of this document.
With an XMPP client installed, Juliet can open the xmpp: link and have the client process it appropriately, as follows:
5.3.1 Pre-Configured Account
If Juliet’s client is already configured with an account, the default action for the presented `xmpp:[email protected]?roster;...` URI is to add the inviter to Juliet’s roster. This should be performed as described in §3.4 of XEP-0379, by sending a presence subscription request containing the ‘preauth’ token.
If Juliet already has Romeo in her roster, her client should open the appropriate chat interface instead.
5.3.2 No Configured Account
If Juliet’s client does not have an XMPP account configured, she needs to login or register an account first. Therefore, the client should provide an interface with the following options:
- Login with an existing XMPP account.
- Register an account with Romeo’s server (if the URI contains an `ibr=y` parameter).
- Register an account with a public or client-endorsed server.
If the `xmpp:` URI provided by Romeo contains the `ibr=y` parameter, it indicates that the server supports the Pre-Authenticated In-Band Registration defined in this document. If Juliet chooses this approach, the server will ensure that after the registration, Romeo is added to her roster with a full presence subscription.
If Juliet chooses to login or register with a different server, her client must complete the respective process and issue a subscription request as described in §3.4 of XEP-0379.
5.4 Initiating Account Creation
If Romeo is the administrator of an XMPP server, he might want to ensure that Juliet obtains an account on this server, with a username defined either by Romeo or by Juliet, and in a way that does not require the out-of-band communication of user passwords.
TODO: description of overall process steps, design motivation.
Listing 5: Execute account creation command
```
<iq type='set' from='[email protected]' to='example.com' id='exec1'>
<command xmlns='http://jabber.org/protocol/commands'
node='urn:xmpp:invite#create-account'
action='execute'/>
</iq>
```
Listing 6: Service returns form for account creation
```xml
<iq type='result' to='[email protected]' from='example.com' id='exec1'>
<command xmlns='http://jabber.org/protocol/commands' sessionid='config:20020923T213616Z-700' node='urn:xmpp:invite#create-account' status='executing'>
<actions execute='complete'/>
<x xmlns='jabber:x:data' type='form'>
<field var='username' label='Username' type='text-single'/>
<field var='roster-subscription' label='Roster subscription' type='boolean'/>
</x>
</command>
</iq>
```
A server MAY require a username to be specified for account creation. In this case, the server MUST add the <required/> element to the username field. The username MUST be a valid localpart as defined in RFC 6122 §2.3.
Listing 7: Account creation with specified username
```xml
<iq type='set' from='[email protected]' to='example.com' id='exec2'>
<command xmlns='http://jabber.org/protocol/commands' sessionid='config:20020923T213616Z-700' node='urn:xmpp:invite#create-account'>
<x xmlns='jabber:x:data' type='submit'>
<field var='username'><value>juliet</value></field>
</x>
</command>
</iq>
```
Listing 8: Account creation finished
```xml
<iq type='result' to='[email protected]' from='example.com' id='exec2'>
<command xmlns='http://jabber.org/protocol/commands' sessionid='config:20020923T213616Z-700' node='urn:xmpp:invite#create-account' status='completed'>
<x xmlns='jabber:x:data' type='result'>
<item/>
</x>
</command>
</iq>
```
The server’s response for account creation is the same as for user invitation except for the format of the *uri* field which contains an XMPP URI of the following format:
```
xmpp:[email protected]?register;preauth=TOKEN
```
If no username was specified during the account creation process, the local part of the JID in the XMPP URI is omitted by the server which results in the following format:
```
xmpp:example.com?register;preauth=TOKEN
```
**5.5 Pre-Authenticated In-Band Registration**
In order to allow invited users to register on a server, the registration process as defined in [In-Band Registration (XEP-0077)](https://xmpp.org/extensions/xep-0077.html) needs to be extended. The invited user’s client MUST add a `<preauth>` element in the 'TODO' namespace to the 'jabber:iq:register' query in order to inform the server that it wants to perform Pre-Authenticated IBR:
```
<iq type='get' id='reg1' to='example.com'>
<query xmlns='jabber:iq:register'>
<preauth xmlns='urn:xmpp:invite:1'/>
</query>
</iq>
```
If the server supports and is ready to perform Pre-Authenticated IBR, it MUST add a `<token>` element to the response (TODO: 'token' or 'preauth')?
Listing 10: Receiving registration form
```
<iq type='result' to='[email protected]' from='example.com' id='reg1'>
<query xmlns='jabber:iq:register'>
<x xmlns='jabber:x:data' type='form'>
<field type='hidden' var='FORM_TYPE'>
<value>urn:xmpp:invite:1</value>
</field>
<field type='text-single' label='Username' var='username'>
<required/>
</field>
<field type='text-private' label='Password' var='password'>
<required/>
</field>
<field type='text-single' label='Invite token' var='token'>
<required/>
</field>
</x>
</query>
</iq>
```
Listing 11: Receiving registration form with error (invalid token)
```
<iq type='error' from='example.com' id='reg1'>
<query xmlns='jabber:iq:register'>
<x xmlns='jabber:x:data' type='form'>
<field type='hidden' var='FORM_TYPE'>
<value>urn:xmpp:invite:1</value>
</field>
<field type='text-single' var='username'>
<value>juliet</value>
</field>
<field type='text-private' var='password'>
<value>m1cro$oft</value>
</field>
<field type='text-single' var='token'>
<value>BADTOKEN</value>
</field>
</x>
</query>
</iq>
```
Listing 12: Receiving registration form with error (token expired)
```
<iq type='error' from='example.com' id='reg1'>
<query xmlns='jabber:iq:register'>
<bad-request xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<invalid-token xmlns='urn:xmpp:invite:1'/>
</query>
</iq>
```
After the invitee has successfully registered on the inviter’s server and roster subscription is enabled for account creation, the server MUST use roster pushes as defined in RFC 6121 §2.1.6 in order to inform the inviter about the invitee’s new account without the need to reconnect.
Listing 13: Push roster item of invitee to inviter
```xml
<iq type='set' from='[email protected]' id='push'>
<query xmlns='jabber:iq:roster'>
<item subscription='both' jid='[email protected]'/>
</query>
</iq>
```
6 Business Rules
6.1 Fallback to Client-Side PARS
If the inviter’s server does not support user invitation, the client application SHOULD silently fall back to Pre-Authenticated Roster Subscription (XEP-0379) for a good user experience.
6.2 Account Creation
If a username was specified during the account creation process, the server SHOULD NOT create an account on the server until the invitee actually registers it with the corresponding token. The server MUST reserve the username at least until the corresponding token expires.
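A minimal in-memory sketch of this reservation rule, assuming a server-side store keyed by username; the names, data structure and expiry policy are illustrative assumptions only.
```python
import time

# Hypothetical reservation store: username -> (token, expiry timestamp).
_reservations = {}

def reserve_username(username, token, ttl_seconds):
    # Hold the username at least until the invitation token expires.
    _reservations[username] = (token, time.time() + ttl_seconds)

def may_register(username, token):
    entry = _reservations.get(username)
    if entry is None:
        return True                    # username was never reserved
    reserved_token, expiry = entry
    if time.time() > expiry:
        del _reservations[username]    # reservation has lapsed
        return True
    return token == reserved_token     # only the invitee's token may claim it
```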
7 Implementation Notes
7.1 XMPP Server Suggestion for Invitees
If the invitee opens an invitation URI with `ibr=y` and chooses to create a new account, the client SHOULD pre-fill the inviter JID's domain part as the new account's domain. The client MAY provide a mechanism to enter or choose a different server, though.
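A client-side sketch, again purely illustrative, of parsing an invitation URI such as `xmpp:[email protected]?roster;preauth=TOKEN;ibr=y` and pre-filling the inviter's domain; real clients would normally rely on a proper XMPP URI parser.
```python
def parse_invitation(uri):
    # Split "xmpp:<path>?<action>;k1=v1;k2=v2" into its parts.
    assert uri.startswith("xmpp:")
    path, _, query = uri[len("xmpp:"):].partition("?")
    action, *raw_params = query.split(";")
    params = dict(p.split("=", 1) for p in raw_params if "=" in p)
    return {
        "action": action,                   # 'roster' or 'register'
        "domain": path.rsplit("@", 1)[-1],  # inviter JID's domain (or bare domain)
        "token": params.get("preauth"),
        "ibr": params.get("ibr") == "y",
    }

info = parse_invitation("xmpp:[email protected]?roster;preauth=TOKEN;ibr=y")
if info["ibr"]:
    suggested_domain = info["domain"]       # pre-fill 'example.com' in the sign-up form
```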
8 Accessibility Considerations
OPTIONAL.
9 Internationalization Considerations
OPTIONAL.
10 Security Considerations
See security considerations in Pre-Authenticated Roster Subscription (XEP-0379) \(^{11}\).
11 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA) \(^{12}\).
\(^{12}\) The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
12 XMPP Registrar Considerations
As authorized by XMPP URI Query Components (XEP-0147)\(^\text{13}\), the XMPP Registrar maintains a registry of queries and key-value pairs for use in XMPP URIs (see <https://xmpp.org/registrar/querytypes.html>). The key-value parameter `preauth` is added to the `register` query action as defined in In-Band Registration (XEP-0077)\(^\text{14}\):
```
<querytype>
<name>register</name>
...
<key>
<name>preauth</name>
<desc>the token used to allow one-time in-band registration on the inviter’s server</desc>
</key>
</querytype>
```
In addition to the `preauth` key-value parameter defined in Pre-Authenticated Roster Subscription (XEP-0379)\(^\text{15}\), the `ibr` parameter is added to the `roster` query action:
```
<querytype>
<name>roster</name>
...
<key>
<name>ibr</name>
<value>y</value>
<desc>the parameter to indicate that the token allows the invitee to create an account on the inviter’s server via in-band registration</desc>
</key>
</querytype>
```
13 XML Schema
REQUIRED for protocol specifications.
QOS-BASED RANKING MODEL FOR WEB SERVICE
SELECTION CONSIDERING USER REQUIREMENTS
G. VADIVELOU, E. ILAVARASAN
1 Research Scholar, Dept. of CSE, Bharathiar University, Coimbatore, Tamilnadu, India
2 Professor, Dept. of CSE, Pondicherry Engineering College, Puducherry, India
E-mail: [email protected], [email protected]
ABSTRACT
Web services are widely used in business application development to achieve interoperability among standalone systems. Efficient and effective techniques are required to find and select the required services among similar services, which is an important task in service-oriented computing. The ranking process, which is part of a Web service discovery system, helps users find the desired services effectively. The existing research contributions on the ranking process do not consider the user’s requirements, which are an important factor in ranking Web services. In this work, the vector-based ranking method is enhanced to consider the user’s requirements. The vector-based model is selected because of its simplicity and high efficiency. The Web services are evaluated on the basis of their similarity degrees to the optimal (best) values of the various quality attributes. Experiments are conducted with a real dataset, the improved algorithm is compared with other approaches, and the enhanced vector-based ranking method is found to be efficient in terms of the execution time needed to return the result set.
Keywords: SOA, Web Services Selection, Web Services Ranking, Vector-Based Ranking, QoS
1. INTRODUCTION
The growing number of business applications in distributed systems has resulted in an increasing demand for communication between business modules. In the context of the business community, Service Oriented Architecture (SOA) was proposed based on the idea that, to solve a large problem more effectively, the required process can be decomposed into a collection of smaller but related parts[1]. The most common way to implement SOA is through Web services. According to the W3C (World Wide Web Consortium), a Web service is defined as a software module which is implemented through standard XML-based technologies such as WSDL and SOAP. With the increasing number of Web services, discovering and selecting the best services to fulfill a required task is becoming more important.
In order to search and invoke Web services based on a user’s requirements, all functional services first need to be advertised by their providers in a public UDDI (Universal Description, Discovery, and Integration) registry[2]. Service providers publish descriptions and properties of their Web services in a standard file, i.e. WSDL (Web Service Description Language). A WSDL file contains information about the data types, operations and network location of the Web service. Consumers then create their queries and use a discovery facility or an agent to search UDDI and locate the set of Web services relevant to their desired requirements. Finally, consumers need to select and invoke one of the Web services among all retrieved results[3]. More and more Web services with similar functionality are made available on the Web. In order to locate and select the appropriate Web services, additional features, i.e. non-functional attributes or quality of Web services (QoS) such as response time, scalability, etc., are taken into consideration in the discovery and selection process.
With the increasing size of the UDDI registry, it is becoming more difficult to locate and retrieve all matched Web services and present them to consumers. Furthermore, the retrieved result usually contains more than one matched Web service that meets the functional and non-functional criteria. Therefore, it is essential to devise an efficient technique to measure the ranking order of the retrieved services based on the user’s requirements on different QoS attributes. The process of ranking Web services is a dominant part of a Web service selection system, as it helps users select their desired service easily.
2. PROBLEM STATEMENT
In the ranking process, the fundamental step is to find the similarity degree between the user’s request and a service. Various methods have been proposed to address the problem of ranking Web services. These methods compare all the quality parameters of similar Web services with the optimal values for each QoS attribute, and the services with the maximum similarity degree to the optimal values are returned as the result. Some of the previous works use complicated data indexing methods in the query structure of the ranking process, or compare all Web services in a pair-wise manner, which involves more computation time. As the number of similar Web services grows, the number of pair-wise comparisons increases, which makes the algorithm much slower. Also, most of the authors do not consider the role of consumers in their works. They considered only services with the optimal values and not the real constraints which are part of the query. Finally, users are recommended a set of Web services with minimum distance to the optimal values, and different users will get the same set of Web services as recommendations.
Also, various existing frameworks considered only a small number of QoS attributes and experimented only with small-sized Web service repositories. But there are various types of QoS attributes which should be considered to fulfill a desired task. Consumers prefer efficient methods which can deal with different types of constraints and large Web service repositories.
Also, the existing frameworks take more time to process a request. As the number of published Web services keeps increasing, it is very important to return the results fast, since the user’s tolerance of slow responses is usually very low.
The main goal of this work is to develop a Web service ranking model in which the user’s request and preferences are considered along with the optimal values. Equal weights are given to both factors.
As in the previous works, a simple and straightforward method to rank retrieved Web services is used to achieve accurate results. In this work, a methodology for ranking Web services is proposed by developing a vector-based framework that considers the user’s requests and preferences, and the quality and efficiency of the results of the proposed method are compared with one of the existing ranking algorithms. The proposed methods are also compared with a simple positional algorithm to show that they are reliable and efficient. Different numbers of Web services and different types of QoS attributes, such as interval data and Boolean, are considered to compare the algorithms. A real QoS dataset is used for the simulation.
The main contributions in this work are:
1. An improved rank-aggregation-based algorithm (Borda Fuse) is proposed to cover the user’s requirements.
2. A new enhanced ranking algorithm based on a vector-based model is proposed, which is capable of dealing with the user's requirements and of measuring the ranking relation between services.
3. Finally, the enhanced algorithms are compared with one of the well-known skyline ranking algorithms (Sort-Filter-Skyline), which has a complex structure, to show their efficiency on large-sized datasets with a large number of attributes and different data types.
3. RELATED WORKS
Most researchers classify Web service discovery and ranking methods into two different groups: syntactic and semantic approaches. In semantic methods, ontology concepts are used in the discovery process, whereas in syntactic methods the selection process is based on syntactic information. As semantic-based approaches require massive human effort and a complicated computational process, their processing time is slow. In addition, there is no standard ontology definition that can be used across different situations. To address these issues, another category of service discovery approaches has been developed which is based on syntactic information. It is believed by many researchers that syntactic-based models are more efficient than semantic-based approaches.
In the rank-based aggregation technique proposed by Aslam and Montague [4], the services are first ranked in different lists based on each individual attribute, and then the algorithm combines the different ranked lists to compute the final ranked list.
To aggregate \( m \) ranked lists generated by \( n \) sources, the rank aggregation problem is used. There are two types of rank-based aggregation methods:
1. Supervised rank aggregation techniques, which rely on training data, and unsupervised rank aggregation methods, which need no training data.
2. Unsupervised rank aggregation techniques are further categorized into two groups: positional methods and Majoritarian techniques.
Positional methods generate the final ranked list by combining ranking scores, i.e. by summing the positional values of each element in each ranked list. The most common positional method is the Linear Score Combination method, in which the scores of items are aggregated by operators such as a weighted sum to compute the final ranked list.
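As a rough illustration of this linear score combination (the weights and scores below are made up, not taken from any cited work):
```python
# Combine per-attribute positional scores with a weighted sum.
def combine_scores(positional_scores, weights):
    return {svc: sum(w * s for w, s in zip(weights, scores))
            for svc, scores in positional_scores.items()}

# Two services scored on three attributes, with illustrative weights.
print(combine_scores({"S1": [3, 1, 2], "S2": [1, 2, 3]}, [0.5, 0.3, 0.2]))
```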
Another important algorithm in this context is referred to as Borda-Fuse, proposed by Bartell et al.[5]. It is considered an effective algorithm to rank a set of data points. The algorithm was introduced to solve the voting problem. It is a very simple procedure which has been proven to be effective.
Another positional algorithm that can be named in this area is Median-Rank aggregation method introduced by Fagin et al.[6], in which the candidate documents are ranked based on their median ranks.
Majoritarian rank aggregation approaches are another type of unsupervised rank aggregation method. In these algorithms, every item is compared with another candidate[7]. The method consists of repetitive steps: first a list of all candidates is considered, and then each item in the list is compared with the next one. The winner stays in the list, but the loser is removed from the list. The comparison steps are repeated until there is no other item in the list to be compared. This method suffers from low speed, as the number of comparisons grows when the number of items in the dataset increases.
There is another type of matchmaking and ranking algorithm based on the skyline query concept, which is a dominant topic in the database field. The skyline operation was introduced by Kossman et al.[8] to solve maximum vector problems. The model calculates and filters the desired points relevant to a query and returns all possible solutions among a large set of data points in a given domain. Skyline points are composed of services that are not dominated by other services. Skyline points assist consumers in selecting their desired service more easily based on their preferences.
In context of the skyline query field, Papadias et al.[9] introduced a progressive algorithm which relies on Branch and Bound Skyline (BBS) based on a nearest-neighbor search method. On a given set of points, this model computes the skyline points based on their distances to a query point in an ascending order. In this work they first indexed the data by applying an R-tree technique to reduce the computation cost by decreasing the number of pair-wise comparisons. Then they computed the dominance relationship between each two services. They argued that in their framework any pre-computation functions would not be required. BBS is widely used in multi-criteria optimization problem.
To extend the skyline query model to relational databases, [10],[11] presented a new algorithm called the Sort Filter Skyline (SFS) model. They implemented their model based on a sorting technique: all data points are sorted using a monotone scoring function. In other words, SFS sorts all candidates that maximize the scoring function in ascending order. After sorting the data, the services which dominate the other services over most attributes appear in the upper positions, so the number of pair-wise comparisons decreases. Any service with the best score over the monotone function will appear in the skyline list. This method is used extensively and is a fundamental structure for methodologies invented later. It is considered a baseline for comparison purposes in many works.
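The following is a minimal sketch of the dominance test and the SFS idea just described, assuming all QoS values are normalized so that larger is better; it is not the authors' implementation.
```python
def dominates(a, b):
    # a dominates b if it is at least as good on every attribute and strictly better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def sort_filter_skyline(services):
    # Pre-sort by a monotone scoring function (here: the attribute sum), so that
    # likely dominators are examined first and fewer pairwise checks remain.
    ordered = sorted(services, key=lambda s: sum(services[s]), reverse=True)
    skyline = []
    for s in ordered:
        if not any(dominates(services[k], services[s]) for k in skyline):
            skyline.append(s)
    return skyline

print(sort_filter_skyline({"S1": [0.9, 0.7], "S2": [0.8, 0.8], "S3": [0.5, 0.6]}))
```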
### 3.1 Limitations
Most of the reviewed models are reasonable; however, they suffer from some deficiencies:
1. Data indexing and sorting techniques are used in most of the reviewed models, which generally take longer processing time.
2. They mostly either ignore the user’s requirements or require users to compute the importance degree of each parameter, which places more load on the users.
3. Only a limited number and a few types of attributes (mostly numeric) are considered, while in reality there are various types of data.
In the proposed work, the above-mentioned issues are addressed by developing a simple but effective method which considers user requirements as an important factor while ranking Web services.
4. PROPOSED WORKS FOR RANKING WEB SERVICES
A Web service selection and discovery system is essential to provide clients with proper results according to their requirements. It is impossible to fulfill this task without considering the ranking relation between thousands of available candidates with similar functionalities. The ranking process is a fundamental step in a Web service selection system, as it integrates the results gathered from the previous stages (functional and non-functional matching) and presents them to the requestors. This work focuses on the ranking process by considering the user's QoS requirements. The skyline algorithm is used as the baseline for comparison because it is a well-accepted algorithm in the database area for multi-criteria selection problems and has recently been used in the Web service selection area due to the accuracy of the generated results.
However, efficiency is one of the big concerns with this algorithm. The proposed work tests whether a much simpler algorithm with higher efficiency can achieve a similar level of accuracy. The simple algorithms chosen in this work are a vector-based (distance-based) algorithm and a rank aggregation algorithm (Borda Fuse). Since both of them (and also the skyline algorithm) do not consider the actual user requirements at all, this work proposes to enhance those algorithms by taking the user's QoS requirements into account. The goal of the proposed work is to provide a simple and effective method for generating a ranked list of desired Web services while considering the user's requirements. The QoS data considered in this work includes different types of data such as interval, numeric and Boolean.
4.1 Proposed Enhanced Borda Fuse Algorithm Considering User's Requirements
Rank aggregation methods are used for ranking data. One of the efficient methods in this context is Borda Fuse[5]. This model was proposed to solve voting problems in different areas. In the context of Web service discovery, a service which appears in the highest positions in most ranking lists receives a higher ranking score. In this model, all services are first ranked in different lists in terms of different QoS attributes. Each service is assigned a score based on its positional value in each individual ranked list. Then the final ranked list is generated by computing the summation of all scores obtained from all ranked lists. The Borda Fuse algorithm suffers from an important deficiency that may affect the accuracy of the results: the user's actual request, which is an important factor in selection systems, is ignored. For different requirements from different users, the output of the algorithm is always the same.
To overcome this issue, this work proposes an improved algorithm to cover user's requirements in the ranking process.
The final ranking score in the Borda Fuse method is calculated by adding the positional value of each service in each individual ranked list. The user’s requirements are not considered in the ranking process, which returns the same result for different requirements. To involve the user's requirements in the algorithm, the enhanced method treats the query attributes as a sample service $S_q$ and adds it to the list of offered services. As a result, new ranked lists including $S_q$ are generated, as indicated in Table 1. It is noticed that the position order of services $S_3$ and $S_5$ changes in the new ranked list. Service $S_5$ does not meet the requirements for the last 2 attributes.
Scores are assigned to services depending on their position in each ranked list. Negative scores are assigned to those services which appear in a ranked list after $S_q$. The negative score for each service is computed according to the position of the service in the new ranked list. Let $n$ be the number of services, $S_i\_position$ the positional value of service $S_i$ in each ranked list, and $S_q\_position$ the positional value of $S_q$ in the new list that takes the user’s requirements into account. A sketch of this scoring scheme is given below.
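A minimal sketch of this scoring scheme, under the assumption that services ranked above $S_q$ keep their usual positional (Borda) score while services ranked below it receive a negative score proportional to how far they fall behind $S_q$; the paper states the rule only in prose, so the exact formula below is illustrative.
```python
def enhanced_borda(ranked_lists, query="S_q"):
    # One ranked list per QoS attribute; each list includes the sample service S_q.
    scores = {}
    for ranking in ranked_lists:
        n = len(ranking)
        q_pos = ranking.index(query)
        for pos, svc in enumerate(ranking):
            if svc == query:
                continue
            if pos < q_pos:
                scores[svc] = scores.get(svc, 0.0) + (n - pos)      # meets the requirement
            else:
                scores[svc] = scores.get(svc, 0.0) - (pos - q_pos)  # falls below the request
    return scores

print(enhanced_borda([["S1", "S_q", "S3", "S5"], ["S3", "S1", "S_q", "S5"]]))
```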
4.2 Proposed Enhanced Vector-Based Ranking Algorithm Considering User's Requirements
In the proposed work, the values of the $n$ QoS attributes of a service $S$ are modeled as a vector \( Q_s=(q_{s1},q_{s2},...,q_{sn}) \), and the values of the QoS requirements requested by a consumer as \( Q_r=(q_{r1},q_{r2},...,q_{rn}) \). The consumer’s preference values for the QoS attributes are modeled as \( P_r=(p_{r1},p_{r2},...,p_{rn}) \), where each \( p_{ri} \) ranges between 1 and n. If the consumer has not specified a preference, n is used as the preference value. The weight values of the preferences are calculated using the formula given below.
\[
w_i = \frac{p'_i}{\sum_{j=1}^{n} p_j}
\]
where \( w_i \) is the weight of an attribute, \( p_i \) is the preference value of each attribute, \( n \) is the number of QoS attributes and \( p_{r_{max}} \) is the maximum value in vector \( P_r \). The score of a service \( S \) is computed based on its distance to the optimal values of the QoS attributes using the Euclidean formula. In order to consider the real query values, the distance between the QoS of service \( S \) and the real constraints specified in the query is also calculated. The final distance score, which indicates the distance between a service and both the optimal and the required values, is calculated as explained below.
Based on the Euclidean distance method, the score of each service \( S \) with respect to the optimal QoS \( Q_o \) is calculated using the formula given below.
\[
Score_{opt} = \sqrt{\sum_{j=1}^{n} w_j \,(q'_{sj} - q'_{oj})^2}
\]
where \( Q_o \) holds the optimal values of the QoS attributes, \( w_j \) is the weight of attribute \( j \), \( q'_{sj} \) is the normalized value of attribute \( j \) of service \( S \), and \( q'_{oj} \) is the normalized optimal value of attribute \( j \).
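A minimal sketch, assuming already-normalized attribute values in [0, 1], of this weighted distance to the optimal values together with the distance to the requested values defined in Eq. (3) below; combining the two with equal weights follows the earlier statement that both factors are weighted equally, and is otherwise an assumption.
```python
from math import sqrt

def score_service(q_s, q_opt, q_req, weights, prefs):
    # Weighted Euclidean distance to the optimal QoS values (this section).
    score_opt = sqrt(sum(w * (s - o) ** 2 for w, s, o in zip(weights, q_s, q_opt)))
    # Distance to the values actually requested in the query (Eq. 3 below).
    dis = sqrt(sum(p * (s - r) ** 2 for p, s, r in zip(prefs, q_s, q_req)))
    return 0.5 * score_opt + 0.5 * dis   # smaller means closer to both targets

# One service with three normalized attributes, equal attribute weights,
# and every attribute marked as specified in the query.
print(score_service([0.8, 0.6, 0.9], [1.0, 1.0, 1.0], [0.7, 0.5, 0.8],
                    [1/3, 1/3, 1/3], [1, 1, 1]))
```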
Based on the tendency of the QoS parameters, the normalized values for numeric and Boolean attributes are computed using the method discussed in [13], and the normalized values for the interval (range) type are calculated using the method discussed in [14].
To calculate the distance between each service \( S_i \) and the requested QoS \( Q_r \), a vector \( P_r=[p_{r1},p_{r2},...,p_{rn}] \) is considered, where \( p_{ri} \in [1..n] \), and the vector \( P_i \) includes 0 or 1 based on whether
The distance between each service and the real required QoS is calculated as:
\[ Dis = \lVert Q_s - Q_r \rVert = \sqrt{\sum_{i=1}^{n} p_i \,(q'_{si} - q'_{ri})^2} \quad (3) \]
where \( Dis \) is the distance between a service and the real required QoS, \( p_i \) is the preference value of attribute \( i \), \( q'_{si} \) is the normalized value of attribute \( i \) of the service and \( q'_{ri} \) is the normalized value of attribute \( i \) specified in the query.
5. EXPERIMENTAL RESULTS
The experiments are conducted using various datasets with different numbers of QoS attributes. The observed results are shown in the following tables and figures. The abbreviations used for the algorithms are: DS for the original distance-based algorithm, DS_I for the improved distance-based algorithm, BF for the original Borda Fuse, BF_Q for the Borda Fuse algorithm with consideration of the user's query, and SFS for the Sort Filter Skyline algorithm. Table 1 and Figure 1 show the execution time of the different algorithms on various numbers of Web services for the numeric attribute type, and Table 2 and Figure 2 show the execution time of the different algorithms on various numbers of Web services for the interval attribute type.
6. CONCLUSION AND FUTURE WORKS
To measure the efficiency of the improved algorithms, the average execution time of each algorithm is calculated using various datasets with different numbers of QoS attributes. By increasing the size of the datasets and the number of QoS attributes, all algorithms are compared in terms of processing time. According to the observations, SFS is the fastest algorithm for a small-sized dataset with one QoS attribute. As the size of the dataset and the number of attributes increase, SFS runs slower. On the contrary, DS and DS_I are the fastest algorithms on large-sized datasets. BF and BF_Q run faster than SFS on large-sized datasets with larger numbers of QoS attributes. Different experiments were performed with different types of attributes such as numeric, Boolean and data interval. It is noticed that SFS has poor performance on attributes of the data-interval type. DS and DS_I, with a slight difference in execution time, have the best performance for all data types when the size of the dataset is large.
As future work, the proposed framework may be improved to support top-K query processing effectively so that the processing time could be much lower, and users would be able to select their desired services easily.
REFERENCES:
### Table 1: Execution Time Of Different Algorithms On Various Number Of Web Services For Numeric Type Attribute
<table>
<thead>
<tr>
<th>Type of attribute</th>
<th>No. of services</th>
<th>No. of attributes</th>
<th>DS</th>
<th>DS_I</th>
<th>BF</th>
<th>BF_Q</th>
<th>SFS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Numeric</td>
<td>50</td>
<td>1</td>
<td>203</td>
<td>208</td>
<td>212</td>
<td>218</td>
<td>189</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>223</td>
<td>238</td>
<td>239</td>
<td>251</td>
<td>203</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>223</td>
<td>238</td>
<td>292</td>
<td>294</td>
<td>218</td>
</tr>
<tr>
<td></td>
<td></td>
<td>7</td>
<td>250</td>
<td>281</td>
<td>328</td>
<td>332</td>
<td>219</td>
</tr>
<tr>
<td></td>
<td>100</td>
<td>1</td>
<td>216</td>
<td>218</td>
<td>223</td>
<td>226</td>
<td>193</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>230</td>
<td>239</td>
<td>242</td>
<td>253</td>
<td>218</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>266</td>
<td>281</td>
<td>297</td>
<td>298</td>
<td>219</td>
</tr>
<tr>
<td></td>
<td></td>
<td>7</td>
<td>281</td>
<td>283</td>
<td>344</td>
<td>344</td>
<td>234</td>
</tr>
<tr>
<td></td>
<td>200</td>
<td>1</td>
<td>232</td>
<td>236</td>
<td>242</td>
<td>246</td>
<td>203</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>242</td>
<td>248</td>
<td>256</td>
<td>258</td>
<td>232</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>268</td>
<td>283</td>
<td>298</td>
<td>303</td>
<td>263</td>
</tr>
<tr>
<td></td>
<td></td>
<td>7</td>
<td>368</td>
<td>375</td>
<td>383</td>
<td>385</td>
<td>385</td>
</tr>
<tr>
<td></td>
<td>300</td>
<td>1</td>
<td>243</td>
<td>247</td>
<td>249</td>
<td>249</td>
<td>248</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>248</td>
<td>253</td>
<td>259</td>
<td>263</td>
<td>250</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>313</td>
<td>329</td>
<td>391</td>
<td>394</td>
<td>263</td>
</tr>
<tr>
<td></td>
<td></td>
<td>7</td>
<td>344</td>
<td>346</td>
<td>384</td>
<td>386</td>
<td>392</td>
</tr>
<tr>
<td></td>
<td>500</td>
<td>1</td>
<td>250</td>
<td>266</td>
<td>268</td>
<td>272</td>
<td>248</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>252</td>
<td>257</td>
<td>279</td>
<td>279</td>
<td>281</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>266</td>
<td>281</td>
<td>283</td>
<td>284</td>
<td>293</td>
</tr>
<tr>
<td></td>
<td></td>
<td>7</td>
<td>403</td>
<td>418</td>
<td>422</td>
<td>454</td>
<td>458</td>
</tr>
</tbody>
</table>
Figure 1: Execution Time Of Different Algorithms On Various Number Of Web Services For Numeric Type Attribute
Table 2: Execution Time Of Different Algorithms On Various Number Of Web Services For Interval Type Attribute
<table>
<thead>
<tr>
<th>Type of attribute</th>
<th>No. of services</th>
<th>No. of attributes</th>
<th>DS</th>
<th>DS_I</th>
<th>BF</th>
<th>BF_Q</th>
<th>SFS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Data Interval</td>
<td>50</td>
<td>1</td>
<td>141</td>
<td>143</td>
<td>156</td>
<td>158</td>
<td>144</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>143</td>
<td>145</td>
<td>156</td>
<td>162</td>
<td>132</td>
</tr>
<tr>
<td></td>
<td>100</td>
<td>1</td>
<td>160</td>
<td>163</td>
<td>168</td>
<td>171</td>
<td>163</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>158</td>
<td>163</td>
<td>168</td>
<td>172</td>
<td>153</td>
</tr>
<tr>
<td></td>
<td>200</td>
<td>1</td>
<td>188</td>
<td>189</td>
<td>203</td>
<td>206</td>
<td>190</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>163</td>
<td>165</td>
<td>173</td>
<td>174</td>
<td>168</td>
</tr>
<tr>
<td></td>
<td>500</td>
<td>1</td>
<td>193</td>
<td>208</td>
<td>212</td>
<td>218</td>
<td>215</td>
</tr>
<tr>
<td></td>
<td></td>
<td>5</td>
<td>203</td>
<td>242</td>
<td>253</td>
<td>256</td>
<td>258</td>
</tr>
</tbody>
</table>
Figure 2: Execution Time Of Different Algorithms On Various Number Of Web Services For Interval Type attribute
LICENSE AGREEMENT
(AMD CORE MATH LIBRARY)
IMPORTANT—READ CAREFULLY: Do not install, copy or use the enclosed Materials (defined below) until carefully reading and agreeing to the following terms and conditions. This is a legal agreement (“Agreement”) between you (either an individual or an entity) (“You”) and Advanced Micro Devices, Inc. (“AMD”).
If You do not agree to the terms of this Agreement, do not install, copy or use the Materials or any portion thereof. By installing, copying or using the Materials provided herewith or that is made available by AMD to download from any media, You agree to all of the terms of this Agreement. Note that these Materials are AMD Confidential Information and may not be shared with any third party except as expressly provided below.
1. Definitions.
In addition to those definitions set forth elsewhere in this Agreement, the following terms have the meanings specified below:
(a) “Distributed Software” means software developed or modified by You that includes Libraries and/or derivative works of the Sample Source or Documentation.
(b) “Documentation” means associated install scripts and online or electronic documentation included as part of the deliverables in the Materials, or other related materials or any portion thereof.
(c) “Free Software License” means any software license that requires as a condition of use, modification, adaptation or distribution of such licensed software that other software derived from, distributed with or incorporated into at the source code level be disclosed or distributed in Source Code form. By way of example, Free Software License includes, but is in no way limited to any of the following licenses or distribution models, or licenses or distribution models similar to any of the following: (i) GNU’s General Public License (GPL) or Lesser/Library GPL (LGPL), (ii) The Artistic License (e.g., PERL), (iii) the Mozilla Public License, (iv) the Netscape Public License, (v) the Sun Community Source License (SCSL), and (vi) the Sun Industry Standards Source License (SISSL).
(d) “Intellectual Property Rights” means any rights under any patents, trademarks, copyrights, mask works, trade secret information, intellectual property, license or similar materials.
(e) “Libraries” means libraries in Object Code included as part of the deliverables in the Materials that may be statically linked into Your software for the Licensed Purpose.
(f) “Licensed Purpose” means: (i) use of the Materials to create Distributed Software; and (ii) distributing and sublicensing to end users the Distributed Software and Runtimes for use with Licensee’s Products.
(g) “Object Code” means machine readable computer programming code files, which is not in a human readable form and which does not include debug symbols similar in detail to Source Code.
(h) “Runtimes” means programs or dynamically linked libraries in Object Code which are included as part of the deliverables in the Materials which are required by Your software and are distributed by You with Your software for the Licensed Purpose. Runtimes may be distributed in the same format as provided in the Materials or may be repackaged in Your installer framework.
(i) “Sample Source” means header files and sample code in Source Code form which are included as part of the deliverables in the Materials.
(j) “Tools” means any tools or utilities in the Materials, which are used to generate or modify portions of the Distributed Software, but which are not distributed.
(k) “Source Code” means human readable form computer programming code and related system level documentation, including all comments, symbols and any procedural code such as job control language.
2. **Contents.** This Agreement sets forth the terms and conditions under which You agree to license the Materials solely for the Licensed Purpose. If You accept these terms and conditions, AMD will license You with the Materials. Schedule A lists the Materials as: (a) Documentation; (b) Sample Source; (c) Tools; (d) Libraries; and (e) Runtimes. All of these items are collectively referred to herein as “Materials”. If not marked or marked “None” in Schedule A, then such Materials are not licensed by this Agreement.
3. **License.** Subject to the terms and conditions of this Agreement, AMD hereby grants You a non-exclusive, royalty-free, revocable, non-transferable, non-assignable limited copyright license to:
(a) install, use and reproduce the Materials internally at Your site(s) solely for the purpose of internal testing and evaluation;
(b) modify the Sample Source or Documentation to create Distributed Software;
(c) include the Libraries when building Distributed Software; and
(d) distribute and sublicense to end users in Object Code form only the Distributed Software and Runtimes for the Licensed Purpose. Your right to distribute the Distributed Software and Runtimes to end users includes the right to distribute through distributors including multiple layers of distributors.
4. **Requirements.** You will sublicense the end users to use Distributed Software, Libraries and Runtimes in accordance with terms and conditions that are substantially similar to the terms and conditions contained in Schedule B hereof. You may include these terms in Your standard form agreement. You must reproduce all AMD trademark and/or copyright notices on any copy of the Distributed Software and Runtimes that You distribute.
5. **Restrictions.** Restrictions regarding Your use of the Materials are as follows. You may not:
a) distribute, publish or sublicense the Documentation, the Sample Source, the Libraries (except when built into the Distributed Software), the Tools or any Source Code in the Materials to anyone;
b) reproduce copies of the Materials other than what is reasonably required for the Licensed Purpose;
c) decompile, reverse engineer, disassemble or otherwise reduce the Object Code contained in the Materials to a human-perceivable form;
d) alter any copyright, trademark or patent notice(s) in the Materials;
e) use AMD’s trademarks in Your software or product names or in a way that suggests the Distributed Software comes from AMD or is endorsed by AMD;
f) use AMD’s trademarks in Your software or product names or in a way that suggests that any of the Materials are endorsed by AMD;
g) include contents in malicious, deceptive or unlawful programs;
h) modify and/or distribute any of the Materials so that any part of thereof becomes subject to a Free Software License; or
i) rent, lease or lend the Materials or transfer the Materials to any third party except as expressly provided herein.
You also agree that the Materials are licensed, not sold by AMD.
Except as expressly provided in Section 3, AMD does not grant, by implication, estoppel or otherwise any other Intellectual Property Rights. You agree that all licenses granted herein are conditioned upon the use of the Materials for the Licensed Purpose. You agree that the Materials and all partial versions thereto, including without limitation all modifications, enhancements, updates, bug fixes, inventions, know-how, as well as all Intellectual Property Rights and all other information relating thereto are and will remain the sole and exclusive property of AMD. You shall have no right, title or interest therein except for the limited licenses set forth in Section 3 of this Agreement. AMD agrees that the foregoing shall not grant AMD any right, title or interest in Your Distributed Software that is not provided as part of the Materials, and all Intellectual Property Rights therein are and will remain Your sole and exclusive property. Nothing in this Agreement shall be construed to limit AMD’s right to independently develop or acquire products similar to those of Your Distributed Software including any Intellectual Property Rights therein.
The Materials may include third party technologies (e.g. third party libraries) for which You must obtain licenses from parties other than AMD. You agree that AMD has not obtained or conveyed to You—and that You shall be responsible for obtaining—Intellectual Property Rights to use and/or distribute the applicable, underlying Intellectual Property Rights related to the third party technologies. These third party technologies are not part of the Materials and are not licensed under this Agreement.
6. No Support. AMD is under no obligation to provide any kind of technical, development or end-user support for the Materials.
7. Updates. AMD may provide updates from time to time. If AMD provides updates, these updates are licensed under the terms of this Agreement.
8. Feedback. You have no obligation to give AMD any suggestions, comments or other feedback (“Feedback”) relating to the Materials. However, AMD may use and include any Feedback that You voluntarily provide to improve the Materials or other related AMD products and technologies. Accordingly, if You provide Feedback, You grant AMD and its affiliates and subsidiaries a worldwide, non-exclusive, irrevocable, royalty-free, perpetual license to, directly or indirectly, use, reproduce, license, sublicense, distribute, make, have made, sell and otherwise commercialize the Feedback in the Materials or other AMD technologies. You further agree not to provide any Feedback that (a) You know is subject to any patent, copyright or other intellectual property claim or right of any third party; (b) is subject to a Free Software License; or (c) is subject to license terms which seek to require any products incorporating or derived from such Feedback, or other AMD intellectual property, to be licensed to or otherwise shared with any third party.
9. Confidentiality. You shall refrain from disclosing any Confidential Information to third parties and will take reasonable security precautions, at least as great as the precautions it takes to protect its own confidential information, but no less than reasonable care, to keep confidential the Confidential Information. For the purposes hereof, “Confidential Information” means all information disclosed between the parties in connection with this Agreement, including the Materials and any other business or technical information provided to You by AMD. You will only disclose the Confidential Information to Your employees or on-site subcontractors (a) who have a need to know in furtherance of the Licensed Purpose; and (b) who have signed a confidentiality agreement with You at least as restrictive as this Agreement. If at any future time AMD, directly or indirectly, discloses any other related technology or information to You, including without limitation any updated versions of the Materials, such disclosure will also be deemed to be confidential, part of the Materials and will be subject to the provisions of this Agreement. You may disclose Confidential Information in accordance with a judicial or other governmental order, provided that You give AMD reasonable notice prior to such disclosure to allow AMD a reasonable opportunity to seek a protective order or equivalent.
10. Disclaimer of Warranty. YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT USE OF THE MATERIALS ARE AT YOUR SOLE RISK. THE MATERIALS ARE PROVIDED "AS IS" AND WITHOUT WARRANTY OF ANY KIND AND AMD EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NON-INFRINGEMENT, OR THOSE ARISING FROM CUSTOM OF TRADE OR COURSE OF USAGE. AMD DOES NOT WARRANT THAT THE MATERIALS WILL MEET YOUR REQUIREMENTS, OR THAT THE OPERATION OF THE MATERIALS WILL BE UNINTERRUPTED OR ERROR-FREE. THE ENTIRE RISK ASSOCIATED WITH THE USE OF THE MATERIALS IS ASSUMED BY YOU. FURTHERMORE, AMD DOES NOT WARRANT OR MAKE ANY REPRESENTATIONS REGARDING THE USE OR THE RESULTS OF THE USE OF THE MATERIALS IN TERMS OF THEIR CORRECTNESS, ACCURACY, RELIABILITY, CURRENTNESS, OR OTHERWISE. SHOULD THE CONTENTS OF THE MATERIALS PROVE DEFECTIVE, YOU ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. Some jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to You.
11. Limitation of Liability and Indemnification. UNDER NO CIRCUMSTANCES INCLUDING NEGLIGENCE, SHALL AMD, OR ITS DIRECTORS, OFFICERS, EMPLOYEES OR AGENTS (“AUTHORIZED REPRESENTATIVE”), BE LIABLE TO YOU FOR ANY PUNITIVE, DIRECT, INCIDENTAL, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES (INCLUDING DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, AND THE LIKE) ARISING OUT OF THE USE, MISUSE OR INABILITY TO USE THE MATERIALS, BREACH OR DEFAULT, INCLUDING THOSE ARISING FROM INFRINGEMENT OR ALLEGED INFRINGEMENT OF ANY PATENT, TRADEMARK, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT, BY AMD, EVEN IF AMD AND/OR ITS AUTHORIZED REPRESENTATIVES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. AMD will not be liable for loss of, or damage to, Your equipment, records or data or any damages claimed by You based on any third party claim. In no event shall AMD’s total liability to You for all damages, losses, and causes of action (whether in contract, tort (including negligence) or otherwise) exceed the amount of $10 USD. You agree to defend, indemnify and hold harmless AMD and any of its Authorized Representatives from and against any and all loss, damage, liability and other expenses (including reasonable attorneys’ fees), resulting from Your provision of any product or technology developed therefrom including, without limitation, the Distributed Software, or Your breach of any term or condition of this Agreement.
12. Termination. This Agreement is effective until terminated. You can terminate this Agreement at any time by destroying the Materials, and all copies You have made. This Agreement will terminate immediately without notice from AMD if You fail to comply with any provision of this Agreement. Upon termination You must destroy the Materials and all copies You have made. The termination of this Agreement shall: (i) immediately result in the termination of all sublicenses previously granted by You to third party distributors and contract manufacturers under Section 3; and (ii) have no effect on any sublicenses previously granted by You to end users under Subsection 3, which sublicenses shall survive in accordance with their terms.
13. Government End Users. If You are acquiring the Materials on behalf of any unit or agency of the United States Government, the following provisions apply. The Government agrees the Materials were developed at private expense and are provided with "RESTRICTED RIGHTS". Use, duplication, or disclosure by the Government is subject to restrictions as set forth in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c) (1) (ii) (Oct 1988), FAR 12.212(a) (1995), FAR 52.227-19, (June 1987) or FAR 52.227-14(ALT III) (June 1987), as amended from time to time. In the event that this Agreement, or any part thereof, is deemed inconsistent with the minimum rights identified in the Restricted Rights provisions, the minimum rights shall prevail.
14. Export Restrictions. You shall adhere to all U.S. and other applicable export laws, including but not limited to the U.S. Export Administration Regulations (“EAR”), currently found at 15 C.F.R. Sections 730 through 744. Further, pursuant to 15 C.F.R. Section 740.6, You hereby certify that, except pursuant to a license granted by the United States Department of Commerce Bureau of Industry and Security or as otherwise permitted pursuant to a License Exception under the EAR, You will not (1) export, re-export or release to a national of a country in Country Groups D:1 or E:2 any restricted technology, software, or source code it receives from AMD, or (2) export to Country Groups D:1 or E:2 the direct product of such technology or software, if such foreign produced direct product is subject to national security controls as identified on the Commerce Control List (currently found in Supplement 1 to Part 774 of EAR). For the most current Country Group listings, or for additional information about the EAR or Your obligations under those regulations, please refer to the U.S. Bureau of Industry and Security’s website at http://www.bis.doc.gov/. These export requirements shall survive any expiration or termination of this Agreement.
15. **Controlling Law and Severability.** This Agreement will be governed by and construed under the laws of the State of California without reference to its conflicts of law principles. The rights and obligations under this Agreement shall not be governed by the United Nations Convention on Contracts for the International Sale of Goods, the application of which is expressly excluded. Each party hereto submits to the jurisdiction of the state and federal courts of Santa Clara County and the Northern District of California for the purpose of all legal proceedings arising out of or relating to this Agreement or the subject matter hereof. Each party waives any objection which it may have to contest such forum.
16. **Surviving Obligations.** Any term or condition of this Agreement which by its nature extends beyond the expiration or termination of this Agreement, including, without limitation, Sections 1, 2 and 4-17 (inclusive) shall survive any termination of this Agreement and shall bind the parties and their legal representatives, successors, heirs and assigns.
17. **Complete Agreement.** This Agreement constitutes the entire agreement between You and AMD with respect to the Materials, and supersedes all prior understandings or agreements, whether written or oral. No amendment to or modification of this Agreement will be binding unless in writing and signed by a duly authorized representative of AMD. The rights and licenses granted by AMD herein are personal to You, and may not be assigned, sublicensed, or otherwise transferred without the prior written consent of AMD. Any attempted assignment or delegation without such consent will be null and void, and shall automatically terminate all rights You have under this Agreement. No waiver, amendment or modification of any provision of this Agreement will be effective unless in writing and signed by the party against whom enforcement is sought.
SCHEDULE A
As defined in the Definitions Sections above -
Documentation: All files in Doc directory and the ReleaseNotes files
Libraries: libacml.a libacml_mp.a libacml.lib libacml_dll.lib libacml_mp.lib libacml_mp_dll.lib libacml_mv.a libacml_mv.lib libacml_mv_dll.lib
Runtimes: libacml.so libacml_mp.so libacml_dll.dll libacml_mp_dll.dll libacml_mv.so libacml_mv_dll.dll
Sample Source: All files in examples directory and its subdirectories, and files in the include directory
Tools: All files in util directory
SCHEDULE B
END USER LICENSE AGREEMENT
PLEASE READ THIS LICENSE CAREFULLY BEFORE USING THE SOFTWARE. BY USING THE SOFTWARE, YOU ARE AGREEING TO BE BOUND BY THE TERMS OF THIS LICENSE. IF YOU DO NOT AGREE TO THESE TERMS AND CONDITIONS, DO NOT USE THE SOFTWARE.
1. License. The software accompanying this License (hereinafter "Software"), regardless of the media on which it is distributed, are licensed to you by Advanced Micro Devices, Inc. ("AMD"). You own the medium on which the Software is recorded, but AMD and AMD's Licensors (referred to collectively as "AMD") retain title to the Software and related documentation. You may:
a) use the Software; and
b) make a reasonable number of copies necessary for the purposes of this License. You must reproduce on such copy AMD's copyright notice and any other proprietary legends that were on the original copy of the Software.
2. Restrictions. The Software contains copyrighted and patented material, trade secrets and other proprietary material. In order to protect them, and except as permitted by applicable legislation, you may not:
a) decompile, reverse engineer, disassemble or otherwise reduce the Software to a human-perceivable form;
b) modify, network, rent, lend, loan, distribute or create derivative works based upon the Software in whole or in part; or
c) transfer or sublicense the Software to another end user or otherwise transfer the Software except as permitted by this License.
3. Termination. This License is effective until terminated. You may terminate this License at any time by destroying the Software, related documentation and all copies thereof. This License will terminate immediately without notice from AMD if you fail to comply with any provision of this License. Upon termination you must destroy the Software, related documentation and all copies thereof.
4. Government End Users. If you are acquiring the Software on behalf of any unit or agency of the United States Government, the following provisions apply. The Government agrees the Software and documentation were developed at private expense and are provided with "RESTRICTED RIGHTS". Use, duplication, or disclosure by the Government is subject to restrictions as set forth in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(i)(1)(ii) (Oct 1988), FAR 12.212(a)(1995), FAR 52.227-19, (June 1987) or FAR 52.227-14(ALT III) (June 1987), as amended from time to time. In the event that this License, or any part thereof, is deemed inconsistent with the minimum rights identified in the Restricted Rights provisions, the minimum rights shall prevail.
5. No Other License. No rights or licenses are granted by AMD under this License, expressly or by implication, with respect to any proprietary information or patent, copyright, trade secret or other intellectual property right owned or controlled by AMD, except as expressly provided in this License.
6. Additional Licenses. DISTRIBUTION OR USE OF THE SOFTWARE WITH AN OPERATING SYSTEM MAY REQUIRE ADDITIONAL LICENSES FROM THE OPERATING SYSTEM VENDOR. Additional third party licenses may also be required and you agree that you shall be solely responsible for obtaining such license rights.
7. Disclaimer of Warranty on Software. You expressly acknowledge and agree that use of the Software is at your sole risk. The Software and related documentation are provided "AS IS" and without warranty of any kind and AMD EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, ACCURACY, CONDITION, OWNERSHIP, FITNESS FOR A PARTICULAR PURPOSE, AND/OR OF NON-INFRINGEMENT OF THIRD PARTY INTELLECTUAL PROPERTY RIGHTS, AND THOSE ARISING FROM CUSTOM OR TRADE OR COURSE OF USAGE. AMD DOES NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE SOFTWARE WILL MEET YOUR REQUIREMENTS, OR THAT THE OPERATION OF THE SOFTWARE WILL BE UNINTERRUPTED OR ERROR-FREE, OR THAT DEFECTS IN THE SOFTWARE WILL BE CORRECTED. THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE SOFTWARE IS ASSUMED BY YOU. FURTHERMORE, AMD DOES NOT WARRANT OR MAKE ANY REPRESENTATIONS REGARDING THE USE OR THE RESULTS OF THE USE OF THE SOFTWARE OR RELATED DOCUMENTATION IN TERMS OF THEIR CORRECTNESS, ACCURACY, RELIABILITY, CURRENTNESS, OR OTHERWISE, NO ORAL OR WRITTEN INFORMATION OR ADVICE GIVEN BY AMD OR AMD'S AUTHORIZED REPRESENTATIVE SHALL CREATE A WARRANTY OR IN ANY WAY INCREASE THE SCOPE OF THIS WARRANTY. SHOULD THE SOFTWARE PROVE DEFECTIVE, YOU (AND NOT AMD OR AMD'S AUTHORIZED REPRESENTATIVE) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. THE SOFTWARE IS NOT INTENDED FOR USE IN MEDICAL, LIFE SAVING OR LIFE SUSTAINING APPLICATIONS. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY NOT APPLY TO YOU.
8. Limitation of Liability. UNDER NO CIRCUMSTANCES INCLUDING NEGLIGENCE, SHALL AMD, OR ITS DIRECTORS, OFFICERS, EMPLOYEES OR AGENTS ('AUTHORIZED REPRESENTATIVES'), BE LIABLE TO YOU FOR ANY PUNITIVE, EXEMPLARY, DIRECT, INCIDENTAL, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES INCLUDING DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, AND THE LIKE ARISING OUT OF THE USE, MISUSE OR INABILITY TO USE THE SOFTWARE OR RELATED DOCUMENTATION, BREACH OR DEFAULT, INCLUDING THOSE ARISING FROM INFRINGEMENT OR ALLEGED INFRINGEMENT OF ANY PATENT, TRADEMARK, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT, BY AMD, EVEN IF AMD OR AMD'S AUTHORIZED REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU. AMD will not be liable for:
1) loss of, or damage to, your records or data; or
2) any damages claimed by you based on any third party claim. In no event shall AMD's total liability to you for all damages, losses, and causes of action (whether in contract, tort (including negligence) or otherwise) exceed the amount paid by you for the Software.
9. Export Restrictions. You shall adhere to all U.S. and other applicable export laws, including but not limited to the U.S. Export Administration Regulations (EAR), currently found at 15 C.F.R. Sections 730 through 744. Further, pursuant to 15 C.F.R Section 740.6, You hereby certify that, except pursuant to a license granted by the United States Department of Commerce Bureau of Industry and Security or as otherwise permitted pursuant to a License Exception under the U.S. Export Administration Regulations ("EAR"), You will not (1) export, re-export or release to a national of a country in Country Groups D:1 or E:2 any restricted technology, software, or source code it receives from AMD, or (2) export to Country Groups D:1 or E:2 the direct product of such technology or software, if such foreign produced direct product is subject to national security controls as identified on the Commerce Control List (currently found in Supplement 1 to Part 774 of EAR). For the most current Country Group listings, or for additional information about the EAR or Recipient's obligations under those regulations, please refer to the U.S. Bureau of Industry and Security's website at http://www.bis.doc.gov/. These export requirements shall survive any expiration or termination of this Agreement.
10. Controlling Law and Severability. This Agreement will be governed by and construed under the laws of the State of California without reference to its conflicts of law principles. The rights and obligations under this Agreement shall not be governed by the United Nations Convention on Contracts for the International Sale of Goods, the application of which is expressly excluded. Each party hereto submits to the jurisdiction of the state and federal courts of Santa Clara County and the Northern District of California for the purpose of all legal proceedings arising out of or relating to this Agreement or the subject matter hereof. Each party waives any objection which it may have to contest such forum.
11. Complete Agreement. This License constitutes the entire agreement between the parties with respect to the use of the Software and the related documentation, and supersedes all prior or contemporaneous understandings or agreements, written or oral, regarding such subject matter. No amendment to or modification of this License will be binding unless in writing and signed by a duly authorized representative of AMD.
This specification defines semantics for using the XMPP publish-subscribe protocol to broadcast state change events associated with an instant messaging and presence account. This profile of pubsub therefore enables a standard XMPP user account to function as a virtual pubsub service, easing the discovery of syndicated data and event notifications associated with such an account.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2024 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
## Contents
1 Introduction
   1.1 Motivation
   1.2 How It Works
2 Concepts and Approach
   2.1 Every Account a Pubsub Service
   2.2 One Publisher Per Node
   2.3 Use Presence
   2.4 Filtered Notifications
   2.5 Smart Defaults
3 Publishing Events
4 Receiving Event Notifications
   4.1 Automatic Subscriptions
   4.2 Notification Filtering
   4.3 Generating Notifications
      4.3.1 Addressing
      4.3.2 Number of Notifications
      4.3.3 When to Generate Notifications
      4.3.4 Sending the Last Published Item
5 Recommended Defaults
6 Determining Support
   6.1 Account Owner Service Discovery
   6.2 Contact Service Discovery
7 Implementation Notes
   7.1 Cancelling Subscriptions
   7.2 One Node Per Namespace
8 Security Considerations
9 IANA Considerations
10 XMPP Registrar Considerations
   10.1 Service Discovery Category/Type
11 XML Schema
12 Acknowledgements
1 Introduction
1.1 Motivation
Personal eventing provides a way for a Jabber/XMPP user to send updates or “events” to other users, who are typically contacts in the user’s roster. An event can be anything that a user wants to make known to other people, such as those described in User Geolocation (XEP-0080) ¹, User Mood (XEP-0107) ², User Activity (XEP-0108) ³, and User Tune (XEP-0118) ⁴. While the XMPP Publish-Subscribe (XEP-0060) ⁵ extension (“pubsub”) can be used to broadcast such events, the full pubsub protocol is often thought of as complicated and therefore has not been widely implemented. ⁶ To make publish-subscribe functionality more accessible (especially to instant messaging and presence applications that conform to XMPP IM ⁷), this document defines a simplified subset of pubsub that can be followed by instant messaging client and server developers to more easily deploy personal eventing services across the Jabber/XMPP network. We label this subset “Personal Eventing Protocol” or PEP.
Note: Any use cases not described herein are described in XEP-0060. Also, this document does not show error flows related to the generic publish-subscribe use cases referenced herein, since they are exhaustively defined in XEP-0060. The reader is referred to XEP-0060 for all relevant protocol details related to the XMPP publish-subscribe extension. This document merely defines a “subset” or “profile” of XMPP publish-subscribe.
1.2 How It Works
This section provides a friendly introduction to personal eventing via pubsub (PEP). Imagine that you are a Shakespearean character named Juliet and that you want to generate events about what music you’re listening to, which anyone may see as long as they are authorized to see your online/offline presence (i.e., a pubsub access model of “presence”). We assume that you have three contacts with the following relationship to you:
1. [email protected], who has no subscription to your presence
2. [email protected], who has a bidirectional subscription to your presence and who is in your “Servants” roster group
⁶Instead, many “extended presence” formats are currently sent using the <presence/> stanza type; unfortunately, this overloads presence, results in unnecessary presence traffic, and does not provide fine-grained control over access. The use of publish-subscribe rather than presence is therefore preferable.
3. [email protected], who has a bidirectional subscription to your presence and who is in your "Friends" roster group
We also assume that your server (capulet.lit) supports PEP and that your client discovered that support when you logged in.
Now you start playing a song on your music playing software. Your client captures that "event" and publishes it to your server:
Listing 1: Publishing an event
```xml
<iq from='[email protected]/balcony' type='set' id='pub1'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='http://jabber.org/protocol/tune'>
      <item>
        <tune xmlns='http://jabber.org/protocol/tune'>
          <artist>Gerald Finzi</artist>
          <length>255</length>
          <source>Music for "Love's Labors Lost" (Suite for small orchestra)</source>
          <title>Introduction (Allegro vigoroso)</title>
          <track>1</track>
        </tune>
      </item>
    </publish>
  </pubsub>
</iq>
```
Note the following about your publish request:
1. It is sent with no 'to' address (see Every Account a Pubsub Service).
2. It specifies a node of "http://jabber.org/protocol/tune" (see One Node per Namespace).
If all goes well (see Publishing Events), everyone who is interested in what you are listening to will receive notification of the event:
Listing 2: Interested parties receive event notifications
```xml
<message from='[email protected]' to='[email protected]/orchard' type='headline' id='tunefoo1'>
  <event xmlns='http://jabber.org/protocol/pubsub#event'>
    <items node='http://jabber.org/protocol/tune'>
      <item>
        <tune xmlns='http://jabber.org/protocol/tune'>
          <artist>Gerald Finzi</artist>
          <length>255</length>
        </tune>
      </item>
    </items>
  </event>
</message>
```
Because PEP services must send notifications to the account owner, you too receive the notification at each of your resources (here "balcony" and "chamber").
Listing 3: Publisher receives event notification
But how do Romeo and the Nurse tell your server that they are interested in knowing what you’re listening to? In generic pubsub they typically need to explicitly subscribe to your “http://jabber.org/protocol/tune” node. But PEP services support two special features:
1. "auto-subscribe" -- because they are subscribed to your presence, they automatically receive your events (see Use Presence).
2. "filtered-notification" -- they can include some special flags in their Entity Capabilities (XEP-0115) information to specify which event types (payloads) they want to receive (see Filtered Notifications).
Listing 4: Romeo sends presence with caps
```
<presence from='[email protected]/orchard'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://www.chatopus.com' ver='zHyEOgxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
That may still be necessary for open access model nodes in PEP if another user does not send you presence, such as [email protected] in our scenario.
Your server knows to send tune information to Romeo because when the server unpacks the value of the 'ver' attribute ("054H4A7280JuT6+IroVYxgCAjZo=") in accordance with XEP-0115, it discovers that Romeo’s client advertises a service discovery feature of "http://jabber.org/protocol/tune+notify", where the "+notify" suffix indicates interest in receiving notifications of the node whose NodeID precedes the suffix (see XEP-0060 § 9.2). The server can verify this support if needed by sending a service discovery request to Romeo’s full JID, where the response would be as follows:
Listing 5: Disco#info result from extension
```
<iq from='[email protected]/orchard'
to='[email protected]'
type='result'
id='disco123'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='client' name='Exodus_0.9.1' type='pc'/>
<feature var='http://jabber.org/protocol/disco#info'/>
<feature var='http://jabber.org/protocol/disco#items'/>
<feature var='http://jabber.org/protocol/geoloc'/>
<feature var='http://jabber.org/protocol/geoloc+notify'/>
<feature var='http://jabber.org/protocol/tune'/>
<feature var='http://jabber.org/protocol/tune+notify'/>
</query>
</iq>
```
Naturally your server doesn’t need to send out a disco#info request every time, since it will quickly create a large cache of 'ver' values.
So that’s the general idea.
2 Concepts and Approach
Personal eventing via pubsub (“PEP”) is based on the following principles:
1. Every account a pubsub service.
2. One publisher per node.
3. Use presence.
4. Filter notifications based on expressed interest.
5. Smart defaults.
These principles are described more fully below.
2.1 Every Account a Pubsub Service
When a user creates an account (or has an account provisioned) at a Jabber/XMPP server that supports PEP, the server associates a virtual pubsub service with the account. This greatly simplifies the task of discovering the account owner’s personal pubsub nodes, since the root pubsub node simply is the account owner’s bare JID (<[email protected]> or <domain.tld>). This assumption also simplifies publishing and subscribing.
2.2 One Publisher Per Node
There is no need for multiple publishers to a PEP service, since by definition the service generates information associated with only one entity. The owner-publisher for every node is the bare JID of the account owner.
2.3 Use Presence
Although generic publish-subscribe services do not necessarily have access to presence information about subscribers, PEP services are integrated with presence in the following ways:
- Each messaging and presence account simply is a virtual publish-subscribe service.
- The default access model is "presence".
- A contact’s subscription to an account owner’s personal eventing data is automatically created because the contact has an XMPP presence subscription (the "auto-subscribe" feature).
- Services take account of subscriber presence in the generation of notifications. ¹⁰
- A service automatically sends notifications to all of the account owner’s connected resources (subject to notification filtering).
These uses of presence simplify the task of developing compliant clients (cf. XMPP Design Guidelines (XEP-0134) ¹¹).
Note: It is strongly NOT RECOMMENDED to use directed presence with Entity Capabilities data that differs from the data included in broadcast presence for the purpose of establishing implicit PEP subscriptions to another entity, because the directed presence information will be overwritten by any subsequent presence broadcast.
¹⁰This works only if the subscription state is ”both” (see RFC 3921).
2.4 Filtered Notifications
By default, the existence of an XMPP presence subscription is used to establish a PEP subscription to the account owner's personal eventing data. In order to filter which notifications are sent by the PEP service, the contact's client includes extended Entity Capabilities (XEP-0115) information in the presence notifications it sends to the account owner. Because the PEP service supports the "filtered-notifications" feature, it sends only those notifications that match the contact's expressed notification preferences.
2.5 Smart Defaults
Most pubsub configuration options and metadata are not needed for personal eventing. Instead, PEP services offer smart defaults to simplify node creation and management.
3 Publishing Events
An account owner publishes an item to a node by following the protocol specified in XEP-0060:
Listing 6: Account owner publishes item
```xml
<iq from='[email protected]/balcony' type='set' id='pub1'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love’s Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</publish>
</pubsub>
</iq>
```
If the node does not already exist, the PEP service MUST create the node. This "auto-create" feature (defined in XEP-0060) MUST be supported by a PEP service. (Naturally, the account owner's client MAY follow the node creation use case specified in XEP-0060 before attempting to publish an item.)
A PEP service SHOULD also support the "publish-options" feature defined in XEP-0060. If the publication logic dictates that event notifications shall be sent, the account owner's server generates notifications and sends them to all appropriate entities as described in the Receiving Event Notifications section of this document, as well as to any of the account owner’s available resources.
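For orientation only, here is a hedged sketch of what a publish request using publish-options might look like, based on the data-form usage defined in XEP-0060 (the stanza id is invented and the tune payload is abbreviated):

```xml
<iq from='[email protected]/balcony' type='set' id='pub2'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='http://jabber.org/protocol/tune'>
      <item>
        <tune xmlns='http://jabber.org/protocol/tune'/>
      </item>
    </publish>
    <publish-options>
      <x xmlns='jabber:x:data' type='submit'>
        <field var='FORM_TYPE' type='hidden'>
          <value>http://jabber.org/protocol/pubsub#publish-options</value>
        </field>
        <field var='pubsub#access_model'>
          <value>presence</value>
        </field>
      </x>
    </publish-options>
  </pubsub>
</iq>
```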
Note: PEP ties the receipt of PEP notifications to the subscriber’s presence, but does not tie the generation of PEP notifications to the publisher’s presence. If the publisher wishes to stop generating PEP events (or to generate an "empty" event as can be done for some PEP payloads) before ending its presence session, the publisher MUST direct its client to do so and MUST NOT depend on the PEP service to automatically "zero out" its PEP information when the PEP service receives unavailable presence from the publisher.
4 Receiving Event Notifications
An entity shall receive event notifications if:
1. The node has an open access model and the entity has explicitly or implicitly subscribed to the node as explained in XEP-0060.
2. The entity shares presence with the account owner (see Presence Sharing), is authorized to receive events from the node in accordance with the node access model (see XEP-0060), and advertises an interest in the payload type (see Notification Filtering).
3. The entity is the account owner itself, in which case the PEP service shall send notifications to all of the account owner’s available resources (subject to notification filtering).
4.1 Automatic Subscriptions
A PEP service MUST support the “auto-subscribe” feature defined in Section 9.1 of XEP-0060. This implies that when a user has an XMPP presence subscription to the account owner’s presence, the user automatically also has the right to subscribe to any of the account owner’s PEP nodes (if the access model is the default of "presence") and to retrieve items from such PEP nodes.
4.2 Notification Filtering
A PEP service MUST support the "filtered-notifications" feature defined in Section 9.2 of XEP-0060. This implies that an automatic subscriber can specify which event payloads it wants to receive by including appropriate feature bundles in the XEP-0115 information it broadcasts.
4.3 Generating Notifications
4.3.1 Addressing
1. The server MUST set the ‘from’ address on the notification to the bare JID (<[email protected]> or <domain.tld>) of the account owner (in these examples, "[email protected]").
2. Any errors generated by the recipient or the recipient’s server in relation to the notification MUST be directed to the JID of the ‘from’ address on the notification (i.e., the bare JID) so that bounce processing can be handled by the PEP service rather than by the publishing client.
3. When sending notifications to an entity that has a presence subscription to the account owner, the server SHOULD include an Extended Stanza Addressing (XEP-0033) "replyto" extension specifying the publishing resource (in this example, "[email protected]/balcony"); this enables the subscriber’s client to differentiate between information received from each of the account owner’s resources (for example, different resources may be in different places and therefore may need to specify distinct geolocation data). However, a server MUST NOT include the "replyto" address when sending a notification to an entity that does not have a presence subscription to the account owner. (An example of such a notification is sketched after this list.)
4. If the PEP service has presence information about the intended recipient, it SHOULD direct the notification(s) to the full JID(s) of the recipient (<[email protected]/resource> or <domain.tld/resource>); if the PEP service does not have presence information about a subscriber, it MUST address the notification to the subscriber’s bare JID (<[email protected]> or <domain.tld>).
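To illustrate rule 3 above, a hedged sketch of a notification carrying an XEP-0033 "replyto" address pointing at the publishing resource (the stanza id is invented and the payload abbreviated):

```xml
<message from='[email protected]' to='[email protected]/orchard' type='headline' id='tunefoo2'>
  <event xmlns='http://jabber.org/protocol/pubsub#event'>
    <items node='http://jabber.org/protocol/tune'>
      <item>
        <tune xmlns='http://jabber.org/protocol/tune'/>
      </item>
    </items>
  </event>
  <addresses xmlns='http://jabber.org/protocol/address'>
    <address type='replyto' jid='[email protected]/balcony'/>
  </addresses>
</message>
```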
4.3.2 Number of Notifications
1. If an entity subscribed using a full JID (<[email protected]/resource> or <domain.tld/resource>) or a bare domain identifier <domain.tld>, a PEP service MUST send one notification only, addressed to the subscribed JID.
2. If a subscriber subscribed using a bare JID <[email protected]> and a PEP service does not have appropriate presence information about the subscriber, a PEP service MUST send at most one notification, addressed to the bare JID <[email protected]> of the subscriber, and MAY choose not to send any notification. (By "appropriate presence information" is meant an available presence stanza with XEP-0115 data that indicates interest in the relevant data format.)
3. If a subscriber subscribed using a bare JID <[email protected]> and a PEP service has appropriate presence information about the subscriber, the PEP service MUST send one notification to the full JID (<[email protected]/resource> or <domain.tld/resource>) of each of the subscriber’s available resources that have included XEP-0115 information indicating an interest in the data format.
4.3.3 When to Generate Notifications
1. When an account owner publishes an item to a node, a PEP service MUST generate a notification and send it to all appropriate subscribers (where the number of notifications is determined by the foregoing rules).
2. When a PEP service receives initial presence¹⁴ from a subscriber’s resource including XEP-0115 information that indicates an interest in the data format, it MUST generate a notification containing at least the last published item for that node and send it to the newly-available resource; see below under Sending the Last Published Item.
3. As an exception to the foregoing MUST rules, a PEP service MUST NOT send notifications to a subscriber if the user has blocked the subscriber from receiving the kind of stanza used for notifications (typically message stanzas) by means of communications blocking as specified in Privacy Lists (XEP-0016)¹⁵ or Blocking Command (XEP-0191)¹⁶.
4.3.4 Sending the Last Published Item
As mentioned, a PEP service MUST send the last published item to all new subscribers and to all newly-available resources for each subscriber, including the account owner itself. (That is, the default value of the “pubsub#send_last_published_item” node configuration field must be “on_sub_and_presence”; this behavior essentially mimics the functionality of presence as defined in XMPP IM.) When processing a new subscription, the service MAY send not only the last published item but instead all items that are currently associated with the node (i.e., up to the maximum number of items at the node, which might be one if the node is a “singleton node” as described in XEP-0060). If the service has knowledge about the datetime that a subscriber’s newly-available resource last received updated information from the node (e.g.,
¹⁴By "initial presence" is meant a presence stanza with no 'type' attribute that the PEP service receives after the subscriber was previously unavailable; any subsequent presence stanza with no 'type' attribute that the PEP service receives after the initial presence notification but before the subscriber again goes offline MUST NOT trigger sending of a new pubsub notification.
as described in Last Activity in Presence (XEP-0256)¹⁷, then it MAY also send more items than only the last published item to the newly-available resource.
Note: The "on_sub_and_presence" setting relates to the subscriber’s presence, not the publisher’s presence.
Listing 7: Subscriber sends presence from newly-available resource
```xml
<presence from='[email protected]/orchard'>
  <c xmlns='http://jabber.org/protocol/caps'
     hash='sha-1'
     node='http://www.chatopus.com'
     ver='zHyEOgxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
Listing 8: Subscriber’s server sends presence from newly-available resource to publisher’s bare JID (i.e., PEP service)
```xml
<presence from='[email protected]/orchard' to='[email protected]'>
  <c xmlns='http://jabber.org/protocol/caps'
     hash='sha-1'
     node='http://www.chatopus.com'
     ver='zHyEOgxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
Listing 9: PEP service sends last published item to newly-available resource
```xml
<message from='[email protected]'
to='[email protected]/orchard'
type='headline'
id='foo'>
<event xmlns='http://jabber.org/protocol/pubsub#event'>
<items node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love's Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</items>
</event>
<delay xmlns='urn:xmpp:delay' stamp='2003-12-13T23:58:37Z'/>
</message>
```
5 Recommended Defaults
A PEP service MUST:
- Support the node discovery, node creation, node deletion, publish item, subscribe, unsubscribe, and item retrieval use cases specified in XEP-0060.
- Support the "auto-create", "auto-subscribe", and "filtered-notifications" features.
- Support the "owner" and "subscriber" affiliations.
- Support the "presence" access model and set it to the default.
- Support the "open", "roster", and "whitelist" access models.
- Treat the account owner’s bare JID (<[email protected]> or <domain.tld>) as a collection node (i.e., as the root collection node for the account’s virtual pubsub service).
- Default the 'deliver_notifications' configuration option to true (i.e., deliver payloads by default).
- Default the 'send_last_published_item' configuration option to on_sub_and_presence (i.e., send the last published item on subscription and on receipt of presence).
A PEP service MAY support other use cases, affiliations, access models, and features, but such support is OPTIONAL.
6 Determining Support
6.1 Account Owner Service Discovery
Naturally, before an account owner attempts to complete any PEP use cases, its client SHOULD determine whether the account owner’s server supports PEP; to do so, it MUST send a Service Discovery (XEP-0030) information request to its own bare JID:
Listing 10: Account owner queries server regarding protocol support
```xml
<iq from='[email protected]/balcony' to='[email protected]' id='disco1' type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Note: Because subscriptions are implicit in PEP rather than explicit as in generic pubsub, the "on_sub_and_presence" setting effectively means sending on presence.
If the account owner’s server supports PEP and the account is provisioned for PEP, the server MUST return an identity of “pubsub/pep” on behalf of the account (as well as a list of the namespaces and other features it supports, including all supported XEP-0060 features):
Listing 11: Server communicates protocol support
```xml
<iq from='[email protected]'
to='[email protected]/balcony'
id='disco1'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='account' type='registered'/>
<identity category='pubsub' type='pep'/>
<feature var='http://jabber.org/protocol/pubsub#access-presence'/>
<feature var='http://jabber.org/protocol/pubsub#auto-create'/>
<feature var='http://jabber.org/protocol/pubsub#auto-subscribe'/>
<feature var='http://jabber.org/protocol/pubsub#config-node'/>
<feature var='http://jabber.org/protocol/pubsub#create-and-configure'/>
<feature var='http://jabber.org/protocol/pubsub#create-nodes'/>
<feature var='http://jabber.org/protocol/pubsub#filtered-notifications'/>
<feature var='http://jabber.org/protocol/pubsub#persistent-items'/>
<feature var='http://jabber.org/protocol/pubsub#publish'/>
<feature var='http://jabber.org/protocol/pubsub#retrieve-items'/>
<feature var='http://jabber.org/protocol/pubsub#subscribe'/>
</query>
</iq>
```
6.2 Contact Service Discovery
A contact MAY send service discovery requests to the account owner’s bare JID (<[email protected]> or <domain.tld>). If the contact already has a subscription to the account owner’s presence, this is not necessary in order to receive notifications from the account owner via personal eventing. However, a user without a presence subscription needs to do so in order to discover if the account owner is a virtual pubsub service and to discover the account owner’s eventing nodes. The relevant protocol flows are demonstrated in XEP-0060. Note: When returning disco#items results, the account owner’s server MUST check the access model for each of the account owner’s PEP nodes and MUST return as service discovery items only those nodes to which the contact is allowed to subscribe or from which the contact is allowed to retrieve items without first subscribing.
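As a hedged illustration of this discovery step (stanza ids are invented and the node list is merely exemplary), the exchange might look like this:

```xml
<!-- Contact asks the account owner's bare JID for its pubsub nodes -->
<iq from='[email protected]/orchard' to='[email protected]' id='items1' type='get'>
  <query xmlns='http://jabber.org/protocol/disco#items'/>
</iq>

<!-- Server answers with only the nodes the contact is allowed to see -->
<iq from='[email protected]' to='[email protected]/orchard' id='items1' type='result'>
  <query xmlns='http://jabber.org/protocol/disco#items'>
    <item jid='[email protected]' node='http://jabber.org/protocol/tune'/>
    <item jid='[email protected]' node='http://jabber.org/protocol/geoloc'/>
  </query>
</iq>
```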
7 Implementation Notes
7.1 Cancelling Subscriptions
In order to ensure appropriate access to information published at nodes of type "presence" and "roster", a PEP service MUST re-calculate access controls when:
1. A presence subscription state changes (e.g., when a subscription request is approved).
2. A roster item is modified (e.g., when the item is moved to a new roster group).
If the modification results in a loss of access, the service MUST cancel the entity’s subscription. In addition, the service MAY send a message to the (former) subscriber informing it of the cancellation (for information about the format of messages sent to notify subscribers of subscription cancellation, see the "Notification of Subscription Denial or Cancellation" section of XEP-0060).
7.2 One Node Per Namespace
An earlier version of this document specified that there could be only one publish-subscribe node associated with any given payload type (XML namespace) for the account owner (e.g., there could be only one pubsub node for geolocation events, one node for tune events, and one node for mood events, etc.). However, this rule is now considered overly restrictive because some data formats can be used to encapsulate many different kinds of information; the usual example is Atom as defined in RFC 4287, for which many extensions exist. Therefore, this document now does not specify that there is a one-to-one relationship between NodeIDs and payload namespaces. A specification that defines a given payload format for use in PEP MUST specify whether there shall be only one node per namespace, or whether multiple NodeIDs for the same namespace are allowable.
8 Security Considerations
A PEP service MAY enforce additional privacy and security policies when determining whether an entity is allowed to subscribe to a node or retrieve items from a node; however, any such policies shall be considered specific to an implementation or deployment and are out of scope for this document.
9 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA).
10 XMPP Registrar Considerations
10.1 Service Discovery Category/Type
The XMPP Registrar includes a category of "pubsub" in its registry of Service Discovery identities (see <https://xmpp.org/registrar/disco-categories.html>); as a result of this document, the Registrar includes a type of "pep" to that category. The registry submission is as follows:
```
<category>
<name>pubsub</name>
<type>
<name>pep</name>
<desc>
A personal eventing service that supports the publish-subscribe subset defined in XEP-0163.
</desc>
<doc>XEP-0163</doc>
</type>
</category>
```
11 XML Schema
Because PEP simply reuses the protocol specified in XEP-0060, a separate schema is not needed.
12 Acknowledgements
The authors wish to thank the participants in the XMPP Interoperability Testing Event held July 24 and 25, 2006, who provided valuable feedback that resulted in radical simplification of the protocol.
Thanks also to the many members of the [email protected] discussion list who patiently suffered through seemingly endless discussion of the auto-create and publish-and-configure features.
Generating of Business Database Application Elements
Artur Kornatka¹,²*
¹ Institute of Computer Science, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 5, 20-031 Lublin, Poland.
² Department of Computer Science, Nowy Sącz School of Business - National-Louis University, Zielona 27, 33-300 Nowy Sącz, Poland
Abstract – The paper presents the outline of an innovative conception of the functioning of a generator for business database application elements, and also shows the working principles of the author's prototype system named BACG (Business Application Code Generator), which implements this conception.
1 Introduction
The main factor determining the cost of software creation is the cost of programmers' labour [1]. High competition on the software market for business applications compels producers to reduce these costs. A substantial reduction of the expenses pertaining to the implementation of information systems can be achieved through specialized generators, which enable the replacement of programmers and speed up the process of software production. Usage of optimally working generators allows for automation of selected stages of business application development while at the same time keeping high quality standards of the final product. In this case it is very important that all generated elements differ as little as possible from those created by a programmer. Skilful application of such specialized systems by companies producing business applications may be a key factor determining their competitive advantage on the market.
The main aim of the paper is to present an innovative concept of the functioning of a generator for selected code elements of database business applications, and to show the principles of operation of the author's prototype system called BACG (Business Application Code Generator), which is meant to accomplish this concept.
*[email protected]
The BACG system is an innovative generator of business application elements. The main feature which makes it different from other similar tools ([2], [3]) is its focus on generating optimal, professional code that is consistent with established design patterns. The code is created automatically according to the rules and principles obligatory in advanced business application development, and is almost indistinguishable from code that would be written by a professional programmer. Thus, the BACG system replaces a programmer during the development of advanced code, in contrast to existing systems, which concentrate only on generating suitable forms without paying attention to the quality of the built code.
The BACG system was created with the help of the Microsoft Visual Studio 2010. The application code was written in the C# language. The generator is able to work with the databases managed by the Microsoft SQL Server 2008.
All code elements generated by the BACG system are compatible with the MVVM design pattern. The second section of the present work contains a description of this pattern and of the technologies used in the generated parts of the application.
The third section describes the process of generating selected business application elements.
The research results are presented in section 4, where the innovative features of the BACG system are gathered and further development of the system is outlined.
2 Theory – short description of MVVM, WPF, and XAML
WPF (Windows Presentation Foundation) is a presentation system for building Windows client applications with visually stunning user experiences. It defines a modern programming standard for extended user interfaces. Thanks to this technology, programmers can use up-to-date controls which let them employ the full possibilities offered by the Microsoft operating system.
XAML (eXtensible Application Markup Language) is a declarative markup language created by Microsoft for programming user interfaces built with the WPF technology. The syntax of XAML is based on the classical XML language. Subsequent tags of the XAML language describe all elements of the WPF user interface.
More details on WPF and XAML can be found in [4].
MVVM (Model-View-ViewModel) is a modern design pattern used for creating applications with the extended presentation part. The main purpose of this pattern is the separation of three basic layers of the application. According to [5] these are the Model, View, and ViewModel layers.
The Model layer of MVVM describes data logic or business logic of the application. This layer is completely independent of the user interface. It consists of many business objects which implement specific goals of the application.
The View layer of MVVM consists of visual elements of the application. A view may be an application window or the user control (UserControl) which can be placed on any application window.
The ViewModel layer of MVVM makes up a connection between a model and a view. The main task of objects in this layer is retrieving the selected data from a source, next transforming them into the form characteristic of the given view, or passing the properly modified information from the view to the data source.
The MVVM design pattern is ideally suited for creating user interfaces with the help of the WPF technology. More information on MVVM can be found in [6], [7], [8].
3 The process of generating of business database applications
Let us assume that, using the Microsoft Visual Studio 2010 development environment, we want to create a professional business application. The programming language we select is C#. We also assume that the database of this application has been created in Microsoft SQL Server 2008. The database should contain all necessary tables and the relationships between them. After all parameters necessary for setting up a connection with the database have been provided, we can apply the BACG system to generate the elements of the business application being created. After launching, the BACG system asks for the name of a directory where all generated elements will be stored; next, it retrieves information about the structure of the selected tables from the database server and displays it on the screen.
The process of generating database business application elements by the BACG system can be divided into five main stages. It starts with generating the elementary classes and fundamental stored procedures. Next, the system creates subsequent elements compatible with the MVVM design pattern described above. Thus BACG generates the Model layer, that is, the classes responsible for contact with the database. The next stage is the creation of the View layer, that is, advanced views realizing specific scenarios. In this stage the components of the ViewModel layer are also generated, which mediate between the Model and View layers.
After this cursory description of operating of the generator, we can move on to the detailed description of the aforementioned parts.
3.1 Elementary classes generation
In the first stage of generating database business application elements, the BACG system creates a collection of elementary classes which facilitate the basic operations.
The first element to be generated is the AccessToDataBase class. It contains a private field named connectionString which keeps all parameters necessary for setting up the connection with SQL Server and the database created on this server. A constructor of this class initializes this field. The key element here is the method named CreateConnection(), which creates an open connection with the database (on the server) and returns an object of the SqlConnection type.
In this stage a class named ObjectQuery is also created. It contains a set of static methods responsible for calling SQL queries and various stored procedures which are located on the database server. An example of such method can be a function named RunSingleValueProcedure(String procedureName) which runs the procedure (passed to it by its name) returning single value.
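The paper does not print the generated source, so the following is only a minimal C# sketch of what the two classes described above might look like. The class and method names come from the text; everything else, including the use of System.Data.SqlClient and the extra AccessToDataBase parameter of RunSingleValueProcedure (added here so the sketch is self-contained), is an assumption.

```csharp
using System.Data;
using System.Data.SqlClient;   // assumed data-access API for SQL Server 2008

public class AccessToDataBase
{
    // Keeps all parameters needed to reach SQL Server and the application database.
    private readonly string connectionString;

    public AccessToDataBase(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Creates, opens, and returns a connection to the database on the server.
    public SqlConnection CreateConnection()
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}

public static class ObjectQuery
{
    // Runs a stored procedure that returns a single value (sketch only).
    public static object RunSingleValueProcedure(AccessToDataBase db, string procedureName)
    {
        using (SqlConnection connection = db.CreateConnection())
        using (var command = new SqlCommand(procedureName, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            return command.ExecuteScalar();
        }
    }
}
```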
The next class which is generated in this stage is named DelegateCommand. It enables calling the indicated function specified in the class layer ViewModel by the element defined in the layer View. All of this is accomplished through the Command mechanism, which is supported by the WPF technology.
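A typical MVVM DelegateCommand looks roughly like the sketch below; this is a common community pattern shown for illustration, not the paper's actual generated code.

```csharp
using System;
using System.Windows.Input;

// Wraps delegates so that a control's Command property on a view (WPF ICommand)
// can invoke a method defined in a ViewModel class.
public class DelegateCommand : ICommand
{
    private readonly Action<object> execute;
    private readonly Predicate<object> canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return canExecute == null || canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        execute(parameter);
    }
}
```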
This stage is finalized by the creating of the ComboBoxKeyAndValue class which is necessary for the correct functioning of the controls ComboBox on the View layer views.
3.2 Stored procedures generating
In the subsequent stage of the generator run we can choose the tables which – in the future application – will be subjected to such operations as: adding, deleting, retrieving all or selected records. For each of these tables BACG generates a suitable SQL query code which performs the mentioned operations. Then on the basis of each such SQL query, the SQL code creating the stored procedure on the server is formed. BACG sends and executes this code on the SQL Server. As a consequence of this process, a set of stored procedures on the database server comes into being which enables addition, deletion, retrieval of all or selected records. Of course, these procedures are generated for all tables indicated by the user of the system.
3.3 Generating of the classes responsible for the database operations
After the elementary classes and stored procedures have been created, the generator proceeds to the creation of layer Model elements according to the MVVM design pattern.
3.3.1 “Type R” classes
The entity class – which will be called "type R" – is created for each table from the collection selected in section 3.2. During the construction of these classes the popular C# mechanism of properties is used. Hence, for each table field a property with the same name and associated type is created in the corresponding "type R" class.
The creation of every property is accompanied by generating two methods, get and set, which allow reading and setting the value of the corresponding private field of the class. The set method contains a mechanism for checking whether a new value is the same as the existing one; in that case the assignment is not carried out.
During the "type R" class generation, special attention should be paid to table fields which are foreign keys (connected with some record from a related table). In this case, apart from the standard field and property, an additional field (and the corresponding property) is created in the "type R" class, and its type is determined by the "type R" class of the related table. In this way, apart from access to the foreign key value, we obtain direct access to the related object. Calling the get method for such a field (of the related object) involves running on the database server a suitable stored procedure which takes data from the related record. The data are assigned to the object and the field becomes a reference to this object.
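A hedged C# sketch of a generated "type R" class for a hypothetical Order table with a foreign key to a Customer table might look as follows; all table, class, and helper names (OrderR, CustomerR, CustomerC.GetById) are invented for illustration.

```csharp
// Minimal stub for the related "type R" class.
public class CustomerR { public int Id; public string Name; }

// Stub standing in for the generated stored-procedure call that loads a related record.
public static class CustomerC
{
    public static CustomerR GetById(int id) { return new CustomerR { Id = id }; }
}

// "Type R" entity class generated for a hypothetical Order table.
public class OrderR
{
    private int id;
    private string description;

    // Foreign key value plus a lazily loaded reference to the related object.
    private int customerId;
    private CustomerR customer;

    public int Id
    {
        get { return id; }
        set { if (id != value) id = value; }   // assign only when the value changes
    }

    public string Description
    {
        get { return description; }
        set { if (description != value) description = value; }
    }

    public int CustomerId
    {
        get { return customerId; }
        set { if (customerId != value) { customerId = value; customer = null; } }
    }

    // Reading the related object runs a (stubbed) stored procedure fetching the related record.
    public CustomerR Customer
    {
        get
        {
            if (customer == null)
            {
                customer = CustomerC.GetById(customerId);
            }
            return customer;
        }
    }
}
```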
3.3.2 “Type C” classes
When the "type R" class generation is completed, the BACG system starts generating classes which are responsible for the database operations. Thus, for each table selected in section 3.2 a class is created which enables us to add, delete, update, or read records from this table. We will call such classes "type C".
Here are some exemplary functions contained in the "type C" class (a minimal sketch follows the list):
- the public method Add(…), to which an object of the "type R" class is passed; by calling a proper stored procedure on the database server it adds to the table a new record related to this object
- the public method Delete(…), which calls a proper stored procedure to delete a specified database record
- the public method GetAll(…), which returns a collection of "type R" class objects; this method calls a stored procedure retrieving all records from the selected table, and next it creates a proper object of the "type R" class from each record and adds it to the objects collection.
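A hedged C# sketch of such a "type C" class for the same hypothetical Order table, reusing the AccessToDataBase helper sketched earlier; the stored-procedure and parameter names are invented.

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// "Type C" class: wraps the generated stored procedures for the hypothetical Order table.
public class OrderC
{
    private readonly AccessToDataBase database;

    public OrderC(AccessToDataBase database)
    {
        this.database = database;
    }

    // Adds a record corresponding to the given "type R" object.
    public void Add(OrderR order)
    {
        using (SqlConnection connection = database.CreateConnection())
        using (var command = new SqlCommand("AddOrder", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Description", order.Description);
            command.Parameters.AddWithValue("@CustomerId", order.CustomerId);
            command.ExecuteNonQuery();
        }
    }

    // Deletes the record with the given key.
    public void Delete(int id)
    {
        using (SqlConnection connection = database.CreateConnection())
        using (var command = new SqlCommand("DeleteOrder", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Id", id);
            command.ExecuteNonQuery();
        }
    }

    // Retrieves all records and materializes them as "type R" objects.
    public List<OrderR> GetAll()
    {
        var result = new List<OrderR>();
        using (SqlConnection connection = database.CreateConnection())
        using (var command = new SqlCommand("GetAllOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new OrderR
                    {
                        Id = reader.GetInt32(0),
                        Description = reader.GetString(1),
                        CustomerId = reader.GetInt32(2)
                    });
                }
            }
        }
        return result;
    }
}
```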
3.4 Views generation
The next stage of the database business application elements generation by the BACG system is the creation of views, that is, the elements of the View layer according to the MVVM design pattern. Views are created with the help of the WPF technology and the XAML language.
A user of the BACG system picks a table for which he would like to create the views, and next he invokes a special window of the generator. In this window BACG displays all fields of the selected table together with their properties. Apart from that, it presents tables related to the selected one. After the selection of a foreign key, all fields of the related table are shown. Moreover, the window allows us to choose the scenario of the view which is being created.
3.4.1 The first scenario – ObjectView
The BACG generator is capable of creating a view according to the first scenario. This scenario allows one to save, edit, and update a selected record in a table stored in the database – that is, a "type R" class object. For example, if we want to create a view allowing a new record to be saved in a table, first we point to the fields which are to be filled. According to the WPF rules the BACG system, with the help of the XAML language, creates a special UserControl and for each selected field adds a label describing this field (usually a component of the Label type), and next adds the editable field (e.g. TextBox or DatePicker). The UserControl is associated with a suitable object of the ViewModel class, and each editable field is assigned to the suitable property defined in the class of this object (see section 3.5). Field association is performed through the Binding mechanism, which is supported by the WPF technology. All labels and fields are placed in a special Grid component which controls the placement of these elements. However, the fields corresponding to the table's foreign keys are treated in a special way. For example, for such fields a special ComboBox can be created which displays the related records from the related table, so instead of filling in the value of the foreign key one can choose from the expandable list.
The UserControl also contains a button which invokes a save command defined in the corresponding ViewModel class. This is realized through the Command property of the Button control.
The created UserControl can be placed in any window of the business application.
3.5 Class ViewModel generation
Together with the subsequent views, the BACG system creates the classes of the ViewModel layer. The main task of these classes is to provide data in a suitable form to the views and to pass the information modified in a view back to the data source. They act as intermediaries between the View layer elements (created with WPF) and the Model layer classes. Just as the views are generated according to two scenarios, the ViewModel classes can be divided into two categories: the first category includes the classes created for the views produced by the first scenario, and the second one includes those created for the views produced by the second scenario.
3.5.1 The ViewModel class generated for the views according to the first scenario
All ViewModel classes generated for the views of the first scenario contain two fields: one is an object of the suitable “type R” class, and the other is an object of the related “type C” class. The “type R” and “type C” classes are those that have been created for the tables on which the view is to act. Additionally, these ViewModel classes contain a constructor which initializes the two fields from its parameters. The key element of these classes is the Properties region, which contains a collection of properties corresponding to the editable controls of the view. These controls are bound to the properties through the Binding mechanism. The get method of a property obtains the suitable field of the above-described “type R” object, while the set method assigns a value to it. The generator takes care of the consistency of types between the “type R” class properties and the ViewModel class properties.
Special properties are generated for the ComboBox elements of the views; they return a list of objects of the ComboBoxKeyAndValue type. In their get methods these properties call a specially generated function GetAllOnlySelectedFields_FieldsName() on a “type C” class object, which returns the required collection. Thus, the generator has to complete the code of the suitable “type C” classes with new functions and, in consequence, to create new stored procedures which provide the selected data for the returned collection of ComboBoxKeyAndValue objects.
The last elements of these classes are special properties of the ICommand type, whose get methods create commands of the DelegateCommand type wrapping suitable methods of the “type C” class object. As an example we can mention the property public ICommand SaveCommand, whose get method creates a new DelegateCommand object that calls the Add() method of the “type C” class object. A simplified sketch of such a ViewModel class is given below.
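The following is a minimal, hypothetical C# sketch of a first-scenario ViewModel for the illustrative Customer/CustomerRepository classes used earlier; the DelegateCommand shown here is a deliberately trimmed ICommand wrapper included only to keep the sketch self-contained, not the generator's actual implementation.

```csharp
// Hypothetical sketch of a first-scenario ViewModel generated for the Customers table.
using System;
using System.Windows.Input;

// Minimal ICommand wrapper standing in for a full-featured DelegateCommand.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    public DelegateCommand(Action execute) => _execute = execute;
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => _execute();
    public event EventHandler CanExecuteChanged { add { } remove { } }
}

public class CustomerViewModel
{
    private readonly Customer _customer;              // the "type R" object
    private readonly CustomerRepository _repository;  // the "type C" object

    public CustomerViewModel(Customer customer, CustomerRepository repository)
    {
        _customer = customer;
        _repository = repository;
    }

    // Property bound to an editable control of the view; it wraps the
    // corresponding field of the "type R" object.
    public string Name
    {
        get { return _customer.Name; }
        set { _customer.Name = value; }
    }

    // Command bound to the Save button; its getter wraps the Add() method
    // of the "type C" object in a DelegateCommand.
    public ICommand SaveCommand
    {
        get { return new DelegateCommand(() => _repository.Add(_customer)); }
    }
}
```

In a production ViewModel the properties would also raise INotifyPropertyChanged notifications so that programmatic changes propagate back to the bound controls; this is omitted here for brevity.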
3.5.2 The ViewModel class generated for the views according to the second scenario
The ViewModel classes generated for the views of the second scenario contain one field: an object of the related “type C” class. They also contain a constructor which initializes this field from its parameter. The key part of these classes is the Properties region, where the property named Show is located; its get method returns a list of objects of the specially defined ObjectNameForAllView type by calling a new function GetForAllView() on the “type C” class object. The ObjectNameForAllView type is a separately generated class composed only of properties created for the fields that the user would like to see in the view being created. In this situation it is necessary to complete the suitable “type C” class with the GetForAllView() method, whose purpose is to create a collection of ObjectNameForAllView objects and fill them with data. Of course, in this case the retrieval of suitable data from the database is again bound to calls of the proper stored procedures kept in the SQL Server database management system.
In the view, the Show property is bound to a DataGrid component, and the subsequent columns of the grid are bound to the subsequent properties of the ObjectNameForAllView type. A simplified sketch of such a ViewModel class is shown below.
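For comparison with the first scenario, here is a minimal, hypothetical C# sketch of a second-scenario ViewModel and its row type, again reusing the illustrative CustomerRepository; the generated GetForAllView() method would normally call a dedicated stored procedure, so the in-memory projection below is only a stand-in.

```csharp
// Hypothetical sketch of a second-scenario ViewModel exposing data for a DataGrid.
using System.Collections.Generic;
using System.Linq;

// Row type playing the role of the generated ObjectNameForAllView class:
// only the fields the user wants to see in the grid.
public class CustomerForAllView
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class CustomerListViewModel
{
    private readonly CustomerRepository _repository;  // the "type C" object

    public CustomerListViewModel(CustomerRepository repository) => _repository = repository;

    // Bound to the ItemsSource of a DataGrid; each column of the grid binds
    // to a property of the row type above.
    public IList<CustomerForAllView> Show
    {
        get
        {
            // The generated GetForAllView() would call its own stored procedure;
            // here the projection is done in memory for brevity.
            return _repository.GetAll()
                .Select(c => new CustomerForAllView { CustomerId = c.CustomerId, Name = c.Name })
                .ToList();
        }
    }
}
```

Binding the DataGrid's ItemsSource to Show in XAML (e.g. with AutoGenerateColumns enabled) then yields one column per property of the row type.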
4 Summary
The BACG generator is an innovative system which automatically creates selected code elements of database business applications. Its innovative character manifests itself in the following features:
- the system generates optimized, professional source code using up-to-date programming technologies and binds it with the business presentation layer (in contrast to other generators, which focus mainly on form creation without paying attention to code quality),
- the generated elements of the business application are compliant with the current Model View ViewModel (MVVM) design pattern,
- owing to the application of a readable design pattern, the programmer retains full control over the automatically generated code, which can be easily modified and extended when the need arises,
- each layer of the generated application is created independently, hence it is easy to modify one layer without interfering with the others (e.g. when the user interface has to be changed),
- all generated views are created with painstaking attention to their business functionality and to their future use in a more elaborate version of the generator,
- the BACG system uses up-to-date technologies for building advanced Windows forms,
- the BACG system guarantees efficient database access code (the Model layer) by binding it strictly to the stored procedures located on the database server.
In the future development of the BACG system the author plans:
- to extend the view generation module so that it enables the creation of more professional forms meeting sophisticated business demands,
- to add a mechanism for extracting interfaces and abstract classes during code generation for specialized business functions,
- to create a mechanism enabling cooperation with an object query language operating on the objects of the Model layer, tightly bound with a generator of the suitable stored procedures kept on the database server.
References
Bridging Missions and Architecture in Software-intensive Systems-of-Systems
Eduardo Silva, Everton Cavalcante, Thais Batista
Department of Informatics and Applied Mathematics
Federal University of Rio Grande do Norte
Natal, Brazil
[email protected], {everton, thais}@dimap.ufrn.br
Flavio Oquendo
IRISA – UMR CNRS
Université Bretagne Sud
Vannes, France
[email protected]
Abstract—Missions represent a key concern in the development of Systems-of-Systems (SoS) since they are related to both capabilities of constituent systems and interactions among these systems that contribute to the accomplishment of global goals of the SoS. In a mission-oriented approach to design software-intensive SoS, the activity towards the concretization of the mission model is its refinement to an architectural model. This paper addresses this synergetic relationship between mission and architectural models. As the main contribution, we introduce a model-based refinement process supported by model-to-model transformations that uses mission models represented in mKAOS, a language to model missions, to automatically generate architecture descriptions in SosADL, a formal language to describe SoS software architectures.
Keywords—systems-of-systems; missions; software architecture; architecture description language; model refinement.
I. INTRODUCTION
A System-of-Systems (SoS) results from local interactions of multiple constituent systems that cooperate to form a larger, more complex system for accomplishing a given mission [8]. Each of these constituent systems has individual missions and can contribute to the accomplishment of the global mission of the SoS. The collaboration among such constituent systems enables an SoS to offer new capabilities that cannot be provided by any of these systems working as individual entities, the so-called emergent behavior. Besides emergent behavior, there are other intrinsic characteristics that make an SoS distinct from other distributed complex and large-scale systems: (i) the operational and managerial independence of constituent systems, which provide their own functionalities even when they do not cooperate within the scope of an SoS and can be managed independently from it, and (ii) the evolutionary development of the SoS, which may evolve over time to respond to changes in its operational environment, in the constituent systems, or in its own mission. Altogether, these characteristics pose a set of challenges which make traditional system engineering processes no longer suitable for developing these systems [2].
An important concern in the design of SoSs is the systematic modeling of both global and individual missions, as well as all relevant mission-related information. Missions play a key role in the SoS context since they define required capabilities of constituent systems and the interactions among these systems that lead to emergent behaviors towards the accomplishment of the global goals of the SoS. Therefore, mission models are the starting point for designing an SoS and are used as a basis of the whole evolutionary development process.
In a mission-oriented approach for designing software-intensive SoS, the next step towards the concretization of the mission model is its refinement to an architectural model, i.e., a model expressing the SoS software architecture.
The SoS software architecture is recognized as the key factor for achieving missions [7]. Therefore, mission models can be used as a basis for the further elaboration of architectural models by SoS software architects. Such a refinement allows specifying the SoS software architecture in compliance with the mission model, so that it is possible to establish traceability links between missions and architectural elements.
This paper concerns the synergetic relationship between mission and architectural models. Our proposal relies on mission models described in mKAOS, a pioneering language introduced in our previous work [16][17] aimed to support the specification of missions and the definition of relationships between such missions and other concerns of the SoS.
The main goal of mKAOS is to allow for a detailed modeling of missions in the SoS context and to enable stakeholders to identify or define specific elements for the SoS, e.g., constituent systems, required capabilities, and/or desired emergent behaviors.
On the other hand, the architectural representation is addressed by using SosADL [10][12], a formal, theoretically well-founded architecture description language (ADL) targeting the description of SoS software architectures under both structural and behavioral viewpoints while intending to cope with the features of software-intensive SoS.
More precisely, SosADL provides novel architectural concepts and language constructs that concretely embody these concepts while coping with the defining characteristics of SoS. It is formally grounded on the π-Calculus for SoS [11].
As main contribution, this paper proposes a model-based refinement process to automatically generate architecture descriptions represented in SosADL from mKAOS mission models. The generated architecture descriptions encompass the whole structural view capable of achieving the mission.
The remainder of this paper is structured as follows. Section II provides an overview of both mKAOS and SosADL languages. Section III introduces our proposal for refining mission models towards architecture descriptions. Section IV briefly discusses related work. Section V contains some concluding remarks.
II. BACKGROUND
A. Missions in SoS
mKAOS is a specialization of KAOS [6], a requirements specification language. The basic elements defined in KAOS are goals, which are related to requirements, conflicts, obstacles, and expectations in order to ensure that a requirement has at least one operational capability implementing it. mKAOS extends KAOS with constructs to represent mission-related concepts for supporting SoS mission modeling. As in KAOS, mKAOS separates models according to their concerns as well as allows overlaps to have cross-view perspectives. Besides specializing concepts defined in KAOS, mKAOS creates specific constructs suited to the SoS context, in particular emergent behaviors and missions.
Missions of SoS can be specified in mKAOS through six different models, each one with its own syntax and semantics.
The main mKAOS model is the Mission Model, which describes missions and expectations. The Responsibility Model concerns the description of both constituent systems, environment agents, and the assignment of missions/expectations to them. The Object Model specifies objects used by the SoS for data exchange and physical structures in terms of: (i) entities, which represent a data abstraction or physical entity; (ii) events that can be raised or handled; (iii) domain hypothesis, defined as constraints; and (iv) domain invariants, defined as constraints that must be held during SoS execution and further evolutions.
mKAOS also provides two Capability Models: the Operational Capability Model defines a set of operations that each constituent system is able to execute, i.e., their operational capabilities, whereas the Communicational Capability Model specifies the possible interactions and cooperation among constituent systems, the so-called communicational capabilities.
Finally, the Emergent Behavior Model describes emergent behaviors, specific features that are produced from the interaction between at least two constituent systems. Table I summarizes the elements of the mKAOS models. More details about each of these models and their elements can be found in [17].
The Mission Model follows a tree structure in which leaf nodes represent individual missions and non-leaf nodes represent global missions, respectively assigned to constituent systems and to the SoS as a whole.
<table>
<thead>
<tr>
<th>mKAOS Model</th>
<th>Model elements</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mission Model</td>
<td>Mission, expectation</td>
</tr>
<tr>
<td>Responsibility Model</td>
<td>Constituent system, environment agent</td>
</tr>
<tr>
<td>Object Model</td>
<td>Entity, event, domain hypothesis, domain invariant</td>
</tr>
<tr>
<td>Operational Capability Model</td>
<td>Operational capability</td>
</tr>
<tr>
<td>Communication Capability Model</td>
<td>Communicational capability</td>
</tr>
<tr>
<td>Emergent Behavior Model</td>
<td>Emergent behavior</td>
</tr>
</tbody>
</table>
In this model, expectations represent assertions on the SoS environment that might influence the achievement of its missions.
In the Mission Model, Refinement Links establish a refinement relationship among missions, so that a given mission can be refined into other sub-missions and/or expectations. The assignment of missions to constituent systems is defined in a corresponding mKAOS Responsibility Model, in which each constituent system must have at least one assigned individual mission and each individual mission must be assigned to exactly one constituent system. In turn, expectations must be assigned to environment agents, which are external agents that somehow interfere on the SoS. Fine-grained mission-related specification can be expressed in mKAOS using the other provided models. mKAOS also allows defining relationships among missions.
B. Formally Describing SoS Software Architectures
When describing SoS software architectures, it is fundamental to consider: (i) both structural and behavioral definitions of the constituent systems and how they together form an SoS coalition; (ii) interactions among constituent systems; (iii) evolutions due to the dynamic scenarios in which an SoS operates; and (iv) properties, constraints, and quality attributes [1][13]. To cope with these concerns, SosADL [10][12] was defined as a formal language to comprehensively describe SoS software architectures while allowing for their automated, rigorous analysis. The formal foundations of SosADL rely on an extension of the π-calculus process algebra [11], thereby being a universal model of computation enhanced with SoS concerns.
One of the main characteristics of SoS software architectures is that the concrete constituent systems that are to be part of the SoS are partially known or even unknown at design time. For this reason, the constituent systems need to be bound dynamically. Thereby, an SoS software architecture is evolutionarily concretized only at runtime. To cope with this characteristic, SosADL allows describing SoS software architectures in an intentional, abstract way. This means that the architecture description expresses only the types of constituent systems required to accomplish the missions of the SoS as a whole (at design-time), but the concrete constituent systems themselves will be identified and evolutionarily incorporated into the SoS at runtime. Furthermore, the communication among constituent systems is said to be mediated in the sense that it is not solely restricted to communication (as in traditional systems), but it also allows for coordination, cooperation and collaboration.
SosADL uses a set of eleven elements, summarized in Table II: (i) systems; (ii) gates; (iii) connections; (iv) assumptions; (v) guarantees; (vi) properties; (vii) behavior; (viii) mediators; (ix) duties; (x) coalitions; and (xi) bindings. While coping with architectural concepts found in ADLs, the concepts defined in SosADL are aligned with the terminology of SoSs, fitting its domain semantics. More details about these elements can be found in [10] for the structural viewpoint and [12] for the behavioral viewpoint.
The system concept is an abstract representation of a constituent system that may be part of the SoS, but that is not under its control due to its operational and managerial independences. A system encompasses gates (specified by properties in terms of assumptions and guarantees), and an internal behavior describing its operational capability to achieve its individual mission. A gate groups interaction points of a constituent system with its environment, encompassing at least one connection. A connection is a typed communication channel through which the constituent system sends or receives data. Assumptions express properties expected by a gate of a constituent system to be satisfied by the environment, e.g., rules related to provided/required data in gates. Guarantees describe properties that must be enforced by the constituent system, thereby being a way of representing specific properties at the architectural level. A behavior represents the operational capabilities of the system and how it interacts with the environment by sending/receiving data.
In SosADL, a mediator is an architectural element under control of the SoS that mediates the communication, coordination, cooperation and collaboration among constituent systems, thus also promoting interoperability among them. Mediators differ from system-to-system connectors as they are used not only as mere communication channels, but also as elements responsible for the coordination, cooperation and collaboration among the interacting constituent systems. Therefore, mediators play an essential role in terms of making possible to an SoS to achieve its missions through emergent behaviors arising from such interactions. Similarly to systems, mediators can be also described abstractly, so that concrete mediators can be synthesized and deployed at runtime in order to cope with the highly dynamic environment of an SoS. A mediator definition encompasses a set of duties, which express obligations to be fulfilled by gates of constituent systems that may interact with the mediator. Moreover, a mediator allows defining properties, assumptions, guarantees, and an internal behavior.
A coalition represents the configuration of the SoS itself by intentionally defining how constituent systems and mediators can be temporarily arranged to compose the SoS. As constituent systems are not under the SoS control, it is necessary to specify how the mediators can be created and which systems will interact with them to define a concrete SoS. For this purpose, coalitions are composed of a set of possible systems, mediators, and bindings that will be realized at runtime. A binding is the construct responsible for establishing dynamic connections between systems through mediators, in particular binding gates to duties. The dynamic nature of bindings is an important aspect for SoS since it is often not possible to foresee which concrete constituent systems will be connected through the mediators at runtime.
It is important to highlight that SosADL focuses on the architecture of the SoS as a whole. Therefore, the individual architectures of the constituent systems (even though desirable) are not mandatory in an SosADL description. This reflects the fact that the internal architectures of the constituent systems are often unavailable, a typical case in the SoS domain. Nonetheless, the architecture of the SoS strongly depends on the interfaces of each constituent system, defined in terms of gates.
III. REFINING MISSION MODELS TO ARCHITECTURE DESCRIPTIONS
mKAOS was designed as a descriptive language for specifying missions of SoSs, focusing on what the system must be able to achieve instead of how it will achieve it. The descriptive elements of mKAOS refine mission definitions down to the system level, assigning responsibilities and obligations to each constituent system. Beyond this point, no further description of how the system will achieve the existing missions is possible in mKAOS. Therefore, the SoS architectural description provides a new level of concretion by refining mKAOS models to an operational, architectural level. Although the proposed refinement relies on a mapping from missions to architecture, neither the mission model nor the architectural description contains all the information of the other; some data are not reflected in the architectural description during the refinement process (e.g., missions), and hence both models must be co-maintained, each for its own purposes.
Considering that mKAOS and SosADL provide different levels of abstractions for the SoS, the mapping process is based on the common concepts between both languages. These common concepts are the interfaces of the constituent systems and of the SoS. In mKAOS, an interface is a composition of the interfaces of the operational capabilities and communicational capabilities. On the other hand, SosADL defines interfaces as essential, explicit elements for defining the architecture of an SoS. It represents the interfaces as gates and associated connections, which are used for both structural and behavioral specifications.
The proposed mapping process is based on model-to-model (M2M) transformations [15][19], which consist in
<table>
<thead>
<tr>
<th>Element</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>System</td>
<td>Abstract representation of a constituent system</td>
</tr>
<tr>
<td>Mediator</td>
<td>Abstract representation of the communication and coordination between two systems</td>
</tr>
<tr>
<td>Gate</td>
<td>Interaction point of a system with its environment</td>
</tr>
<tr>
<td>Duty</td>
<td>Obligation to be fulfilled by gates of systems that may interact with a mediator</td>
</tr>
<tr>
<td>Connection</td>
<td>Communication channel used by a system or mediator to send/receive data</td>
</tr>
<tr>
<td>Assumption</td>
<td>Property expected by a gate/duty to be satisfied by the environment</td>
</tr>
<tr>
<td>Guarantee</td>
<td>Property to be enforced by a system/mediator</td>
</tr>
<tr>
<td>Behavior</td>
<td>Functional capabilities of a system/mediator and how it interacts with the environment</td>
</tr>
<tr>
<td>Coalition</td>
<td>Representation of an SoS as an arrangement of systems and mediators forming its architecture</td>
</tr>
<tr>
<td>Binding</td>
<td>Dynamic attachment between a system and a mediator</td>
</tr>
</tbody>
</table>
TABLE II. SUMMARY OF THE MAIN ELEMENTS OF THE SOSADL LANGUAGE
automatically refining models to lower abstraction levels in order to reflect solutions defined at higher levels. As the implementation of the mapping steps is intended to be automatic, all of them shall be programmatically executed using an M2M transformation. This ensures the traceability of the missions and simplifies the architecture design process: the architect is concerned only with describing behavior and detailing further elements not related to the mission model. It is important to highlight that the transformation does not encompass all mKAOS elements nor all SosADL elements, but it can still be realized in both directions. The mission and architectural models are complementary to each other and must be maintained together. In the proposed mapping process, we have chosen a constructive approach in which the refinement produces a single architecture capable of achieving the required missions and of making the desired behaviors emerge. An alternative would be to build a set of possible architectures and verify the conformance of each one with the mission model, but such an approach is computationally expensive.
The mapping process is divided into five steps, as depicted in Fig. 1.
Step 1 consists in identifying the data types used in the Object Model (entities and events) and defining them in SosADL. Step 2 involves identifying constituent systems from the Responsibility Model and defining them as possible constituent systems in SosADL. In Step 3, for each system, it is necessary to select the associated operational capabilities specified in the Operational Capability Model and define a gate whose connections are defined for each input, output, and event. Input events will result in input connections while produced events will be mapped to output connections. In Step 4, each communicational capability defined in the Communicational Capability Model is concretized by a mediator whose duties are defined based on the input and outputs for the capability, similarly to the gate production. Inputs or outputs from communicational capabilities not used by any operational capability are described as inputs/outputs for the SoS as a whole. Finally, Step 5 consists in connecting constituent systems and mediators using the data association defined by input and output links in mKAOS, thereby establishing bindings in SosADL for each of these links. This last step involves the Object Model and both Operational and Communicational Capability Models, as well as the links between the objects and capabilities.
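To make the data flow of these five steps concrete, the sketch below walks a simplified in-memory mission model and produces a simplified architectural model in C#. It is only a schematic illustration: the type and property names are invented for this example and do not correspond to the actual mKAOS/SosADL metamodels or to the authors' M2M transformation rules.

```csharp
// Schematic sketch of the five mapping steps over simplified in-memory models.
// All types below are illustrative stand-ins, not the real mKAOS/SosADL metamodels.
using System.Collections.Generic;
using System.Linq;

// --- Simplified mission-model side (mKAOS-like) ---
public record Entity(string Name);                                   // Object Model element
public record ConstituentSystem(string Name, List<string> OperationalCapabilities);
public record CommunicationalCapability(string Name, List<string> Inputs, List<string> Outputs);

// --- Simplified architecture side (SosADL-like) ---
public record DataType(string Name);
public record Gate(string Name, List<string> Connections);
public record SystemDecl(string Name, List<Gate> Gates);             // "SystemDecl" avoids clashing with the System namespace
public record Mediator(string Name, List<string> Duties);
public record Binding(string From, string To);
public record Architecture(List<DataType> DataTypes, List<SystemDecl> Systems,
                           List<Mediator> Mediators, List<Binding> Bindings);

public static class MissionToArchitecture
{
    public static Architecture Refine(
        List<Entity> objectModel,                               // Step 1 input
        List<ConstituentSystem> responsibilityModel,            // Steps 2-3 input
        List<CommunicationalCapability> communicationalModel,   // Step 4 input
        List<(string gate, string duty)> inputOutputLinks)      // Step 5 input
    {
        // Step 1: entities/events of the Object Model become data types.
        var dataTypes = objectModel.Select(e => new DataType(e.Name)).ToList();

        // Steps 2-3: each constituent system becomes a (possible) system and each of
        // its operational capabilities becomes a gate (connections simplified here to
        // one input and one output per capability).
        var systems = responsibilityModel
            .Select(cs => new SystemDecl(
                cs.Name,
                cs.OperationalCapabilities
                  .Select(oc => new Gate(oc, new List<string> { oc + "_in", oc + "_out" }))
                  .ToList()))
            .ToList();

        // Step 4: each communicational capability becomes a mediator whose duties are
        // derived from the inputs and outputs of the capability.
        var mediators = communicationalModel
            .Select(cc => new Mediator(cc.Name, cc.Inputs.Concat(cc.Outputs).ToList()))
            .ToList();

        // Step 5: input/output links between capabilities become bindings attaching
        // gates to mediator duties.
        var bindings = inputOutputLinks
            .Select(link => new Binding(link.gate, link.duty))
            .ToList();

        return new Architecture(dataTypes, systems, mediators, bindings);
    }
}
```

The behavioral part of the SosADL description is not produced by this mapping; as noted further below, it is added manually by the architect.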
As an illustrative example, consider an SoS aimed to detect flash floods in a flood-prone area crossed by a river, with maximum confidence [10].
To achieve this global mission, such an SoS can combine information provided by multiple collaborating independent systems such as river monitoring systems and meteorological systems.
Within the SoS, river monitoring systems composed of sensor networks monitor the river water level as an indicator of flood while meteorological systems comprise weather stations and satellites to collect and analyze atmospheric parameters for weather conditions.
As depicted in Fig. 3 (top), these missions and systems responsible for achieving them can be modeled using overlapped Mission and Responsibility Models of mKAOS. A river monitoring system is capable of providing information about the state of the river and a meteorological system is capable of producing weather bulletins. These data can be aggregated to enable the SoS to provide more precise information about the possible risk of flood.
These operational and communicational capabilities can be represented in a Capability Model of mKAOS, as shown in Fig. 3.
Mapping these mKAOS models by means of the proposed process will result in an architectural model in SosADL composed of constituent systems, mediators, their respective gates and duties, and bindings attaching them.
Fig. 3 illustrates what would be the result of refining operational and communicational capabilities defined in a mKAOS Capability Model towards an architectural model in SosADL: (i) the River Monitoring System and the Meteorological System are mapped to systems; (ii) the respective ProvideRiverInformation and ProduceWeatherBulletin operational capabilities are mapped to gates interfacing to systems and thereby to their operational capabilities; (iii) the communicational capability that enables these systems to interact with each other (AggregateData) is mapped to a mediator; and (iv) bindings are created to attach these elements in order to form a coalition representing the SoS as a whole.
Fig. 1. Refinement process from mKAOS to SosADL models
Table III summarizes the correspondences between the mKAOS and SosADL elements implemented by the mapping process. As mKAOS assumes that emergent behaviors arise from the communicational capabilities, and the mapping process implements each communicational capability as a mediator, the desired behaviors are expected to emerge from the architecture. In turn, individual missions are a direct consequence of the operational capabilities of constituent systems and hence they are covered by the mapping of these capabilities to the gates of the constituent systems that compose the architecture.
The generated SoS architecture in SosADL, produced from the mapping process, needs to have its behavior defined since mKAOS does not cover behavioral concerns of SoS. This step is essentially manual because it largely depends on architectural decisions regarding how the constituent systems will implement their capabilities. It is also fundamental for the architect to consider non-functional requirements (NFRs), identify which constituent systems and mediators are involved in each requirement, and define and apply assumptions and guarantees for them, as well as for the SoS architecture itself. These NFRs can be manually derived from mKAOS domain hypothesis and invariants.
TABLE III. CORRESPONDENCES BETWEEN THE ELEMENTS OF THE mKAOS AND SOSADL LANGUAGES
<table>
<thead>
<tr>
<th>mKAOS</th>
<th>SosADL</th>
</tr>
</thead>
<tbody>
<tr>
<td>Constituent system</td>
<td>System</td>
</tr>
<tr>
<td>Communicational capability</td>
<td>Mediator</td>
</tr>
<tr>
<td>Operational capability</td>
<td>Gate (in system)</td>
</tr>
<tr>
<td>Input/output/event</td>
<td>Input/output connection</td>
</tr>
<tr>
<td>Entity</td>
<td>Data type</td>
</tr>
<tr>
<td>Event</td>
<td>Data type</td>
</tr>
</tbody>
</table>
IV. RELATED WORK
This work is pioneering with respect to the refinement of mission models towards generating architectural models in SoS. To the best of our knowledge, the only relevant work that is worth highlighting is COMPASS (Comprehensive Modeling for Advanced Systems of Systems) [5], a manual methodology to produce and evaluate SoS architectural models from requirements. Such a methodology consists in using different tools to produce and analyze different models expressed in the OMG’s Systems Modeling Language (SysML), complemented by a model in the COMPASS Modeling Language (CML), a formal specification language specifically designed to support SoS modeling and analysis. SysML is used to model the constituent systems and the interfaces among them in an SoS, whereas CML is used to enrich these specifications with interaction contracts. Indeed, the authors of CML have acknowledged that it is a low-level formal language and that mapping SysML models to CML produces less intelligible descriptions.
In terms of the refinement process, our proposal differs from the one adopted in COMPASS in several aspects. First, we use the mission concept, which is more suitable in the SoS context [18]. Second, the SysML language is not tailored to SoS, as it lacks some important features required for SoS, such as emergent behaviors. Third, the solution provided by COMPASS is more concerned with concrete architectures for SoS, whereas our approach produces abstract architectures to be further concretized according to the availability of the constituent systems forming the SoS. In contrast to COMPASS, our approach does not require knowing the actual constituent systems a priori (as is often the case for SoSs). The COMPASS methodology also needs additional information about constituent systems, such as stakeholders and specificities regarding communication capabilities.
V. CONCLUSION
In this paper, we have introduced a model-based refinement methodology to produce architectural descriptions from mission models described using mKAOS, a pioneering descriptive language for SoS mission definition. The proposed approach relies on a mapping process that results in architectural descriptions expressed in SosADL, a formal ADL for the specification of SoS. Focusing on capabilities, our refinement process derives a structural description from both the operational capabilities of the constituent systems and the communicational capabilities of the SoS. Similarly to the existing approaches for deriving software architectures from requirements, the proposed mapping process relies on a top-down approach that allows producing SoS software architectures based on a high-level description of the constituent systems. The mapping process also ensures traceability between the mission and architectural models as it is based on a model transformation, thereby enabling architects to precisely identify which pieces of the software architecture are responsible for realizing each mission.
As future work, the model transformation rules from mKAOS to SosADL will continue to be integrated into a toolkit for model-based development of SoSs. We also intend to implement a verification and conformance mechanism to support the co-evolution of SoS missions and SoS architectures.
It is worth mentioning that mKAOS is a semi-formal notation, whose formalization is planned in future work. We will investigate the use of a temporal logic of evolving systems [14] as the basis for this formalization.
Also planned as future work is the application of mKAOS together with SosADL in several industrial-scale projects. They include joint work with DCNS for applying SosADL to architect naval SoSs, with IBM for applying SosADL to architect smart-farms in cooperative settings, and with SEGULA for applying SosADL to architect SoSs in the transport domain.
ACKNOWLEDGMENT
This work was partially supported by the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (PRH-22/ANP/MCTI Program) and the Brazilian National Council for Scientific and Technological Development (CNPq) under grant 308725/2013-1.
REFERENCES
Abstract — This paper presents a new class complexity metric for Object-Oriented (OO) programs which is used to predict the understandability of classes. The proposed complexity metric is evaluated theoretically against Weyuker’s properties to analyze the nature of the metric, and empirically against three small projects developed by Post Graduate (PG)/Under Graduate (UG) teams. Least squares regression analysis is performed to obtain the metric and to find the correlation coefficient of the proposed metric with the Degree of Understandability. The result indicates that the proposed metric is a good predictor of the understandability of classes. The JHawk tool (Java Code Metrics Tool) was used to evaluate the parameter values involved in the proposed metric; for analyzing the results of the projects, Matlab 6.1 and IBM SPSS software were used.
Index Terms — Complexity, Metrics, Object-Oriented, Classes, Understandability, Methods, Instance variables.
1. Introduction
Program complexity plays an important role in the amount of time spent on development of the program. Software metrics are units of measurement, which are used to characterize software engineering products, processes and people. By careful use, they can allow us to identify and quantify improvement and make meaningful estimates. Developers in large projects use measurements to help them understand their progress towards completion. Managers look for measurable milestones so that they can assess schedule and other commitments. The metrics gathered from historical data also provide an estimate of future similar projects.
Software complexity is defined as the degree to which a system or component has a design or implementation that is difficult to understand and verify [1], i.e. the complexity of code is directly related to its understandability. All the factors that make a program difficult to understand contribute to its complexity.
Various OO complexity and quality metrics have been proposed and their reviews are available in the literature. Rajnish et al [5] have studied the effect of class complexity (measured in terms of lines of code, distinct variable names and functions) on the development time of various C++ classes. Rajnish et al [1] have proposed a complexity metric which is used to measure the complexity of a class at the design stage. Kulkarni et al [4] present a case study of applying design measures to assess software quality. Sanjay et al [3] applied their proposed metric on a real project for empirical validation and compared it with the Chidamber and Kemerer metrics suite [6]; the theoretical, practical and empirical validations and the comparative study prove the robustness of the measure. Alshayeb and Li have presented an empirical study of OO metrics in two processes [7]. They predict that OO metrics are effective in predicting design efforts and lines of source code added, changed and deleted in one case and ineffective in the other. Emam, Benlbari, Goel and Rai validate the various OO metrics for effects of class size [8]. This view is however not agreed to by Evanco [9]. Churcher et al [10] show some of the ambiguities associated with the seemingly simple concept of the number of methods per class. K. K. Agarwal et al [11] presented a set of metrics which measure the robustness of the design. Koh et al [12] attempt to review the 12 OO software metrics proposed in the 1990s by Chidamber and Kemerer [6] and Li [13]. Arisholm, Briand and Foyen study various Java classes to empirically evaluate the effect of dynamic coupling measures on the change proneness of classes [14]. Chae, Kwon and Bae investigated the effects of dependent instance variables on cohesion metrics for object-oriented programs [15]. They also proposed an approach to identify the dependency relations among instance variables. Liu et al [16] proposed new quality metrics that measure the method calling relationships between classes, and they also conducted experiments on five open source systems to evaluate the effectiveness of the new measurement. Basili et al [17] present the results of a study in which they empirically investigated the suite of OO design metrics introduced in [6]; their goal was to assess these metrics as predictors of fault-prone classes and determine whether they can be used as early quality indicators. Yacoub et al [18] defined two metrics for object coupling (Import Object Coupling and Export Object Coupling) and operational complexity based on state charts as dynamic complexity metrics. The metrics are applied to a case study and the measurements are used to compare static and dynamic metrics. Jagdish et al [19] described an improved hierarchical model for the assessment of high-level design quality attributes in OO design. In their model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of OO design metrics. Their model relates design properties such as encapsulation, coupling and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. Munson et al. [20] showed that relative complexity gives feedback on the same complexity domains that many other metrics do. Thus, developers can save time by choosing one metric to do the work of many. Mayo et al. [21] explained the automated software quality measures: Interface and Dynamic metrics. Interface metrics measure the complexity of communicating modules, whereas Dynamic metrics measure the software quality as it is executed. Sandip et al. [22-23] analytically evaluated (against Weyuker’s properties [24]) and empirically validated (against three versions of the same project) proposed inheritance metrics that can be used to measure the quality of OO systems (with special focus on the quality factors “Reuse” and “Design Complexity”) in terms of the class inheritance tree.
The rest of the paper is organized as follows: Section 2 presents Weyuker’s properties. Section 3 presents the description of the proposed metric and its analysis on the data sets. Section 4 presents the conclusion and future scope.
2. Weyuker’s Properties
The basic nine properties proposed by Weyuker [24] are listed below. The notations used are as follows: P, Q, and R denote classes, P+Q denotes the combination of classes P and Q, µ denotes the chosen metric, µ(P) denotes the value of the metric for class P, and P≡Q (P is equivalent to Q) means that the two class designs, P and Q, provide the same functionality. The definition of the combination of two classes is taken here to be the same as suggested by [25], i.e., the combination of two classes results in another class whose properties (methods and instance variables) are the union of the properties of the component classes. Also, “combination” stands for Weyuker’s notion of “concatenation”.
Property 1. Non-coarseness: Given a class P and a metric µ, another class Q can always be found such that, µ(P)≠µ(Q).
Property 2. Granularity: There is a finite number of cases having same metric value. This property will be met by any metric measured at the class level.
Property 3. Non-uniqueness (notion of equivalence): There can exist distinct classes P and Q such that µ(P)=µ(Q).
Property 4. Design details are important: for two class designs, P and Q, which provide the same functionality, it does not follow that the metric values for P and Q will be the same.
Property 5. Monotonicity: For all classes P and Q the following must hold: µ(P) ≤ µ(P+Q) and µ(Q) ≤ µ(P+Q), where P+Q denotes the combination of P and Q.
Property 6. Non-equivalence of interaction: ∃P, ∃Q, ∃R such that µ (P) = µ (Q) does not imply that µ(P+R) = µ (Q+R).
Property 7. Permutation of elements within the item being measured can change the metric value.
Property 8. When the name of the measured entity changes, the metric should remain unchanged.
Property 9. Interaction increases complexity. ∃P and ∃Q such that: µ (P) + µ (Q) < µ (P + Q)
Weyuker’s list of properties has been criticized by some researchers; however, it is a widely known formal approach and serves as an important means of evaluating metrics. In the above list, properties 2 and 8 will be trivially satisfied by any metric that is defined for a class. Weyuker’s second property, “granularity”, only requires that there be a finite number of cases having the same metric value; it will be met by any metric measured at the class level. Property 8 will also be satisfied by all metrics measured at the class level, since they are not affected by the names of the class or of its methods and instance variables. Property 7 requires that a permutation of program statements can change the metric value. This property is meaningful in traditional program design, where the ordering of if-then-else blocks could alter the program logic and hence the metric. In OOD (Object-Oriented Design) a class is an abstraction of a real-world problem and the ordering of the statements within the class has no effect on its eventual execution. Hence, it has been suggested that property 7 is not appropriate for Object-Oriented Design (OOD) metrics.
Analytical evaluation is required in order to mathematically validate the correctness of a measure as an acceptable metric. For example, properties 1, 2 and 3, namely Non-coarseness, Granularity, and Non-uniqueness, are general properties to be satisfied by any metric. By evaluating a metric against each property one can analyze its nature. For example, property 9 of Weyuker will not normally be satisfied by any class-level metric for which high values are an indicator of bad design. If it is satisfied, this would imply a case of bad composition, and the classes, if combined, would need to be restructured. Having analytically evaluated a metric, one can proceed to validate it against data.
Assumptions. Some basic assumptions used in Section 3 have been taken from Chidamber and Kemerer [6] regarding the distribution of methods and instance variables in the discussions for the metric properties.
Assumption 1:
Let $X_i$ = the number of methods in a given class $i$,
$Y_i$ = the number of methods called from a given method $i$,
$Z_i$ = the number of instance variables used by a method $i$.
These are discrete random variables, each characterized by some general distribution function. Further, all the $X_i$s are independent and identically distributed; the same is true for all the $Y_i$s and $Z_i$s. This suggests that the number of methods and variables follows a statistical distribution that is not apparent to an observer of the system. Further, that observer cannot predict the variables and methods of one class based on the knowledge of the variables and methods of another class in the system.
**Assumption 2:**
In general, two classes can have a finite number of “identical” methods in the sense that a combination of the two classes into one would result in one class’s version of the identical methods becoming redundant. For example, a class “foo_one” has a method “draw” that is responsible for drawing an icon on a screen; another class “foo_two” also has a “draw” method. Now a designer decides to have a single class “foo” and combines the two classes. Instead of having two different “draw” methods the designer can decide to just have one “draw” method.
### 3. Propose Metric and its Analysis
#### 3.1 Class Complexity Metric (CCM)
The metric CCM is proposed for class level and will be used in this study for predicting the understandability of classes. To calculate CCM, Total Cyclomatic Complexity (TCC) of a class, Number of Methods (NOMT) of a class, Number of Instance Variables (INST) declared, Number of External Methods (EXT) called, Number of Local Methods (LMC) called, and Total Lines of Code (NLOC) have been taken. The formula for CCM is:
$$CCM = k + w_1 * TCC + w_2 * NOMT + w_3 * INST + w_4 * EXT + w_5 * LMC + w_6 * NLOC$$
where the weights $w_1, w_2, w_3, w_4, w_5, w_6$ and the constant $k$ are derived by least squares regression analysis (a sketch of the standard formulation is given below).
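As a hedged sketch of that formulation (the paper does not spell out which observed response variable the regression was fitted against, so $y$ below is a generic response, e.g. measured understanding effort), the coefficients solve the ordinary least squares problem:

$$\hat{\beta} = (X^{T} X)^{-1} X^{T} y, \qquad \hat{\beta} = (k, w_1, w_2, w_3, w_4, w_5, w_6)^{T},$$

where the $i$-th row of the design matrix $X$ is $(1, TCC_i, NOMT_i, INST_i, EXT_i, LMC_i, NLOC_i)$ for class $i$ and $y_i$ is the corresponding observed response.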
CCM is based upon the following assumptions:
- The number of methods, number of variables, total cyclomatic complexity, total lines of code, number of external methods called, and number of local methods called are predictors of understandability (how much time and effort is required to develop and maintain the class).
- Method names are counted as distinct variable names.
- A local variable of same name in two different blocks is considered to have two distinct variable names.
- CCM directly relates to the understandability of classes. The higher the value of CCM, the lower the understandability (the more complex the class) and the more mental effort is required to design and code it; the converse holds for low CCM values.
The CCM is directly related to the Total Cyclomatic Complexity (TCC) of a class, the Number of Methods (NOMT) of a class, the Number of Instance Variables (INST) declared, the Number of External Methods (EXT) called, the Number of Local Methods (LMC) called, and the Total Lines of Code (NLOC). A good design should have less complex classes, so the objective is to find the correlation coefficient between the proposed complexity measure and a simple indicator of class understandability. This indicator is calculated by multiplying the Total Number of Methods in a Class (TNMC) by the Total Number of Instance Variables in a Class (TNVC) and is named the Degree of Understandability (DU).
Based on the above fact two hypotheses has been designed to test the results:
- $H_{U0}$: a positive correlation of CCM with DU indicates increased understandability of classes, i.e. a direct relation with DU, meaning the classes are less complex in nature.
- $H_{U1}$: a negative correlation of CCM with DU indicates decreased understandability of classes, i.e. an inverse relation with DU, meaning the classes are more complex in nature.
To test these hypotheses, the correlation coefficient of CCM with DU has been calculated for the classes of each project or software system (a sketch of this computation is given below).
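A minimal sketch of this test in C# is shown below; the weight values are placeholders for the coefficients obtained from the regression described in Section 3.1, and the per-class measurements would in practice come from a tool such as JHawk.

```csharp
// Hypothetical sketch: computing CCM per class and its Pearson correlation with DU.
using System;
using System.Linq;

public record ClassMeasures(double TCC, double NOMT, double INST,
                            double EXT, double LMC, double NLOC,
                            double TNMC, double TNVC);

public static class CcmAnalysis
{
    // CCM = k + w1*TCC + w2*NOMT + w3*INST + w4*EXT + w5*LMC + w6*NLOC
    public static double Ccm(ClassMeasures m, double k, double[] w) =>
        k + w[0] * m.TCC + w[1] * m.NOMT + w[2] * m.INST
          + w[3] * m.EXT + w[4] * m.LMC + w[5] * m.NLOC;

    // DU = TNMC * TNVC
    public static double Du(ClassMeasures m) => m.TNMC * m.TNVC;

    // Pearson correlation coefficient between two equally long series.
    public static double Pearson(double[] x, double[] y)
    {
        double mx = x.Average(), my = y.Average();
        double cov = x.Zip(y, (a, b) => (a - mx) * (b - my)).Sum();
        double sx = Math.Sqrt(x.Sum(a => (a - mx) * (a - mx)));
        double sy = Math.Sqrt(y.Sum(b => (b - my) * (b - my)));
        return cov / (sx * sy);
    }

    public static double CorrelationCcmDu(ClassMeasures[] classes, double k, double[] w)
    {
        var ccm = classes.Select(c => Ccm(c, k, w)).ToArray();
        var du = classes.Select(Du).ToArray();
        return Pearson(ccm, du);   // a positive value supports H_U0, a negative one H_U1
    }
}
```

Measurement vectors for the data sets described in Section 3.3 (Set A, Set B, and so on) can then be fed to CorrelationCcmDu to check which hypothesis the data support.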
#### 3.2 Analytical Evaluation of CCM against Weyuker properties
From assumption 1, the number of methods, number of instance variables, number of external methods called, total lines of code, and number of local methods called in a class $P$ and in another class $Q$ are independent and identically distributed; this implies that there is a nonzero probability that there exists a class $Q$ such that $CCM(P) \neq CCM(Q)$, therefore Property 1 (Non-coarseness) is satisfied. Similarly, there is a nonzero probability that there exists a class $R$ such that $CCM(P) = CCM(R)$, therefore Property 3, Non-uniqueness (notion of equivalence), is satisfied. There is a finite number of cases in the system having the same CCM value, and since CCM is measured at the class level, Property 2 (Granularity) is satisfied. The choice of the number of methods, instance variables, external methods called, lines of code, and local methods called is a design decision and independent of the functionality of the class, therefore Property 4 (design details are important) is satisfied. From assumptions 1 and 2, let $CCM(P) = X_P$ and $CCM(Q) = X_Q$; then $CCM(P+Q) = X_P + X_Q - y$, where $y$ is the contribution of the elements common to $P$ and $Q$ (common methods, common instance variables, common external method calls, the cyclomatic complexity and lines of code of the common methods, and common local method calls), so the maximum value of $y$ is $\min(X_P, X_Q)$. Therefore $CCM(P+Q) \geq X_P + X_Q - \min(X_P, X_Q)$. It follows that $CCM(P+Q) \geq CCM(P)$ and $CCM(P+Q) \geq CCM(Q)$, thereby satisfying Property 5 (monotonicity). Now, let $CCM(P) = x$ and $CCM(Q) = x$, and let there exist a class $R$ whose elements (methods, instance variables, external method calls, cyclomatic complexity, lines of code, and local method calls) contribute $\alpha$ in common with $Q$ (as per assumptions 1 and 2) and $\mu$ in common with $P$, where $\alpha \neq \mu$. Let $CCM(R) = r$:
$$CCM(P+R) = x + r - \mu,$$
$$CCM(Q+R) = x + r - \alpha.$$
Therefore $CCM(P+R) \neq CCM(Q+R)$, and Property 6 (non-equivalence of interaction) is satisfied. Property 7 requires that a permutation of program statements can change the metric value. This property is meaningful in traditional program design, where the ordering of if-then-else blocks could alter the program logic and hence the metric. In OOD (Object-Oriented Design), a class is an abstraction of a real-world problem, and the ordering of the statements within the class has no effect on eventual execution. Hence, it has been suggested that Property 7 is not appropriate for OOD metrics. Property 8 is satisfied because when the name of the measured entity changes, the metric remains unchanged. For any two classes $P$ and $Q$, $X_P + X_Q - y < X_P + X_Q$, i.e., $CCM(P+Q) < CCM(P) + CCM(Q)$, so Property 9 (interaction increases complexity) is not satisfied. Table 1 presents the results of the analytical evaluation of CCM against Weyuker's properties.

**TABLE 1:** Analytical evaluation of CCM against Weyuker's properties
<table>
<thead>
<tr>
<th>Property Number</th>
<th>CCM</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>✓</td>
</tr>
<tr>
<td>2</td>
<td>✓</td>
</tr>
<tr>
<td>3</td>
<td>✓</td>
</tr>
<tr>
<td>4</td>
<td>✓</td>
</tr>
<tr>
<td>5</td>
<td>✓</td>
</tr>
<tr>
<td>6</td>
<td>✓</td>
</tr>
<tr>
<td>7</td>
<td>✓</td>
</tr>
<tr>
<td>8</td>
<td>✓</td>
</tr>
<tr>
<td>9</td>
<td>×</td>
</tr>
</tbody>
</table>
✓: Metric satisfies the properties
×: Metric does not satisfy the properties
#### 3.3 Analysis of Data
This section presents the description of the data collection, the algorithm of the proposed work, a summary of the graphs and tables, and their interpretation.
3.3.1 Data Collection
This section presents an outline of the applied approach. The variables of interest in this study are TCC, NOMT, INST, EXT, LMC, and NLOC, which are to be modeled by CCM. These six values were collected for the classes of three different project categories. In each project category, the author gave the team members of the project the responsibility of extracting the parameters/variables used in CCM.
The first project is related to an “Account Department” (named Set A). It was developed in Java by well-experienced Post Graduate (PG)/Under Graduate (UG) teams. The project involved 5 team members and contains 85 Java classes.
The second project is related to a “Bio-Technology Department” (named Set B). It was developed by PG teams with a sound knowledge of Java programming, who built a small tool for the department's research work. The project involved 2 team members and contains 20 Java classes.
The third project is related to a “Corporate Department” (named Set C). It was developed in Java by experienced PG teams as an on-line shopping application for faculty members. The project involved 3 team members and contains 20 Java classes.
3.3.2 Algorithm of the Proposed Work
This section presents the algorithm of the proposed work, which consists of the following steps:
1. Propose the quality metric.
2. Identify the quality factors (in this study, predicting the understandability of classes in software projects).
3. Collect data from three different project categories (named Data Set A, Data Set B, and Data Set C).
4. LOOP: for each data set, perform the following actions:
   a) Generate the TCC, NOMT, INST, EXT, LMC, and NLOC values used in CCM with the JHawk Java code metrics tool.
   b) Generate the values of the weights w1, w2, w3, w4, w5, w6 and the constant k used in CCM by least-squares regression analysis in MATLAB 6.1.
5. LOOP: for each data set:
   a) Compute summary statistics for TCC, NOMT, INST, EXT, LMC, NLOC, and DU using IBM SPSS.
6. LOOP: for each data set:
   a) Compute the correlation coefficient of CCM with DU, as well as the correlation coefficients of TCC, NOMT, INST, EXT, LMC, and NLOC with DU, using MATLAB 6.1 (a minimal computational sketch of this step follows this list).
   b) Plot graphs and analyse the data using IBM SPSS.
END LOOP;
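The sketch below is a minimal, hypothetical illustration of step 6a; it is not the authors' MATLAB/SPSS pipeline. Given weights already derived by regression, it computes CCM for a few invented classes and the Pearson correlation of CCM with DU. All numbers and weights below are made up for illustration.

```java
import java.util.Arrays;

public class CcmCorrelationSketch {

    // CCM = k + w1*TCC + w2*NOMT + w3*INST + w4*EXT + w5*LMC + w6*NLOC
    static double ccm(double k, double[] w, double[] x) {
        double sum = k;
        for (int i = 0; i < w.length; i++) {
            sum += w[i] * x[i];
        }
        return sum;
    }

    // Sample Pearson correlation coefficient between two equally long series.
    static double pearson(double[] a, double[] b) {
        double meanA = Arrays.stream(a).average().orElse(0.0);
        double meanB = Arrays.stream(b).average().orElse(0.0);
        double cov = 0.0, varA = 0.0, varB = 0.0;
        for (int i = 0; i < a.length; i++) {
            double da = a[i] - meanA, db = b[i] - meanB;
            cov += da * db;
            varA += da * da;
            varB += db * db;
        }
        return cov / Math.sqrt(varA * varB);
    }

    public static void main(String[] args) {
        // Hypothetical per-class measurements: {TCC, NOMT, INST, EXT, LMC, NLOC}.
        double[][] classes = {
            { 7, 3,  8, 25, 0, 67},
            {12, 5, 14, 30, 1, 98},
            { 3, 2,  2, 10, 0, 31}
        };
        // Hypothetical DU = TNMC * TNVC values for the same classes.
        double[] du = {24, 70, 4};
        // Hypothetical weights and constant (derived by regression in the study).
        double[] w = {-1.98, 9.79, 3.29, -0.44, 17.45, -0.12};
        double k = 0.0;

        double[] ccmValues = new double[classes.length];
        for (int i = 0; i < classes.length; i++) {
            ccmValues[i] = ccm(k, w, classes[i]);
        }
        System.out.println("CCM values:    " + Arrays.toString(ccmValues));
        System.out.println("corr(CCM, DU): " + pearson(ccmValues, du));
    }
}
```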
3.3.3 Empirical Data
Multivariate regression analysis was applied to all three data sets and correlation coefficients were calculated. The summary statistics, correlation coefficients, and graphs for CCM for the three data sets are shown in the Appendix at the end of this paper (Table 2, Table 3, Table 4, Table 5, Table 6, Fig. 1, Fig. 2, and Fig. 3).
3.3.4 Discussion
CCM has been applied to each class of the three software projects. In total, 125 Java classes were given as input to the JHawk tool to calculate the values of CCM, TCC, NOMT, INST, EXT, LMC, NLOC, and DU for each data set. The correlation-coefficient approach was used to validate the performance of the proposed metric for predicting the understandability of classes. The proposed complexity metric is directly related to TCC, NOMT, INST, EXT, LMC, and NLOC and to the relationships among them; more relationships increase the understandability of classes, and a good design should have classes that are less complex in nature.
Certain observations can be made from Table 6. The first six columns list the correlation coefficients obtained when TCC, NOMT, INST, EXT, LMC, and NLOC are independently related with DU. The seventh column lists the correlation coefficient obtained when all six variables are combined in a regression against DU. In every case this column has the highest value in its row. In the first case, the data were collected from a well-defined, similar group of PG/UG teams (with very similar programming experience), and CCM turned out to be a better predictor of the understandability of classes. In the second case, the data were collected from a novice group of PG teams (with a very sound knowledge of Java), and CCM turned out to be a better predictor of DU than TCC, NOMT, INST, EXT, LMC, or NLOC individually. In the last case, the data came from experienced PG teams, and CCM turned out to be the best predictor of DU.
Overall, CCM has a strong direct relation with DU in Data Set A and Data Set C. For Data Set B, CCM has a direct relation with DU when the variables are combined but a weaker direct relation when they are measured individually, so a redesign may be needed in Data Set B to achieve better understandability of its classes.
4. Conclusion and Future Scope
In this paper, an attempt has been made to define a new complexity metric, CCM, which is used to predict the understandability of classes in software projects. On evaluating CCM against a set of standard criteria, it is found to possess a number of desirable properties, and the evaluation suggests some ways in which the OO approach may differ, in terms of desirable or necessary design features, from more traditional approaches. CCM satisfies the majority of the properties presented by Weyuker, with one notable exception, Property 9 (interaction increases complexity). Failing to meet Property 9 implies that the complexity value could increase rather than decrease when a class is divided into more classes; in other words, complexity can increase when classes are split.
In addition to the proposal and analytical evaluation, this paper has also presented empirical data on CCM from three software projects, all developed in Java. From Table 6, it is found that CCM turned out to be the best predictor of the understandability of classes in the chosen software projects.
In this study, CCM is used for predicting the understandability of classes, and through CCM one can measure this understandability and identify complex designs.
The future scope includes some fundamental issues:
✓ To analyze the behaviour of the proposed metric with respect to performance indicators such as maintenance effort and system performance.
✓ Another interesting study would be to gather different complexity metrics at various intermediate stages of a project. This would provide insight into how application complexity evolves and how it can be managed/controlled through the use of metrics.
References
Appendix
**TABLE 2:** Summary Statistics for the Data Set A
<table>
<thead>
<tr>
<th></th>
<th>Minimum</th>
<th>Maximum</th>
<th>Mean</th>
<th>Std. Deviation</th>
</tr>
</thead>
<tbody>
<tr>
<td>TCC</td>
<td>1.00</td>
<td>16.00</td>
<td>7.1294</td>
<td>3.18773</td>
</tr>
<tr>
<td>NOMT</td>
<td>1.00</td>
<td>10.00</td>
<td>3.1647</td>
<td>1.12172</td>
</tr>
<tr>
<td>INST</td>
<td>0.00</td>
<td>34.00</td>
<td>7.7765</td>
<td>6.34218</td>
</tr>
<tr>
<td>EXT</td>
<td>6.00</td>
<td>51.00</td>
<td>25.6941</td>
<td>9.65232</td>
</tr>
<tr>
<td>LMC</td>
<td>0.00</td>
<td>1.00</td>
<td>0.2353</td>
<td>0.42670</td>
</tr>
<tr>
<td>NLOC</td>
<td>30.00</td>
<td>157.00</td>
<td>67.5059</td>
<td>20.85587</td>
</tr>
<tr>
<td>DU</td>
<td>0.00</td>
<td>120.00</td>
<td>26.5294</td>
<td>22.57453</td>
</tr>
</tbody>
</table>
Figure 1: Parameter values used in CCM for Data Set A

**TABLE 3:** Summary Statistics for the Data Set B
<table>
<thead>
<tr>
<th></th>
<th>Minimum</th>
<th>Maximum</th>
<th>Mean</th>
<th>Std. Deviation</th>
</tr>
</thead>
<tbody>
<tr>
<td>TCC</td>
<td>0.00</td>
<td>59.00</td>
<td>17.30</td>
<td>16.89628</td>
</tr>
<tr>
<td>NOMT</td>
<td>0.00</td>
<td>48.00</td>
<td>10.95</td>
<td>11.17080</td>
</tr>
<tr>
<td>INST</td>
<td>0.00</td>
<td>12.00</td>
<td>2.20</td>
<td>2.83957</td>
</tr>
<tr>
<td>EXT</td>
<td>0.00</td>
<td>43.00</td>
<td>6.70</td>
<td>11.61261</td>
</tr>
<tr>
<td>LMC</td>
<td>0.00</td>
<td>8.00</td>
<td>0.90</td>
<td>2.14966</td>
</tr>
<tr>
<td>NLOC</td>
<td>7.00</td>
<td>400.00</td>
<td>88.050</td>
<td>100.47910</td>
</tr>
<tr>
<td>DU</td>
<td>0.00</td>
<td>60.00</td>
<td>17.150</td>
<td>20.75490</td>
</tr>
</tbody>
</table>
Figure 2: Parameter values used in CCM for Data Set B

**TABLE 4:** Summary Statistics for the Data Set C
<table>
<thead>
<tr>
<th></th>
<th>Minimum</th>
<th>Maximum</th>
<th>Mean</th>
<th>Std. Deviation</th>
</tr>
</thead>
<tbody>
<tr>
<td>TCC</td>
<td>1.00</td>
<td>21.00</td>
<td>5.1111</td>
<td>5.77916</td>
</tr>
<tr>
<td>NOMT</td>
<td>1.00</td>
<td>15.00</td>
<td>4.3889</td>
<td>4.48709</td>
</tr>
<tr>
<td>INST</td>
<td>0.00</td>
<td>17.00</td>
<td>1.8889</td>
<td>4.39102</td>
</tr>
<tr>
<td>EXT</td>
<td>0.00</td>
<td>39.00</td>
<td>3.5556</td>
<td>9.31932</td>
</tr>
<tr>
<td>LMC</td>
<td>0.00</td>
<td>2.00</td>
<td>0.500</td>
<td>0.85749</td>
</tr>
<tr>
<td>NLOC</td>
<td>4.00</td>
<td>172.00</td>
<td>26.6111</td>
<td>42.18048</td>
</tr>
<tr>
<td>DU</td>
<td>0.00</td>
<td>204.00</td>
<td>23.0556</td>
<td>54.46043</td>
</tr>
</tbody>
</table>
Table 5: Values of the coefficients for the six independent variables and the constant used in CCM, for the three data sets, obtained by least-squares regression analysis
<table>
<thead>
<tr>
<th>SET</th>
<th>W_1</th>
<th>W_2</th>
<th>W_3</th>
<th>W_4</th>
<th>W_5</th>
<th>W_6</th>
<th>k</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>-1.9865</td>
<td>9.7890</td>
<td>3.2926</td>
<td>-0.4374</td>
<td>17.4461</td>
<td>-0.1164</td>
<td>0</td>
</tr>
<tr>
<td>B</td>
<td>0.4253</td>
<td>0.6650</td>
<td>5.2092</td>
<td>0.6272</td>
<td>1.5007</td>
<td>-0.1403</td>
<td>0.0007</td>
</tr>
<tr>
<td>C</td>
<td>-6.3444</td>
<td>7.4854</td>
<td>15.2038</td>
<td>1.4431</td>
<td>-2.8707</td>
<td>-0.3579</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 6: Correlation Coefficient with respect to DU for the three different Data Sets
<table>
<thead>
<tr>
<th></th>
<th>TCC</th>
<th>NOMT</th>
<th>INST</th>
<th>EXT</th>
<th>LMC</th>
<th>NLOC</th>
<th>CCM</th>
</tr>
</thead>
<tbody>
<tr>
<td>SET A</td>
<td>0.5183</td>
<td>0.5917</td>
<td>0.9006</td>
<td>0.5187</td>
<td>0.2786</td>
<td>0.8513</td>
<td>0.9586</td>
</tr>
<tr>
<td>SET B</td>
<td>0.3533</td>
<td>0.3934</td>
<td>0.6701</td>
<td>0.2146</td>
<td>0.2198</td>
<td>0.2112</td>
<td>0.9085</td>
</tr>
<tr>
<td>SET C</td>
<td>0.9248</td>
<td>0.8121</td>
<td>0.9950</td>
<td>0.7939</td>
<td>0.7766</td>
<td>0.9567</td>
<td>0.9980</td>
</tr>
</tbody>
</table>
Author Profile
**Kumar Rajnish:** He is an Assistant Professor in the Department of Information Technology at Birla Institute of Technology, Mesra, Ranchi, Jharkhand, India. He received his PhD in Engineering from Birla Institute of Technology, Mesra, Ranchi, Jharkhand, India in 2009, his MCA degree from Madan Mohan Malaviya Engineering College, Gorakhpur, Uttar Pradesh, India in 2001, and his B.Sc. in Mathematics (Honours) from Ranchi College, Ranchi, India in 1998. He has 28 international and national research publications. His research areas are object-oriented metrics, object-oriented software engineering, software quality metrics, programming languages, and database systems.
ASAAM-T
Aspectual Software Architecture Analysis in Eclipse
F.B. Scholten (0002550)
23-05-2005
Abstract
The ASAAM is a scenario-based architecture evaluation method which analyzes crosscutting concerns in conceptual software architectures. As with most software engineering methods, tool support is needed to properly evaluate and improve the method and its application. Additionally, tool support can provide inspiration for further research in the area of aspectual software architecture design. In the context of a free assignment an Eclipse plugin called ASAAM-T has been developed to support the ASAAM. The plugin has been implemented and designed by the author and the assignment has been supervised by Dr. Bedir Tekinerdogan of the Software Engineering chair of the University of Twente. This report provides an overview of the capabilities of ASAAM-T, its design and implementation, and a roadmap of possible improvements for further development of ASAAM-T.
## Contents
1 **Introduction**
1.1 The Aspectual Software Architecture Analysis Method
2 **ASAAM-T User Guide**
2.1 Architecture Development
2.2 Scenario Development
2.3 Scenario Evaluation
2.3.1 Screen Layout
2.3.2 Evaluating Scenarios
2.4 Architectural Component Identification
2.5 Impact Analysis
3 **Architectural Design of ASAAM-T**
3.1 Multi-page Editor
3.2 Method model
4 **Implementing ASAAM-T**
4.1 The Eclipse Modeling Framework
4.1.1 Code generation with EMF
4.1.2 EMF code details
4.1.3 Using the generated code
4.2 ASAAM Models Implementation
4.2.1 Method Model
4.2.2 Artifact Model
4.2.3 Asaam Evaluation Model
4.3 Implementing the Multi-page Editor
4.3.1 Artifact Development
4.3.2 Method Application
4.3.3 Result Overview
4.4 Improving the current implementation
5 **Conclusions**
5.1 ASAAM-T Requirements
5.2 Educational results
6 **Discussion**
6.1 Usability
6.2 ASAAM Workflow
1 Introduction
1.1 The Aspectual Software Architecture Analysis Method
A software architecture is a high level design, consisting of components and the relationships between them [1]. During the design of conceptual software architectures, software engineers define the major responsibilities and behaviours of these components. Each component performs one of the major concerns of the application. Adopting the principle of 'separation of concerns' eases maintenance and evolution, since changes corresponding to a single concern will result in changes localized at a single component.
Unfortunately, there are concerns which cannot be localized in a single component. They are called crosscutting concerns. The behaviour corresponding with the concern is scattered over multiple components; the concern crosscuts several components. Tangled and duplicated code in multiple classes can be an indicator of the presence of a cross-cutting concern. Fortunately, crosscutting concerns can be localized using aspects at the programming level, for example by using the Java extension AspectJ [2]. The goal of ASAAM is to identify crosscutting concerns at the architecture design level [3]. The ASAAM-T plugin can be used to perform ASAAM evaluations of conceptual architectures, by defining an architecture and a set of scenarios and following the steps of ASAAM.
This paper is organized in the following manner. The first section presents a user guide for ASAAM-T. It describes how users can define and analyze software architectures with ASAAM-T. The following sections describe the design and implementation of ASAAM-T. A list of knowledge domains that have been used as inspiration for the design of ASAAM is provided. The design part concludes with suggestions for improving the design of ASAAM-T, as well as new features relevant for further development and hints on how these features can be designed. After this overview, the following section describes the implementation aspects of ASAAM-T in the Eclipse environment. It presents several Eclipse plugins and frameworks which were used to implement ASAAM-T, and additionally shows how the existing implementation can be improved. Finally, we provide an overview of the results of the ASAAM-T assignment.
2 ASAAM-T User Guide
This section provides a short tutorial for performing ASAAM evaluations with ASAAM-T. To start an ASAAM-T evaluation, open the navigator window and create a file with an ‘.eval’ extension in a project. If necessary create a new project first. Next, open the file in the default ASAAM-T editor. ASAAM-T is a multi-page editor, which has a separate page for each major activity in ASAAM. The following sections provide a walkthrough of each of these activities.
2.1 Architecture Development
The first activity of ASAAM is architecture development. Figure 1 shows the page for developing a candidate architecture. This page provides an interface to create, remove and modify architectural components. By clicking the ‘add component’ button, a new component is added to the viewer on the left of the screen. The ‘remove’ and ‘clear’ buttons can be used to remove a few selected components or all of them. The viewer in the ‘architectural components’ section consists of two columns: the first column shows an id, the other column shows the component’s name. The user can change the component’s name by clicking on the text in the second column. When a component is selected, the ‘architectural component details’ section on the right part of the screen becomes visible. This section provides an interface to edit the component’s description. Naturally, the architectural components defined through this interface originate from a previous architecture development phase.
2.2 Scenario Development
The second type of input artifact of ASAAM is the scenario. The user interface resembles the one on the previous page: you can create scenarios in a way similar to how architectural components are created on the architecture development page. Scenarios consist of a short description as well as a more detailed description. The detailed description provides additional contextual information about the scenario interaction. Only the short description is used in the later parts of the evaluation.
2.3 Scenario Evaluation
2.3.1 Screen Layout
The scenario evaluation page provides an interface for evaluating scenarios according to the ASAAM rules R1 through R6. The top of the screen shows a viewer containing all scenarios defined through the previous page. The columns show the scenario’s id, its description, its ASAAM type, the number of interacting components and its crosscutting nature. By selecting a scenario from this viewer, the lower half of the screen becomes active. The middle of the page shows a second viewer containing all components defined through the architecture development page. The columns in this viewer show the component’s id, its name, a dropdown box containing interaction types and a textual impact. This viewer is used for the scenario evaluation activity. The bottom part of the page shows the question ‘Is the scenario scattered?’ with ‘yes’ and ‘no’ buttons.

Figure 1: The user defines a candidate architecture

Figure 2: The user creates several scenarios

Figure 3: The user evaluates individual scenarios
### 2.3.2 Evaluating Scenarios
To evaluate a scenario, the user first selects a scenario from the first viewer. The user then turns to the second viewer and, by selecting ‘direct’ or ‘indirect’ from the dropdown box in the second column, decides what kind of interaction the scenario has with the component on that row. If the empty field in the dropdown box is selected, the scenario has no interaction with the corresponding component. After this activity, the user answers ‘yes’ or ‘no’ to the question asking whether the scenario is scattered. After all scenarios have been evaluated, each scenario has been identified with an ASAAM-T scenario type.
### 2.4 Architectural Component Identification
This page shows the interacting scenarios for each component. The viewer at the top of the page has four columns, which show the component’s id, its name, its ASAAM type and the number of interacting scenarios. By selecting a component from the viewer, three additional viewers appear in the middle of the screen. These viewers show the direct, indirect and aspectual scenarios interacting at the selected component. Based on this information the user can answer ‘yes’ or ‘no’ to several questions, which ask whether the component performs semantically close scenarios and whether the component can be decomposed. This entire activity activates ASAAM rules R7 through R16.
2.5 Impact Analysis
The final page shows the overall impact analysis of the architecture. The columns of the viewer represent components and the rows are scenario ids. A ‘D’ in the viewer means that a scenario interacts directly at a component, while a ‘T’ means an indirect interaction. If a red ball is present, the interacting scenario is aspectual.
Figure 5: The impact analysis shows the impact of scenarios on components
3 Architectural Design of ASAAM-T
ASAAM-T is a CASE tool for evaluating software architectures. A CASE tool provides an interface to apply a certain software development method. The two main concerns for ASAAM-T are method application and user interaction. To support these concerns, ASAAM-T is built as a layered system with two layers, as depicted in Figure 6. The upper layer is a multi-page editor, the lower layer is the method model, responsible for method execution and updating the multi-page editor. The method model is the most important part of ASAAM-T, as it directly represents ASAAM. We will discuss the design of the method model and the multi-page editor in the next sections.
3.1 Multi-page Editor
The multi-page editor provides an interface for user interaction in ASAAM-T. Each page in the editor provides an interface for a subactivity in ASAAM, such as candidate architecture development, scenario evaluation, and so on. All pages provide a different view of the method model. The user can switch pages to continue one of the different activities. All user interface components are refreshed if the method model changes state. A detailed overview of the components contained in the multi-page editor can be found in the section 4.3.
3.2 Method model
The domain of method engineering has been used as a source of inspiration for ASAAM and its concepts have been used to design ASAAM-T. Method engineering is the study of designing methods. Several concepts from [4] have been used as a basis for our method model. It demonstrates the concepts ‘method’, ‘artifact’, ‘rule’ and ‘process’. The goal of methods is to transform or manipulate artifacts using method rules. Artifacts are the concepts relevant to the domain of interest. In the case of ASAAM, the domain is aspectual architecture evaluation and the artifacts are multiple types of scenarios and components. Method rules manipulate artifacts through actions based on conditions of artifact properties. These method rules are evaluated in a certain order. A process determines the order of rule evaluation. Figure 7 depicts the method model used in the design of ASAAM-T.
4 Implementing ASAAM-T
This section describes implementation aspects of ASAAM-T. Since ASAAM-T uses a lot of code generated by the Eclipse Modeling Framework plugin, we will first provide an introduction to this framework. After this introduction, we will describe how the different components of ASAAM-T are implemented. That part is divided in three sections. The first section describes the implementation of the method model, which implements the core of ASAAM. The second section describes the artifact model in more detail. Finally, the implementation of the overall ASAAM evaluation is described. At the end we will give an overview of possible improvements of the current implementation.
4.1 The Eclipse Modeling Framework
4.1.1 Code generation with EMF
EMF is capable of generating code from annotated Java interfaces or from an Ecore model created with the Ecore editor, available through the EMF plugin [5]. An Ecore model is an XMI document which specifies classes, methods, attributes and relationships between classes, such as inheritance and composition. In the development of ASAAM-T we have used the Ecore editor from EMF to create the models described in later sections. After the model definition, a generator model, or genmodel, is used to generate Java code to be used as a base model for the ASAAM-T plugin. The Java interfaces generated from the Ecore model can be manually changed by inserting new methods or member variables and inserting a comment /* @generated NOT */. In the next section we will describe the nature of the code that is generated by the EMF genmodel. After this, we provide short examples on how we have used the generated code in ASAAM-T.
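As a small illustration of that convention, the snippet below shows a hand-edited method inside an otherwise generated class; the class, attributes and method bodies are hypothetical, and only the `@generated NOT` marker is taken from the text above.

```java
// Hypothetical class standing in for an EMF-generated implementation class.
public class ComponentItem {
    private String id = "c1";
    private String name = "Logging";

    /** @generated */
    public String getId() { return id; }

    /** @generated */
    public String getName() { return name; }

    /**
     * Hand-written addition; the modified tag below tells the generator
     * to leave this method untouched when the code is regenerated.
     * @generated NOT
     */
    public String getLabel() {
        return getName() + " (" + getId() + ")";
    }
}
```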
4.1.2 EMF code details
The code generation process of EMF can be configured through the genmodel file. Several features can be toggled, such as the insertion of notification code, the choice of types for reference values, and so on. EMF can generate three plugins containing generated code: the model plugin, the edit plugin and the editor plugin. The model plugin contains a direct implementation of the Java interfaces, which respects the cardinalities of reference values. For each attribute, get and set methods are generated. Notification code is inserted around every change of attribute or reference values; this code involves calling the eNotify method of the org.eclipse.emf.common.notify interface with certain parameters. In the case of multiplicity-many references, get-methods return an org.eclipse.emf.common.util.EList containing the referenced objects. The model plugin also contains a Factory class which has factory methods for each class in the model. The edit plugin, com.asaam.architectureevaluation.emfmodel.edit, contains an implementation of adapter classes, used for editing and viewing the generated model objects. A more thorough description of the workings of the EMF edit framework can be found in [6].
The most important classes are the ItemProvider classes. ItemProviders can wrap model objects so that they can be used in combination with JFace viewers, such as TreeViewers or TableViewers. ItemProviders have several responsibilities, as shown in [6]. Most importantly, they delegate notification of changes in a model object to the JFace viewer, so that the viewer updates if a model element changes state. Second, they implement one of the many ContentProvider and LabelProvider interfaces, which define the text labels and icons that are shown in the JFace viewers. The third plugin contains the code for a complete editor. This editor can be used to create and edit model elements. Because the ASAAM-T editor was going to be very different from the generated editor, we have not used the code from this plugin.
4.1.3 Using the generated code
ASAAM-T uses a lot of JFace viewers for displaying collections of artifacts, such as a selection of scenarios. Figure 8 shows an example of how to initialize a JFace viewer with generated EMF model objects. A JFace viewer can be initialized by setting ContentProviders and LabelProviders and by setting a container object, in this case the architecture, as its input. Normally we would have to create the provider classes ourselves, but we can use the AdapterFactoryContentProvider and AdapterFactoryLabelProvider classes to wrap our generated ItemProviderAdapterFactory class, the ArchitectureEvaluationItemProviderAdapterFactory. The ItemProviderAdapterFactory has references to all ItemProviders. If the content of the viewer needs to be refreshed, the AdapterFactoryContentProvider delegates this command to the wrapped ItemProviderAdapterFactory, which delegates the command in turn to the ItemProvider object of the model object, which computes a new text label using the model object. This mechanism works in the same way with AdapterFactoryLabelProviders to generate new icons.
Usually, an ItemProviderAdapterFactory needs to delegate to other ItemProviders to create icons and text labels. In case of ASAAM-T we have many viewers and pages which each need different icons and text labels. For example, the viewers in the scenario evaluation page have different column names and values than the viewers in the component identification page, even though the input to the viewers is the same, the architecture object. The problem is that we need multiple presentations of a model, while the AdapterFactory design used in EMF only allows a single presentation for a type. We have added a setType and getType method to parameterize the workings of the factory ArchitectureEvaluationItemProviderAdapterFactory. This is not an elegant solution, as it goes against the design considerations of the AdapterFactory. The author has not found a good solution for this problem, using the current documentation and knowledge about EMF. However, the EMF framework is being used more and more and many solutions will present themselves as more different types of modelling problems will be tackled.
Figure 8: JFace Viewer initialization
```java
// Parameterize the generated adapter factory so it produces the content and
// labels needed for the component analysis page (the setType workaround).
ArchitectureEvaluationItemProviderAdapterFactory af = new ArchitectureEvaluationItemProviderAdapterFactory();
af.setType(ArchitectureEvaluationItemProviderAdapterFactory.COMPONENT_ANALYSIS);
// Wrap the factory so the JFace viewer obtains its content and labels
// from the generated ItemProviders.
archViewer.setContentProvider(new AdapterFactoryContentProvider(af));
archViewer.setLabelProvider(new AdapterFactoryLabelProvider(af));
// Speed up element lookups and feed the viewer the architecture as input.
archViewer.setUseHashlookup(true);
archViewer.setInput(eval.getArchitecture());
```
Figure 9: Asaam Method Interfaces
4.2 ASAAM Models Implementation
4.2.1 Method Model
The method model described in Section 3.2 was implemented using the interfaces described in Figure 9. We could define the interfaces of the method model almost directly in the EMF Ecore editor. A few classes needed to be altered manually after code generation. For example, method rules needed to be instantiated in the AsaamMethod class, and rules needed to be registered to a method, declaring which artifact they manipulate when a condition is met.
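The following minimal Java sketch illustrates this rule registration and evaluation idea; the real interfaces are generated by EMF from the Ecore model of Figure 9, so the names and signatures below are assumptions made for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for an ASAAM artifact (scenario or component).
interface Artifact {
    String getName();
}

// A method rule fires on an artifact when its condition holds and then
// manipulates that artifact (e.g. assigns it an ASAAM type).
interface MethodRule {
    boolean conditionHolds(Artifact artifact);
    void apply(Artifact artifact);
}

class EvaluationMethod {
    private final List<MethodRule> rules = new ArrayList<>();

    // Rules are registered to the method, declaring what they manipulate.
    void register(MethodRule rule) {
        rules.add(rule);
    }

    // The process determines the order of rule evaluation; here it is
    // simply the registration order.
    void evaluate(List<? extends Artifact> artifacts) {
        for (MethodRule rule : rules) {
            for (Artifact artifact : artifacts) {
                if (rule.conditionHolds(artifact)) {
                    rule.apply(artifact);
                }
            }
        }
    }
}
```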
4.2.2 Artifact Model
Figure 10 describes some more detailed interfaces of the artifacts. The artifact model consists of the superinterface AsaamArtifact and two subinterfaces, AsaamComponent and AsaamScenario. These interfaces define the properties necessary for an ASAAM evaluation, such as the ASAAM types and properties, and descriptions.
Finally, the fourth interface in this model is the MappedScenario subinterface, which inherits from AsaamScenario. This interface defines impact methods for AsaamScenario artifacts which have been mapped to components. Since scenarios can be evaluated against multiple components, impact information needs to be kept for each specific evaluation. We have solved this problem by creating a decorator which wraps the original scenario and introduces localized state, namely the impact of the scenario.
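A minimal Java sketch of that decorator idea is shown below; the names and methods are hypothetical simplifications and do not correspond exactly to the generated EMF interfaces.

```java
// Illustrative stand-in for the generated scenario interface.
interface Scenario {
    String getShortDescription();
}

// Decorator: wraps the original scenario and adds state (the impact) that is
// local to one particular scenario-to-component mapping.
class MappedScenarioSketch implements Scenario {
    private final Scenario wrapped;
    private String impact = "";

    MappedScenarioSketch(Scenario wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public String getShortDescription() {
        // Shared information is delegated to the original scenario.
        return wrapped.getShortDescription();
    }

    // Localized state: the impact of this scenario at one component.
    void setImpact(String impact) { this.impact = impact; }
    String getImpact() { return impact; }
}
```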
4.2.3 Asaam Evaluation Model
In the previous sections we have described the detailed definition of artifacts and ASAAM method interfaces. An actual ASAAM evaluation involves several scenarios and architectural components. This section describes interfaces for working with complete ASAAM evaluations. These are top level interfaces which act as containers for several artifacts in ASAAM. The graphical editor uses these interfaces as an access point to the entire ASAAM evaluation. Figure 11 shows the interfaces for Architecture, ScenarioSelection and ArchitectureEvaluation.
The AsaamEvaluation interface is the top level interface for an ASAAM evaluation. It provides methods which access the AsaamArchitecture and AsaamScenarioSelection interfaces. Additionally, it defines methods which compute metrics about the evaluation. The Architecture interface provides methods for adding, removing and retrieving architectural components. The ScenarioSelection interface provides similar functionality for AsaamScenarios.
4.3 Implementing the Multi-page Editor
User interaction is the most important part of applying any software engineering method. Several requirements were identified for the ASAAM-T user interface. First, the user must have an interface for creating artifacts. Second, the user needs an interface to inspect and manipulate artifact properties in order to apply the ASAAM. The process of applying an ASAAM evaluation consists of artifact development, method application and presenting a results overview. We have used a multi-page editor for implementing the ASAAM-T GUI. These pages use SWT buttons and text fields and JFace viewers. SWT is the standard GUI toolkit on which Eclipse itself is built; JFace is a high-level API built on SWT, used to ease GUI programming in Eclipse. Additionally, we used the Form API from Eclipse 3.0 to create a flat layout which resembles the Eclipse plug-in editor [7]. For each of the mentioned process steps we will describe the user interface in a more detailed fashion.
4.3.1 Artifact Development
ASAAM uses two types of input artifacts: scenarios and architectural components. In the artifact development phase the user needs to create several artifacts of both types, so we have decided to create a separate page for each type. Each page contains a Master-Details structure. The master part contains a viewer of the created artifacts with buttons to add and remove artifacts. The details part shows the properties of the artifact selected in the viewer. The user can manipulate properties in the details part.
4.3.2 Method Application
The method application phase of ASAAM-T consists of the third and fourth page. The ASAAM method rules manipulate multiple scenario and component artifacts at the same time and create relationships between artifacts. For example, the user needs to view all direct scenarios of a component, as well as all components interacting at a scenario. As a consequence, the scenario and component artifacts each get their own page. These pages consist of several viewers for scenario and component types. JFace filters are used to view only certain ASAAM artifact types.
4.3.3 Result Overview
The method application pages contain a partial overview of the results of the ASAAM method. However, one final page is needed to show the complete overview of the relationships between scenarios and components. This page consists of a grid which shows interactions between components and scenarios.
4.4 Improving the current implementation
The current implementation of the ASAAM-T GUI consists of a combination of SWT and JFace controls that represent a view of the underlying method model. The JFace components use a clearly defined protocol for communicating with the method model: the objects in the method model notify their item providers, which in turn notify their JFace viewers. The SWT controls, however, do not use such a clearly defined protocol for communication with the method model. As a consequence, notification and manipulation of the model through SWT controls had to be implemented in an ad hoc manner. An improvement to the existing implementation would be to create a stable model-view-controller design. This will require a view which is uniformly updated when the model sends notifications.
5 Conclusions
5.1 ASAAM-T Requirements
The delivered tool supports only a small subset of the requirements originally presented in the requirements specification. Supporting tools, such as a graphical architecture modeler and an architecture refactoring tool, were cancelled due to time and scope constraints. The tool can be used to evaluate simple software architectures according to the ASAAM method. All the associated method rules are implemented accordingly, as described in [3]. Software architectures and scenario selections created with the corresponding generated EMF editors can be saved and loaded into the evaluation tool. Graphical architecture modeling was beyond the scope of the assignment due to time constraints. Furthermore, the use of composite architectural components is not implemented.
5.2 Educational results
During this assignment the author has acquired a considerable amount of experience with developing for the Eclipse platform. Eclipse provides many frameworks to develop sophisticated GUIs. Some of them, such as JFace, SWT and EMF, have been used extensively, while many others, such as Wizards and Views, were not used at all. We noticed that developing in the Eclipse environment had a steeper learning curve than expected. Besides learning to program in the Eclipse environment, the author has learned to use the JUnit testing framework. During this assignment much experience has been gained, not only in implementing applications in a new environment, the Eclipse environment, but also in dealing with the complexity of a problem domain. The main steps of the development process were the simultaneous investigation of the possibilities and limitations of the Eclipse platform on the one hand, and finding solutions to problems from the problem domain of the tool on the other hand. One important lesson the author learned is that every time one encounters a new technology, time is needed to practice and learn to use it. In this project the time needed to learn to use and implement tools in Eclipse was underestimated.
6 Discussion
This last section proposes several suggestions for additional features of and ideas about the ASAAM-T. During the development of ASAAM-T, numerous additional requirements were identified that would be relevant for future development. The goal of this section is to provide ways in which ASAAM-T can evolve to a more complete CASE tool. There are many possible directions in which ASAAM-T can evolve. We now present several features and concepts from different knowledge domains which can be implemented in the future. The collection of items mentioned is by no means exhaustive and is meant as a general direction for further development of ASAAM-T.
6.1 Usability
Methods outline a process to achieve a certain result. In the case of ASAAM, the result is an architecture evaluation with respect to crosscutting concerns or aspects. The user of ASAAM-T continually needs to view and manipulate information. When architectures become complex and the scenario selection large, the issue of searching information becomes relevant. If a software engineer performs an architecture analysis he will need to view information at different levels of granularity. In the case of scenario evaluation he needs to analyze one scenario at a time. To assess the impact on several components, multiple descriptions have to be viewed and an impact must be added. At each point the software engineer should be aware of which choices he has made and which he has yet to make. One possibility is that the software engineer may want to query the scenario database or the architecture to, for example, quickly find architectural components whose descriptions mention keywords such as ‘networking’ or ‘graphical user interfaces’. To support these requirements, additional research may be needed to create a scalable user interface for ASAAM-T.
6.2 ASAAM Workflow
ASAAM-T is built on concepts from the method domain model described in the introductory paper of ASAAM [3]. This domain model uses the concepts rule, method, process and artifact, and it provides a way to structure rule evaluation in ASAAM-T. While ASAAM is a method which exists as a separate conceptual framework, ASAAM-T is a CASE tool which requires additional features to support user activity. Many features are implemented in the current ASAAM-T version, but many more are necessary to create sophisticated user support; examples of such features are contextual menus [8]. These features are related to the human-computer interaction and user interface design domains.
Another concept which may be relevant for user involvement in ASAAM-T is a workflow. Workflows integrate user activity with well-defined methods. Workflows use state machines to formally describe the underlying method and use validators and action providers to give users the feedback and control needed to manage the workflow. The Workflow Perl module provides all these features and can be used as an example of how to model workflows [9]. The workflow model can be thought of as an extension to the method model described in [3]: it explicitly models constructs which allow the user to control methods, while the method model only defines rule evaluation and method structure, not how rules are activated or how method feedback can be shown to the user or to the component controlling the workflow.
References
Dynamic Ontology-Based Redefinition of Events
Intended to Support the Communication of Complex
Information in Ubiquitous Computing
Carlos Rodríguez-Domínguez, Kawtar Benghazi, Manuel Noguera, María Bermúdez-Edo, José Luis Garrido
Department of Computer Languages and Systems, University of Granada
Escuela Técnica Superior de Ingenierías Informática y Telecomunicaciones
C/ Periodista Daniel Saucedo Aranda S/N 18071 Granada, Spain
E-mail: {carlosrodriguez, benghazi, mnoguera, mbe, jgarrido}@ugr.es
Received: June 22, 2010 Accepted: August 31, 2010 DOI: 10.5296/npa.v2i3.421
Abstract
Ubiquitous systems should properly support the connection and disconnection of entities at run-time. Accordingly, the communication of information in this type of system should be able to adapt itself to changes in the system structure and in the participant entities without any need for user intervention. In this regard, and due to the dynamic nature of these systems, asynchronous communication is more useful than synchronous communication. In particular, the publish/subscribe paradigm is used, as it supports not only asynchronous communication but also loose coupling between system entities, which is an important requirement for dealing with changes in the configuration of the communications between the involved entities.
In this paper, we propose a model of dynamically redefinable events in order to support dynamic reconfiguration of communications in ubiquitous systems. We also introduce associated techniques for publishing, subscribing and combining those events. The structure of the events and their intended semantics will be formally specified in an ontology, which enables automated reasoning based on Description Logics.
Furthermore, the proposal is described by means of an example and implemented as part of a coordination middleware intended to support the development of ubiquitous systems.
Keywords: Event, Publish/Subscribe, Distributed Systems, Data Dissemination, Semantics, Ontology.
1. Introduction
Nowadays, it is very common to have ubiquitous systems [1][2] all around us. These systems are composed by mobile applications, services and “invisible” devices that seamlessly interact with the real environment while permanently being connected to a network (WAN, LAN, PAN, etc.). These systems also show the following requirements: (1) Several heterogeneous devices coexisting in the same network; (2) Limited availability of resources; (3) Continuous changes in the location of the participants; (4) Semantics of the exchanged information will depend on the network and the location. Traditional approaches do not fully satisfy all these requirements [3], as they are intended to support more general requirements and not those that are specifically associated with ubiquitous systems.
Asynchronous communications are used in order to deal with the dynamic nature of these systems, as new entities may appear or disappear constantly, so synchronous communications would have to be short-lived in most cases. In particular, the publish/subscribe communication paradigm [4] is frequently chosen to exchange information in ubiquitous systems, mainly due to its asynchronous nature and the loose coupling of communication participants [5].
Usually, information in the publish/subscribe communication paradigm is presented as events, which can be defined as a significant occurrence that has a location in time and space [6][7]. Representing information as events and adequately managing them may be useful in ubiquitous systems, as information is frequently exchanged only when the status of any of the participants has changed. For readability issues, all along the paper we will also refer to the message that notifies an event occurrence by the term event.
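As a point of reference, the sketch below shows a minimal topic-based publish/subscribe broker in Java. It is only an illustration of the paradigm, not the middleware described later in this paper, and all names in it are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal topic-based publish/subscribe broker (illustration only).
class TopicBroker<E> {
    private final Map<String, List<Consumer<E>>> subscribers = new HashMap<>();

    // Subscribers register interest in a topic without knowing the publishers.
    public void subscribe(String topic, Consumer<E> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Publishers emit an event on a topic without knowing the subscribers;
    // a real middleware would deliver the notifications asynchronously.
    public void publish(String topic, E event) {
        for (Consumer<E> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(event);
        }
    }
}
```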
As new devices, services and applications are incorporated into ubiquitous systems (or removed from them), mainly due to steady changes of their physical location, it becomes more and more complex to predict a suitable configuration that guarantees that, over time, each one will receive and process the information it requires so as to accomplish its intended objectives [8]. Thus, it is necessary to propose models and techniques that support the design and development of dynamically reconfigurable applications, services and devices, i.e., ones with the ability to be modified and extended at run-time [9].
In recent years, some technological solutions based on the publish/subscribe paradigm have been proposed in order to support communications in ubiquitous computing systems. Some remarkable solutions in this regard are Mobile-Gaia [10], Lime [11] and MATE [12]. However, none of these pieces of work deals with the conceptualization of the communicated information and its elements.
In this paper, we argue that a first step for providing communication participants with on-the-fly reconfiguration capabilities in ubiquitous systems is to include events that are able to be dynamically redefined in the publish/subscribe paradigm. Specifically, we will introduce the notion of redefinable event, a formal ontological model for these events, a formal adaptation of the publication and subscription techniques in order to support the proposed event model, and a technique to combine events at run-time in order to build new ones and to notify them to interested entities.
This research work proposes the usage of formal semantic specification techniques in order to check the consistency of each subscription to events and to store the properties of the internal information that determines each event. Specifically, this formal specification will be represented in an ontology of events and related topics. Events will be able to be built by extracting parts of the internal information that will be stored in other events and by combining them in a consistent way (i.e., a combination of information properties that is consistent with the ontology). When this is done at run-time and new events are built from an event to which information is added or modified, we denote this fact as a redefinition of events.
The remainder of the paper is structured as follows. Section 2 describes the proposal for dynamically redefining events in the publish/subscribe communication paradigm. Section 3 summarizes the technological support of the proposal. Section 4 analyzes work related to the proposal described in this paper. Finally, Section 5 presents the conclusions drawn from this article and some directions for future work.
2. Redefining Events in the Publish/Subscribe Paradigm
This section introduces the proposal to extend the publish/subscribe communication paradigm by providing events with dynamic redefinition capabilities. We present an event model, a definition of this model by means of an ontology, techniques for publishing and subscribing to these events, and a technique to combine them at run-time. Finally, we describe an example scenario to give more insight into how the communication of information in ubiquitous systems can change at run-time.
2.1 Redefinable Events
In the publish/subscribe communication paradigm, the information exchanged between communication participants (applications, services or devices) is encapsulated into events. We define an event as a communication element that is composed of a set of pieces of information that are related to some topics and can be produced by a communication participant as a result of a change in its state. These pieces of information are encapsulated into event nodes. An event node is structured as an identifier-type-value tuple. Each event may have an unlimited number of event nodes to represent any piece of information.
A valid identifier for an event node is a unique name. Primitive types can be any of the following: integer, floating-point, string, byte, boolean, and sequences of these types. It is important to mention that an event node is also an event; this way, hierarchical structures (i.e., trees) can be expressed as events.
Each event is associated with one topic. The topic represents the semantics of the event, and it may be the result of the semantic combination of different topics. For example, if temperature and noise topics exist and there is a semantic association between both topics in the form of a comfort topic, there may exist an event $e_i$ whose topic is comfort and whose event nodes are an association of the event nodes used to syntactically represent both temperature and noise events.
In order to deal with the dynamic nature of ubiquitous systems, we introduce the notion of redefinable events so as to allow adapting events to the changes that take place in the system (e.g., to add devices, to remove them, to modify the connections between the existing ones, etc.), by redefining the set of event nodes at run-time. Additionally, event topics can be dynamically inferred as follows:
- If an event is composed of one event node only, its topic will be the topic of such event node.
- Otherwise, if it is composed of more than one event node, its topic will be deduced from the semantic association between the topics of each event node.
Thus, in redefinable events, as event nodes may be dynamically added or removed, the topic of each event can be inferred. For example, if a redefinable event with a temperature event node is expanded by adding a noise event node, and both temperature and noise topics are semantically associated with the more general comfort topic, it will be inferred that the extended event is also semantically associated with the comfort topic.
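To make the event model more concrete, the sketch below shows one possible in-memory representation of events, event nodes and topic inference. It is an illustrative Python fragment, not part of the middleware described later in the paper; the `TopicOntology` class and its `common_ancestor` method are hypothetical stand-ins for the semantic entity introduced in Section 2.2.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

class TopicOntology:
    """Hypothetical stand-in for the semantic entity: it knows how topics relate."""

    def __init__(self, parents: dict):
        # parents maps a topic to its more general topic, e.g. "temperature" -> "comfort"
        self.parents = parents

    def common_ancestor(self, topics: List[str]) -> Optional[str]:
        """Return the most specific topic that generalizes all given topics, if any."""
        def ancestors(t):
            chain = [t]
            while t in self.parents:
                t = self.parents[t]
                chain.append(t)
            return chain

        if not topics:
            return None
        common = set(ancestors(topics[0]))
        for t in topics[1:]:
            common &= set(ancestors(t))
        # Pick the most specific common ancestor (closest to the leaves).
        for t in ancestors(topics[0]):
            if t in common:
                return t
        return None

@dataclass
class EventNode:
    identifier: str      # unique name
    node_type: str       # "integer", "float", "string", ...
    value: object
    topic: str

@dataclass
class RedefinableEvent:
    nodes: List[Union[EventNode, "RedefinableEvent"]] = field(default_factory=list)

    def topic(self, ontology: TopicOntology) -> Optional[str]:
        node_topics = [n.topic if isinstance(n, EventNode) else n.topic(ontology)
                       for n in self.nodes]
        if len(node_topics) == 1:
            return node_topics[0]                      # single node: inherit its topic
        return ontology.common_ancestor(node_topics)   # otherwise infer from associations

# Example: adding a noise node to a temperature event yields a comfort event.
onto = TopicOntology({"temperature": "comfort", "noise": "comfort"})
e = RedefinableEvent([EventNode("degrees", "float", 21.5, "temperature")])
e.nodes.append(EventNode("db", "integer", 40, "noise"))
assert e.topic(onto) == "comfort"
```

Removing the noise node again would make the inferred topic fall back to temperature, which mirrors the redefinition behaviour described above.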
The metamodel of events is shown in Figure 1 using UML class diagrams, and the elements of this metamodel are summarized in Table 1.
Table 1. Event metamodel elements
<table>
<thead>
<tr>
<th>Element</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Event</td>
<td>Communication element that is composed of a set of pieces of information that are related to a specific topic</td>
</tr>
<tr>
<td>Event Node</td>
<td>Pieces of information structured as identifier-type-value tuples</td>
</tr>
<tr>
<td>Redefinable Event</td>
<td>Event whose structure could be redefined at run-time (i.e., adding, removing, or modifying event nodes)</td>
</tr>
<tr>
<td>Topic</td>
<td>Each topic defines a way to indirectly connect publishers with interested subscribers. Some examples of topics are temperature, humidity, users, devices, etc.</td>
</tr>
<tr>
<td>Relation Event-Topic</td>
<td>Each topic represents the formal semantics of each event</td>
</tr>
<tr>
<td>Relation Event-Event Node</td>
<td>An event is composed of a set of event nodes. Each event node has an associated topic, too. This way, it is possible to dynamically infer the topic of an event based only on its nodes: it will be deduced from the semantic association between the topics.</td>
</tr>
<tr>
<td>Relation Event-Redefinable Event</td>
<td>A Redefinable Event is a specialization of an event that can be modified at run-time. For example, it is possible to add or modify event nodes and to change the semantics associated with the original event.</td>
</tr>
<tr>
<td>Relation Topic-Topic</td>
<td>Each topic may be related to other topics, as established by the ontology that represents the events (see Section 2.2).</td>
</tr>
<tr>
<td>Relation Event Node - Identifier - Primitive Type - Value</td>
<td>Each event node has an associated identifier-type-value tuple.</td>
</tr>
</tbody>
</table>
2.2 An Ontological Model of Events
In Computer Science, an ontology is defined as “a specification of a conceptualization” [13]. An ontology describes the topics in a domain, the relations between them and the constraints on them. The metamodel shown in Figure 1 has been specified in a formal ontology. This ontological model is used in order to represent events, their structures (event nodes) and the semantic information associated with them. The ontology will also store the instantiations of the event model.
A system entity, which we call the semantic entity, should monitor all the changes that the ontology may undergo and send out events to notify which topics may have been modified. Such changes can also result from automated reasoning procedures based on Description Logics [14] that may infer implicit knowledge. Additionally, the semantic entity makes it possible to check the substructure of a particular kind of topic from its ontological identifier.
Figure 2 shows a graphical representation of an instantiation of the ontology that is used in order to formalize topics and that is based on the metamodel of Figure 1. Note that in Figure 2, for a comfort type topic, the semantic entity will report that user_condition is a subtype of the comfort topic and that it is the result of the union of noise and temperature topics that, in turn, will have their own properties and characteristics.
Figure 2. An instantiation sample of the ontology.
Figure 3 shows the implementation of part of the ontology in Protégé editor [15]. The ontology has been implemented in the OWL language [16], whose formal semantics is based on Description Logics.
Figure 3. An implementation of the ontology in Protégé ontology editor.
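As a rough illustration of how such an ontology can be expressed, the snippet below builds a few of the classes from Figure 2 with the rdflib library. The class names and the namespace URI are assumptions made for this example; the actual OWL ontology developed in Protégé may differ, and the paper's implementation combines it with the Pellet reasoner (see Section 3).

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

# Hypothetical namespace for the event/topic ontology.
EX = Namespace("http://example.org/redefinable-events#")

g = Graph()
g.bind("ex", EX)

# Declare the classes: topics, events and event nodes.
for cls in (EX.Topic, EX.Comfort, EX.Temperature, EX.Noise, EX.Event, EX.EventNode):
    g.add((cls, RDF.type, OWL.Class))

# Topic hierarchy: temperature and noise are subtopics of comfort.
g.add((EX.Comfort, RDFS.subClassOf, EX.Topic))
g.add((EX.Temperature, RDFS.subClassOf, EX.Comfort))
g.add((EX.Noise, RDFS.subClassOf, EX.Comfort))

# Relations between events and topics (hasTopic) and events and nodes (hasNode).
g.add((EX.hasTopic, RDF.type, OWL.ObjectProperty))
g.add((EX.hasTopic, RDFS.domain, EX.Event))
g.add((EX.hasTopic, RDFS.range, EX.Topic))
g.add((EX.hasNode, RDF.type, OWL.ObjectProperty))
g.add((EX.hasNode, RDFS.domain, EX.Event))
g.add((EX.hasNode, RDFS.range, EX.EventNode))

print(g.serialize(format="turtle"))
```

Running the snippet prints the Turtle serialization of the topic hierarchy; it is only meant to show the shape of the class hierarchy, not the full ontology of Figure 3.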
2.3 Event Subscription Techniques
A subscription $\sigma$ to an event $e_i$ is defined as a “filter over a portion of the event content (or the whole of it), expressed through a set of constraints” [17]. The proposed events support two subscription variants:
- **Topic-Based.** Communication participants show interest in a topic and, from that moment, they receive events that are semantically related to that topic. For example, if a participant subscribes to the comfort topic, it will receive events related not only to the comfort topic but also to the temperature and noise ones (if they are semantically related to it). Formally, the set of events that are received by a communication participant with a topic-based subscription $\sigma_\tau$ is defined as follows:
Let $\text{subscribe}$ be a function that filters all the events that are received by a participant on the basis of their associated topic, let $T$ be the set of all semantically formalized topics and $\tau$ be a topic, then:
$$\sigma_\tau = \text{subscribe}(\tau), \tau \in T$$ (1)
Let $E$ be the set of all possible events and let $R \subseteq E \times T$ be the relation in which each pair $(e_i, \tau)$ represents an event $e_i$ that is semantically related to the topic $\tau$. We define the binary operation “semantically related to”, annotated as $\bowtie$, as follows:
$$e_i \bowtie \tau \iff (e_i, \tau) \in R, \quad R \subseteq E \times T, \; e_i \in E, \; \tau \in T$$ (2)
Let $S_{\sigma_\tau}$ be the set of all the events received by a communication participant whose topic-based subscription is $\sigma_\tau$, let $e_i(k)$ be a function that retrieves the k-th event node of the event $e_i$, which is also an event (see Section 2.1), and let $n_i$ be the number of event nodes of $e_i$, then:
$$S_{\sigma_\tau} = \{e_i \mid e_i \bowtie \tau\} \cup \{e_i \mid \forall k \in \{1, ..., n_i\}, e_i(k) \bowtie \tau\}$$ (3)
- **Content-Based.** Communication participants show interest in receiving events that are semantically associated with a topic when a set of constraints over the event nodes is satisfied. For example, a participant can subscribe to the temperature topic and receive events only when the temperature is over 45 ºC. The set of events that are received by a communication participant with a content-based subscription $\sigma_{(\tau, P_\tau)}$ is formally defined as follows:
Let $e_i(k)$ be the k-th event node of the event $e_i$, let $t_{i,k}$ be the primitive type of the event node $e_i(k)$, let $a$ be a constant of any primitive type and $t_a$ its primitive type. We define the binary operation “is comparable to” (annotated as $\dagger$) as follows:
$$e_i(k) \dagger a \iff t_{i,k} = t_a$$ (4)
If $e_i(k) \dagger a$, five comparing operators can be defined:
\[ e_i(k) = a; \quad e_i(k) < a; \quad e_i(k) \leq a; \quad e_i(k) > a; \quad e_i(k) \geq a \] (5)
These operators always follow a lexicographical order. For example, \(4 < 5, \text{“aaa”} < \text{“aab”}, (4, 5, 6, 9) < (4, 6, 3, 1), \) etc.
Let \(s_j\) be a comparison operation between an event node \(e_i(k)\) and a constant \(a\), where \(e_i(k) \dagger a\), and let \(\tau\) be a topic such that \(e_i(k) \bowtie \tau\). We define the predicate \(P_{\tau}\) as follows:
\[ P_{\tau} = s_1 \land ... \land s_m \] (6)
Let \(contentSubscribe\) be a function that filters all the events that are received by a participant based on a topic \(\tau\) and a set of constraints described by the predicate \(P_{\tau}\), then:
\[ \sigma_{(\tau,P_{\tau})} = contentSubscribe(\tau,P_{\tau}) \] (7)
Let \(S_{\sigma_{(\tau,P_{\tau})}}\) be the set of all the events received by a communication participant whose content-based subscription is \(\sigma_{(\tau,P_{\tau})}\) and whose constraints are specified by the predicate \(P_{\tau}\), and let \(v_{i,k}\) be the value associated with the event node \(e_i(k)\), then:
\[ S_{\sigma_{(\tau,P_{\tau})}} = \{ e_i : e_i \in S_{\sigma_{\tau}}, \{P_{\tau} \land (e_i(k) = v_{i,k})\} \vdash \neg \emptyset \} \] (8)
In this formula, \(\{P_{\tau} \land (e_i(k) = v_{i,k})\} \vdash \neg \emptyset\) means that the predicate \(P_{\tau}\), in conjunction with the equality \(e_i(k) = v_{i,k}\), has to be consistent (i.e., it does not contain any logical contradiction). For example, if the predicate \(P_{\text{temperature}} = \{e_i(1) < 45\}\) and the event \(e_i\) is published with \(e_i(1) = 46\), then the subscriber will not receive the event, as \(\{e_i(1) < 45 \land e_i(1) = 46\}\) is not a consistent set.
Each ubiquitous system should have an entity in charge of storing every subscription \(\sigma\). This entity is known as the subscription entity and provides both the \(subscribe\) and \(contentSubscribe\) functions. Additionally, it provides the function \(checkSubs(e_i)\), which checks whether a subscription related to the event \(e_i\) exists. This function is the basis for publishing events: when an event is about to be published, it returns the identifiers of the participants that are interested in that event.
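The following fragment sketches how a subscription entity implementing the subscribe, contentSubscribe and checkSubs functions of this section might look. It reuses the hypothetical `TopicOntology`, `EventNode` and `RedefinableEvent` classes from the sketch in Section 2.1 and represents a content predicate $P_\tau$ as a list of (node identifier, operator, constant) conjuncts; this is one possible encoding for illustration, not the paper's implementation, and it assumes flat events (event nodes only).

```python
import operator

# Comparison operators of formula (5), keyed by their textual form.
OPS = {"=": operator.eq, "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

class SubscriptionEntity:
    def __init__(self, ontology):
        self.ontology = ontology
        self.topic_subs = {}     # subscriber id -> set of topics
        self.content_subs = {}   # subscriber id -> list of (topic, predicate)

    def subscribe(self, subscriber, topic):
        self.topic_subs.setdefault(subscriber, set()).add(topic)

    def content_subscribe(self, subscriber, topic, predicate):
        # predicate: list of (node_identifier, op_symbol, constant) conjuncts
        self.content_subs.setdefault(subscriber, []).append((topic, predicate))

    def _related(self, event_topic, topic):
        # "semantically related to": equal topics or a common ancestor in the ontology
        return event_topic == topic or \
            self.ontology.common_ancestor([event_topic, topic]) is not None

    def _satisfies(self, event, predicate):
        # Assumes a flat event whose nodes are EventNode instances.
        values = {n.identifier: n.value for n in event.nodes}
        return all(ident in values and OPS[op](values[ident], const)
                   for ident, op, const in predicate)

    def check_subs(self, event):
        """Return the identifiers of participants interested in this event."""
        etopic = event.topic(self.ontology)
        interested = {s for s, topics in self.topic_subs.items()
                      if any(self._related(etopic, t) for t in topics)}
        interested |= {s for s, subs in self.content_subs.items()
                       if any(self._related(etopic, t) and self._satisfies(event, p)
                              for t, p in subs)}
        return interested
```

For instance, a subscriber registered with `content_subscribe("app", "temperature", [("degrees", "<", 45)])` would only be reported by `check_subs` for temperature events whose degrees node is below 45, mirroring the consistency condition of formula (8).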
2.4 Event Publishing Techniques
An event \(e_i\) must be sent out to a communication participant that is subscribed to the topic \(\tau\) if that event is a member of the set \(S_{\sigma_{\tau}}\) or the set \(S_{\sigma_{(\tau,P_{\tau})}}\). For example, if a communication participant subscribes to the comfort topic and this topic results from the semantic combination of the temperature and noise topics, then comfort, temperature and noise events will be communicated to it. The semantic entity is used to resolve the semantic relations between events and topics.
Since each ubiquitous computing system has a subscription entity running, event publishers may invoke the function \(checkSubs(e_i)\) to check whether they should publish an event \(e_i\) or not. This way, events are only sent when a related subscription already exists, thereby avoiding unnecessary event publications. As specified in the previous section, \(checkSubs(e_i)\) returns the identifiers of the participants that are interested in the corresponding event. Hence, direct communications may be established between event publishers and subscribers.
It is important to note that the previous technique to publish events may be implemented as a publish function in a centralized publication entity or as a function intrinsic to each communication participant. As a consequence, both centralized and distributed publication techniques are supported.
Moreover, when time constraints must be fulfilled in the ubiquitous system (for example, a restriction on the time elapsed between publishing an event and its reception by the subscribers), it is recommended to choose the centralized publication technique, as it is more predictable, which is an important requirement in real-time systems [18]. If the ubiquitous system must meet requirements such as predictability and scalability, which call for the centralized technique on the one hand and the distributed technique on the other, it has to be decided which requirements are more important to fulfil. These techniques may also be mixed to meet some of these “conflicting” requirements by combining participants that use a publication entity with participants that implement their own publication techniques.
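A minimal sketch of this publication step is shown below, assuming the `SubscriptionEntity` from the previous sketch and a hypothetical `send` transport function; whether `publish` lives in a central publication entity or in each participant only changes where this code runs.

```python
def publish(event, subscription_entity, send):
    """Publish an event only if at least one participant is interested in it.

    `send(participant_id, event)` is a placeholder for the underlying
    transport (TCP, UDP multicast, ...); it is not part of the paper.
    """
    interested = subscription_entity.check_subs(event)
    if not interested:
        return 0                      # no matching subscription: avoid useless traffic
    for participant in interested:
        send(participant, event)      # direct publisher-to-subscriber delivery
    return len(interested)
```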
2.5 Event Combination
Event combination is performed by a combination entity, which is in charge of receiving all the events that are notified in a system and stores the last received event of each topic. Whenever a new event is stored, it extracts some of its event nodes and tries to combine them with nodes extracted from other stored events. In order to do that in a limited period of time, the semantic entity is used so that only combinations that are consistent with the properties of the topics stored in the ontology are made. Additionally, event combinations to build events of a certain topic are only performed if at least one system entity is interested in that topic (i.e., it is subscribed to that topic). For instance, if a comfort topic exists whose properties are “degrees” and “db” and the combination entity receives several events related to several topics, it will only try to extract the “degrees” and “db” information in order to build a new comfort event.
Whenever a new event is built, this entity notifies it, and the entities that are subscribed to its related topic receive it.
By combining events with a combination entity, other entities do not have to take into account that the information they require comes from different sources and has to be combined into a single piece of information.
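The combination entity can be sketched as follows. It keeps the last event per topic and, whenever a new event arrives, asks the semantic entity which node identifiers a target topic such as comfort requires; if all required pieces are available and someone is subscribed, it assembles and publishes the combined event. The `required_nodes` method is an assumption about the semantic entity's interface, and `RedefinableEvent` comes from the hypothetical sketch in Section 2.1.

```python
class CombinationEntity:
    def __init__(self, ontology, subscription_entity, publish_fn):
        self.ontology = ontology
        self.subs = subscription_entity
        self.publish = publish_fn
        self.last_event = {}          # topic -> last received (flat) event

    def on_event(self, event):
        topic = event.topic(self.ontology)
        self.last_event[topic] = event
        # Try to build every composite topic someone is interested in.
        for target in self._interesting_topics():
            required = self.ontology.required_nodes(target)   # e.g. {"degrees", "db"}
            nodes = self._collect(required)
            if nodes is not None:
                combined = RedefinableEvent(nodes)
                self.publish(combined)

    def _interesting_topics(self):
        topics = set()
        for tset in self.subs.topic_subs.values():
            topics |= tset
        for subs in self.subs.content_subs.values():
            topics |= {t for t, _ in subs}
        return topics

    def _collect(self, required):
        # Gather the required event nodes from the most recent events, if all are present.
        found = []
        for event in self.last_event.values():
            found.extend(n for n in event.nodes if n.identifier in required)
        ids = {n.identifier for n in found}
        return found if ids == set(required) else None
```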
2.6 An Example
In order to give more insight into how the structure of the information communicated in a ubiquitous system can be reconfigured at run time, we show a scenario that consists of a temperature sensor and a noise sensor (see Figure 4), to which a humidity sensor is going to be added (A) and removed (B). These sensors publish temperature, noise and humidity events whenever the measured values change. There is also a mobile application that is subscribed to the comfort topic, which is the most general one in this scenario. Its intention is to show an end user all the information related to the comfort of the room in which it is situated.
It is important to note that the combination entity will receive all the events that are published in the whole system and will synchronously communicate with the semantic entity in order to request information or to provide it.
Published events, which can be considered sets of event nodes or identifier-type-value tuples, store the following information (where x is a constant of the primitive type specified by the corresponding event node):
- $e_{temperature} = \{("degrees", float, x)\}$
- $e_{noise} = \{("db", integer, x)\}$
- $e_{comfort} = \{ e_{temperature}, e_{noise} \}$
The combination entity will receive both temperature and noise events and will try to build comfort events. In order to detect which are the consistent combinations between the retrieved pieces of information, this entity will use the semantic entity. Whenever it combines the information in an appropriate way, it will publish a comfort event. This event will be notified to the mobile application, which will use the internal information of it in order to fulfil its objective (i.e., to show the available comfort information of a room).
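Using the hypothetical classes from the earlier sketches, the initial scenario can be exercised roughly as follows. The sensor values and the `required_nodes` behaviour attached to the ontology are made-up assumptions for the example.

```python
onto = TopicOntology({"temperature": "comfort", "noise": "comfort"})
# Assumption: the semantic entity reports that comfort events need "degrees" and "db".
onto.required_nodes = lambda topic: {"degrees", "db"} if topic == "comfort" else set()

subs = SubscriptionEntity(onto)
subs.subscribe("mobile_app", "comfort")

delivered = []
combiner = CombinationEntity(
    onto, subs,
    lambda e: publish(e, subs, lambda p, ev: delivered.append((p, ev))))

# Sensor readings arrive as temperature and noise events (values are made up).
combiner.on_event(RedefinableEvent([EventNode("degrees", "float", 21.5, "temperature")]))
combiner.on_event(RedefinableEvent([EventNode("db", "integer", 40, "noise")]))

# Once both pieces are available, the mobile application receives one comfort event.
assert delivered and delivered[-1][0] == "mobile_app"
```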
To this initial scenario, a humidity sensor that produces events related to the humidity topic is added. As this topic did not previously exist, the ontology that stores all the topics is modified to incorporate it at the same abstraction level as the temperature and noise ones. Thus, the humidity event is created and the comfort event is redefined. They will have the following structures:
- $e_{humidity} = \{("percentage", float, x)\}$
- $e_{comfort} = \{ e_{temperature}, e_{noise}, e_{humidity} \}$
As the combination entity keeps trying to combine the information stored in the received events by using the semantic entity, it will now build a comfort event whose event nodes are extracted from humidity, temperature and noise events. Each time this entity builds a new event, it notifies it. Thus, the mobile application will receive these new comfort events automatically after the humidity sensor is added to the system and will be able to show humidity information to an end user without requiring any previous reconfiguration.
If the humidity sensor is removed from the scenario (Figure 4, B), the comfort event will be redefined again so as to only contain information about temperature and noise. Again, the combination entity will try to combine the received events by using the semantic entity. Thus, it will now build comfort events using only the information about temperature and noise. Therefore, it is possible to add or remove participants to/from a ubiquitous system without requiring any changes in the subscriptions of the previously existing participants, as redefining events entails a dynamic reconfiguration of communications.
3. Technological Support
At the technological level, we have implemented a middleware based on the proposed model and techniques to show the feasibility of the approach. A middleware is defined as a software layer located between the operating system and the applications in order to hide the heterogeneity of different physical computer architectures, operating systems and programming languages, thereby simplifying the process of transferring information between the different machines that are part of a distributed system [19].
The IceP communication protocol [20] is used to encode network messages. An event handler is used by communication participants for sending and receiving events. Events are coded as dictionary structures. These dictionaries are collections of unique key-value pairs. Keys are, in this case, strings that represent the identifier of each event node (see Section 2.1), and values are arrays of bytes. These byte arrays represent values as specified by the IceP communication protocol.
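As an illustration of this dictionary encoding (string keys for node identifiers, byte arrays for values), the hedged sketch below packs primitive values with Python's struct module, reusing the hypothetical `EventNode` class from the sketch in Section 2.1. The actual byte layout is defined by the IceP protocol and may differ from this simplified encoding.

```python
import struct

def encode_event(event):
    """Encode a flat event as {identifier: bytes}, one entry per event node."""
    packers = {
        "integer": lambda v: struct.pack("!i", v),   # 32-bit big-endian integer
        "float":   lambda v: struct.pack("!d", v),   # 64-bit big-endian double
        "boolean": lambda v: struct.pack("!?", v),
        "string":  lambda v: v.encode("utf-8"),
        "byte":    lambda v: bytes([v]),
    }
    return {node.identifier: packers[node.node_type](node.value) for node in event.nodes}

# Example with the temperature event of Section 2.6:
# encode_event(e_temperature) -> {"degrees": b'@5\x80\x00\x00\x00\x00\x00'}  (21.5 as a big-endian double)
```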
The communication abstraction layer consists of a set of modules that hide which communication interface is used when transferring data from one machine to others. For example, by default, one module is provided for transferring data over a TCP/IP compatible interface to hide the inherent complexity of transferring messages over sockets.
Data distribution (the publish function) may be centralized in a service or distributed between different communication participants, depending on the characteristics of the network. Thus, in multicast networks, participants have support to distribute events to other participants without requiring any specialized entity, although communications would be unreliable (i.e., based on the UDP protocol). In order to support reliable communications (i.e., based on the TCP protocol), both the publication and subscription entities described in Section 2 have been implemented as services. In contrast with the unreliable distribution of events, in this case communications are centralized and, thus, there is a risk of bottlenecks in some communication scenarios.
The semantic entity has been developed as a service that makes use of a combination of an ontology described in OWL and the Pellet reasoner [21].
Finally, in order to increase the portability of the middleware, its API is offered in several programming languages, like C++, Objective-C, Java, Python and PHP, and, in addition to Windows, Linux and MacOSX, it has been ported to iPhone and Android mobile platforms.
4. Related Work
Different works in the literature have dealt with the fulfilment of the requirements of ubiquitous systems by using the publish/subscribe communication paradigm. Specifically, some of them have introduced several notions of events and different publication and subscription techniques.
Courtenage [22] defines a way of specifying and detecting composite events in content-based publish/subscribe systems. Similarly, the Data Distribution Service (DDS) [23] introduces the notion of MultiTopic in the description of a Topic, that is, a topic that results from the combination of several Topics; to do that, it uses SQL-like expressions. In these proposals, event topics are not specified using an ontology that deals with their conceptualization, they cannot be modified dynamically, and the subscription and publishing techniques are not adapted to support some of the requirements associated with ubiquitous systems, such as providing a decentralized (or P2P) operation mode or supporting dynamic changes in the system (i.e., adding or removing applications, services or devices). By using an ontology, it is possible to check the consistency of the combinations, to formalize the information associated with each topic and to establish system-wide complex relations between topics.
Wang [24] and Petrovic [25] define two proposals that involve the usage of ontologies in order to support complex subscriptions, demonstrating the possibility of delivering efficient implementations of this kind of publish/subscribe systems. These papers are not focused on ubiquitous systems and, thus, they do not specifically address their requirements.
Cugola and Jacobsen [5] propose a publish/subscribe middleware for mobile systems that meets most of the requirements associated with ubiquitous systems. Our proposal focuses on how to deal with a dynamically changing environment in terms of adapting the information that is exchanged between the communicating participants, which is not the approach followed by the work referenced above.
5. Conclusions and Future Work
In this paper, we have proposed a model of redefinable events to support the dynamic reconfiguration of communications in ubiquitous systems. We have also introduced techniques for publishing and subscribing to those events. The structure of the events and their intended semantics have been formally specified in an ontology, which enables automated reasoning based on Description Logics. Thus, our proposals support the mobility of communication participants between several networks, ensuring valid data dissemination thanks to shared and formal semantics. Furthermore, this proposal has been implemented as part of a coordination middleware that supports both distributed and centralized publication techniques. Finally, this proposal may help in the design and development of Mobile Ad-Hoc Networks (MANETs), which consist of a set of mobile hosts that may exchange information mutually and roam around at will without a base station to technologically support their communications [26], and which are gaining special attention as a network topology to support communications in ubiquitous systems [27].
It is also important to note that, in the proposal, high-level semantic information is interconnected with low-level layers, so as to provide functionality that was previously only offered at the application level. For example, currently, if an application needs to show information that comes from different sources and that only results from the combination of the information of those sources, this combination has to be “hard coded”, negatively affecting the flexibility of the application and its maintainability.
As future work we aim at extending the proposed middleware to support traditional coordination functionalities between communication participants (e.g., Linda coordination model [28]) in ubiquitous systems based on the publish/subscribe communication paradigm. Coordination capabilities will be enabled by appropriately combining both publishing and subscription techniques, formalized semantics of events and the combination service.
Finally, we plan to evaluate the performance of the middleware in several real scenarios and to compare the results with some existing middlewares for ubiquitous systems.
Acknowledgement
The Spanish Ministry of Education and Science funds this work through project TIN2008-05995/TSI.
References
Copyright Disclaimer
Copyright reserved by the author(s).
This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
---
# Table of Contents

- Architecture
  - Overview
  - Cloud Volumes Service architecture
  - Control plane architecture
  - Data plane architecture
  - Data encryption in transit
  - Data encryption at rest
  - Firewall
Architecture
Overview
Part of trusting a cloud solution is understanding the architecture and how it is secured. This section calls out different aspects of the Cloud Volumes Service architecture in Google to help alleviate potential concerns about how data is secured, as well as call out areas where additional configuration steps might be required to obtain the most secure deployment.
The general architecture of Cloud Volumes Service can be broken down into two main components: the control plane and the data plane.
Control plane
The control plane in Cloud Volumes Service is the backend infrastructure managed by Cloud Volumes Service administrators and NetApp native automation software. This plane is completely transparent to end users and includes networking, storage hardware, software updates, and so on to help deliver value to a cloud-resident solution such as Cloud Volumes Service.
Data plane
The data plane in Cloud Volumes Service includes the actual data volumes and the overall Cloud Volumes Service configuration (such as access control, Kerberos authentication, and so on). The data plane is entirely under the control of the end users and the consumers of the Cloud Volumes Service platform.
There are distinct differences in how each plane is secured and managed. The following sections cover these differences, starting with a Cloud Volumes Service architecture overview.
Cloud Volumes Service architecture
In a manner similar to other Google Cloud native services such as CloudSQL, Google Cloud VMware Engine (GCVE), and Filestore, Cloud Volumes Service uses Google private services access (PSA) to deliver the service. In PSA, services are built inside a service producer project, which uses VPC network peering to connect to the service consumer. The service producer is provided and operated by NetApp, and the service consumer is a VPC in a customer project, hosting the clients that want to access Cloud Volumes Service file shares.
The following figure, referenced from the architecture section of the Cloud Volumes Service documentation, shows a high-level view.
The part above the dotted line shows the control plane of the service, which controls the volume lifecycle. The part below the dotted line shows the data plane. The left blue box depicts the user VPC (service consumer), the right blue box is the service producer provided by NetApp. Both are connected through VPC peering.
**Tenancy model**
In Cloud Volumes Service, individual projects are considered unique tenants. This means that manipulation of volumes, Snapshot copies, and so on are performed on a per-project basis. In other words, all volumes are owned by the project that they were created in and only that project can manage and access the data inside of them by default. This is considered the control plane view of the service.
**Shared VPCs**
On the data plane view, Cloud Volumes Service can connect to a shared VPC. You can create volumes in the hosting project or in one of the service projects connected to the shared VPC. All projects (host or service) connected to that shared VPC are able to reach the volumes at the network layer (TCP/IP). Because all clients with network connectivity on the shared-VPC can potentially access the data through NAS protocols, access control on the individual volume (such as user/group access control lists (ACLs) and hostnames/IP addresses for NFS exports) must be used to control who can access the data.
You can connect Cloud Volumes Service to up to five VPCs per customer project. On the control plane, the project enables you to manage all created volumes, no matter which VPC they are connected to. On the data plane, VPCs are isolated from one another, and each volume can only be connected to one VPC.
Access to the individual volumes is controlled by protocol specific (NFS/SMB) access control mechanisms.
In other words, on the network layer, all projects connected to the shared VPC are able to see the volume, while, on the management side, the control plane only allows the owner project to see the volume.
VPC Service Controls
VPC Service Controls establish an access control perimeter around Google Cloud services that are attached to the internet and are accessible worldwide. These services provide access control through user identities but cannot restrict which network location requests originate from. VPC Service Controls close that gap by introducing the capabilities to restrict access to defined networks.
The Cloud Volumes Service data plane is not connected to the external internet but to private VPCs with well-defined network boundaries (perimeters). Within that network, each volume uses protocol-specific access control. Any external network connectivity is explicitly created by Google Cloud project administrators. The control plane, however, does not provide the same protections as the data plane and can be accessed by anyone from anywhere with valid credentials (JWT tokens).
In short, the Cloud Volumes Service data plane provides the capability of network access control, without the requirement to support VPC Service Controls and does not explicitly use VPC Service Controls.
Packet sniffing/trace considerations
Packet captures can be useful for troubleshooting network issues or other problems (such as NAS permissions, LDAP connectivity, and so on), but can also be used maliciously to gain information about network IP addresses, MAC addresses, user and group names, and what level of security is being used on endpoints. Because of the way Google Cloud networking, VPCs, and firewall rules are configured, unwanted access to network packets should be difficult to obtain without user login credentials or JWT tokens into the cloud instances. Packet captures are only possible on endpoints (such as virtual machines (VMs)) and only possible on endpoints internal to the VPC unless a shared VPC and/or external network tunnel/IP forwarding is in use to explicitly allow external traffic to endpoints. There is no way to sniff traffic outside of the clients.
When shared VPCs are used, in-flight encryption with NFS Kerberos and/or SMB encryption can mask much of the information gleaned from traces. However, some traffic is still sent in plaintext, such as DNS and LDAP queries. The following figure shows a packet capture from a plaintext LDAP query originating from Cloud Volumes Service and the potential identifying information that is exposed. LDAP queries in Cloud Volumes Service currently do not support encryption or LDAP over SSL. CVS-Performance supports LDAP signing if requested by Active Directory; CVS-SW does not support LDAP signing.
unixUserPassword is queried by LDAP and is not sent in plaintext but instead in a salted hash. By default, Windows LDAP does not populate the unixUserPassword fields. This field is only required if you need to leverage Windows LDAP for interactive logins through LDAP to clients. Cloud Volumes Service does not support interactive LDAP logins to the instances.
The following figure shows a packet capture from an NFS Kerberos conversation next to a capture of NFS over AUTH_SYS. Note how the information available in a trace differs between the two and how enabling in-flight encryption offers greater overall security for NAS traffic.
---
**VM network interfaces**
One trick attackers might attempt is to add a new network interface card (NIC) to a VM in promiscuous mode (port mirroring) or to enable promiscuous mode on an existing NIC in order to sniff all traffic. In Google Cloud, adding a new NIC requires a VM to be shut down entirely, which creates alerts, so attackers cannot do this unnoticed.
In addition, NICs cannot be set to promiscuous mode at all and will trigger alerts in Google Cloud.
**Control plane architecture**
All management actions to Cloud Volumes Service are done through API. Cloud Volumes Service management integrated into the GCP Cloud Console also uses the Cloud Volumes Service API.
**Identity and Access Management**
Identity and Access Management (IAM) is a standard service that enables you to control authentication (logins) and authorization (permissions) to Google Cloud project instances. Google IAM provides a full audit trail of permissions authorization and removal. Currently Cloud Volumes Service does not provide control plane auditing.
**Authorization/permission overview**
IAM offers built-in, granular permissions for Cloud Volumes Service. You can find a complete list of granular permissions here.
IAM also offers two predefined roles called netappcloudvolumes.admin and netappcloudvolumes.viewer. These roles can be assigned to specific users or service accounts.
Assign appropriate roles and permission to allow IAM users to manage Cloud Volumes Service.
Examples for using granular permissions include the following:
- Build a custom role with only get/list/create/update permissions so that users cannot delete volumes.
- Use a custom role with only snapshot.* permissions to create a service account that is used to build application-consistent Snapshot integration.
- Build a custom role to delegate volumereplication.* to specific users.
**Service accounts**
To make Cloud Volumes Service API calls through scripts or Terraform, you must create a service account with the roles/netappcloudvolumes.admin role. You can use this service account to generate the JWT tokens required to authenticate Cloud Volumes Service API requests in two different ways:
- Generate a JSON key and use Google APIs to derive a JWT token from it. This is the simplest approach, but it involves manual secrets (the JSON key) management.
- Use Service account impersonation with roles/iam.serviceAccountTokenCreator. The code (script, Terraform, and so on.) runs with Application Default Credentials and impersonates the service account to gain its permissions. This approach reflects Google security best practices.
See Creating your service account and private key in the Google cloud documentation for more information.
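For the first approach (JSON key plus Google APIs), a minimal Python sketch using the google-auth library is shown below. The key file path and the audience value are placeholders and assumptions for this example; consult the Cloud Volumes Service API documentation for the exact audience and endpoint to use.

```python
import google.auth.transport.requests
from google.oauth2 import service_account

# Placeholders: substitute your own key file and the documented CVS API audience.
KEY_FILE = "service-account-key.json"
AUDIENCE = "https://cloudvolumesgcp-api.netapp.com"   # assumed value, verify in the docs

# Build ID token credentials from the service account JSON key.
credentials = service_account.IDTokenCredentials.from_service_account_file(
    KEY_FILE, target_audience=AUDIENCE)

# Refresh to obtain a signed JWT, then pass it as a Bearer token on API calls.
credentials.refresh(google.auth.transport.requests.Request())
headers = {"Authorization": f"Bearer {credentials.token}"}
```

Service account impersonation (the second approach) avoids handling the JSON key directly and is generally preferable from a secrets-management standpoint.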
**Cloud Volumes Service API**
Cloud Volumes Service API uses a REST-based API by using HTTPS (TLSv1.2) as the underlying network
transport. You can find the latest API definition here and information about how to use the API at Cloud Volumes APIs in the Google cloud documentation.
The API endpoint is operated and secured by NetApp using standard HTTPS (TLSv1.2) functionality.
**JWT tokens**
Authentication to the API is performed with JWT bearer tokens (RFC-7519). Valid JWT tokens must be obtained by using Google Cloud IAM authentication. This must be done by fetching a token from IAM by providing a service account JSON key.
**Audit logging**
Currently, no user-accessible control plane audit logs are available.
**Data plane architecture**
Cloud Volumes Service for Google Cloud leverages the Google Cloud private services access framework. In this framework, users can connect to the Cloud Volumes Service. This framework uses Service Networking and VPC peering constructs like other Google Cloud services, ensuring complete isolation between tenants.
For an architecture overview of Cloud Volumes Service for Google Cloud, see Architecture for Cloud Volumes Service.
User VPCs (standalone or shared) are peered to VPCs within Cloud Volumes Service managed tenant projects, which hosts the volumes.
The preceding figure shows a project (the CVS consumer project in the middle) with three VPC networks connected to Cloud Volumes Service and multiple Compute Engine VMs (GCE1-7) sharing volumes:
• VPC1 allows GCE1 to access volumes A and B.
• VPC2 allows GCE2 and GCE4 to access volume C.
• The third VPC network is a shared VPC, shared with two service projects. It allows GCE3, GCE4, GCE5, and GCE6 to access volumes D and E. Shared VPC networks are only supported for volumes of the CVS-Performance service type.
GCE7 cannot access any volume.
Data can be encrypted both in-transit (using Kerberos and/or SMB encryption) and at-rest in Cloud Volumes Service.
Data encryption in transit
Data in transit can be encrypted at the NAS protocol layer, and the Google Cloud network itself is encrypted, as described in the following sections.
Google Cloud network
Google Cloud encrypts traffic on the network level as described in Encryption in transit in the Google documentation. As mentioned in the section “Cloud Volumes Services architecture,” Cloud Volumes Service is delivered out of a NetApp-controlled PSA producer project.
In case of CVS-SW, the producer tenant runs Google VMs to provide the service. Traffic between user VMs and Cloud Volumes Service VMs is encrypted automatically by Google.
Although the data path for CVS-Performance isn’t fully encrypted on the network layer, NetApp and Google use a combination of IEEE 802.1AE encryption (MACSec), encapsulation (data encryption), and physically restricted networks to protect data in transit between the Cloud Volumes Service CVS-Performance service type and Google Cloud.
NAS protocols
NFS and SMB NAS protocols provide optional transport encryption at the protocol layer.
SMB encryption
SMB encryption provides end-to-end encryption of SMB data and protects data from eavesdropping occurrences on untrusted networks. You can enable encryption for both the client/server data connection (only available to SMB3.x capable clients) and the server/domain controller authentication.
When SMB encryption is enabled, clients that do not support encryption cannot access the share.
Cloud Volumes Service supports the RC4-HMAC, AES-128-CTS-HMAC-SHA1, and AES-256-CTS-HMAC-SHA1 security ciphers for SMB encryption. SMB negotiates the highest encryption type supported by the server.
NFSv4.1 Kerberos
For NFSv4.1, CVS-Performance offers Kerberos authentication as described in RFC7530. You can enable Kerberos on a per-volume basis.
The current strongest available encryption type for Kerberos is AES-256-CTS-HMAC-SHA1. NetApp Cloud Volumes Service supports AES-256-CTS-HMAC-SHA1, AES-128-CTS-HMAC-SHA1, DES3, and DES for
NFS. It also supports ARCFOUR-HMAC (RC4) for CIFS/SMB traffic, but not for NFS.
Kerberos provides three different security levels for NFS mounts that offer choices for how strong the Kerberos security should be.
As per RedHat’s Common Mount Options documentation:
- **sec=krb5** uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
- **sec=krb5i** uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
- **sec=krb5p** uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
As a general rule, the more the Kerberos security level has to do, the worse the performance is, as the client and server spend time encrypting and decrypting NFS operations for each packet sent. Many clients and NFS servers provide support for AES-NI offloading to the CPUs for a better overall experience, but the performance impact of Kerberos 5p (full end-to-end encryption) is significantly greater than the impact of Kerberos 5 (user authentication).
The following table shows differences in what each level does for security and performance.
<table>
<thead>
<tr>
<th>Security level</th>
<th>Security</th>
<th>Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>NFSv3 — sys</td>
<td>• Least secure; plain text with numeric user IDs/group IDs<br>• Able to view UID, GID, client IP addresses, export paths, file names, permissions in packet captures</td>
<td>• Best for most cases</td>
</tr>
<tr>
<td>NFSv4.x — sys</td>
<td>• More secure than NFSv3 (client IDs, name string/domain string matching) but still plain text<br>• Able to view UID, GID, client IP addresses, name strings, domain IDs, export paths, file names, permissions in packet captures</td>
<td>• Good for sequential workloads (such as VMs, databases, large files)<br>• Bad with high file count/high metadata (30-50% worse)</td>
</tr>
<tr>
<td>NFS — krb5</td>
<td>• Kerberos encryption for credentials in every NFS packet—wraps UID/GID of users/groups in RPC calls in a GSS wrapper<br>• User requesting access to the mount needs a valid Kerberos ticket (either through username/password or manual keytab exchange); the ticket expires after a specified time period and the user must reauthenticate for access<br>• No encryption for NFS operations or ancillary protocols like mount/portmapper/nlm (can see export paths, IP addresses, file handles, permissions, file names, atime/mtime in packet captures)</td>
<td>• Best in most cases for Kerberos; worse than AUTH_SYS</td>
</tr>
<tr>
<td>NFS — krb5i</td>
<td>• Kerberos encryption for credentials in every NFS packet—wraps UID/GID of users/groups in RPC calls in a GSS wrapper<br>• User requesting access to the mount needs a valid Kerberos ticket (either via username/password or manual keytab exchange); the ticket expires after a specified time period and the user must reauthenticate for access<br>• No encryption for NFS operations or ancillary protocols like mount/portmapper/nlm (can see export paths, IP addresses, file handles, permissions, file names, atime/mtime in packet captures)<br>• A Kerberos GSS checksum is added to every packet to ensure nothing intercepts the packets; if the checksums match, the conversation is allowed</td>
<td>• Better than krb5p because the NFS payload is not encrypted; the only added overhead compared to krb5 is the integrity checksum. Performance of krb5i won’t be much worse than krb5 but will see some degradation.</td>
</tr>
<tr>
<td>NFS — krb5p</td>
<td>• Kerberos encryption for credentials in every NFS packet—wraps UID/GID of users/groups in RPC calls in a GSS wrapper<br>• User requesting access to the mount needs a valid Kerberos ticket (either via username/password or manual keytab exchange); the ticket expires after a specified time period and the user must reauthenticate for access<br>• All of the NFS packet payloads are encrypted with the GSS wrapper (cannot see file handles, permissions, file names, atime/mtime in packet captures)<br>• Includes integrity check<br>• NFS operation type is visible (FSINFO, ACCESS, GETATTR, and so on)<br>• Ancillary protocols (mount, portmap, nlm, and so on) are not encrypted (can see export paths, IP addresses)</td>
<td>• Worst performance of the security levels; krb5p has to encrypt/decrypt more<br>• Better performance than krb5p with NFSv4.x for high file count workloads</td>
</tr>
</tbody>
</table>
In Cloud Volumes Service, a configured Active Directory server is used as Kerberos server and LDAP server (to lookup user identities from an RFC2307 compatible schema). No other Kerberos or LDAP servers are supported. NetApp highly recommends that you use LDAP for identity management in Cloud Volumes Service. For information on how NFS Kerberos is shown in packet captures, see the section “Packet sniffing/trace considerations.”
**Data encryption at rest**
All volumes in Cloud Volumes Service are encrypted-at-rest using AES-256 encryption, which means all user data written to media is encrypted and can only be decrypted with a per-volume key.
- For CVS-SW, Google-generated keys are used.
- For CVS-Performance, the per-volume keys are stored in a key manager built into the Cloud Volumes Service.
Starting in November 2021, preview customer-managed encryption keys (CMEK) functionality was made available. This enables you to encrypt the per-volume keys with a per-project, per-region master key that is hosted in Google Key Management Service (KMS). KMS enables you to attach external key managers.
For information about configuring KMS for CVS-Performance, see Setting up customer-managed encryption keys.
Firewall
Cloud Volumes Service exposes multiple TCP ports to serve NFS and SMB shares:
- Ports required for NFS access
- Ports required for SMB access
Additionally, SMB, NFS with LDAP (including Kerberos), and dual-protocol configurations require access to a Windows Active Directory domain. Active Directory connections must be configured on a per-region basis. Active Directory domain controllers (DCs) are identified through DNS-based DC discovery using the specified DNS servers. Any of the DCs returned can be used. The list of eligible DCs can be limited by specifying an Active Directory site.
Cloud Volumes Service reaches out with IP addresses from the CIDR range allocated with the gcloud compute addresses command while onboarding Cloud Volumes Service. You can use this CIDR as the source range to configure inbound firewall rules on your Active Directory domain controllers.
Active Directory Domain Controllers must expose ports to the Cloud Volumes Service CIDRs as mentioned here.
---
First-Order Modular Logic Programs and their Conservative Extensions
(Extended Abstract)
Amelia Harrison
University of Texas at Austin
[email protected]
Yuliya Lierler
University of Nebraska Omaha
[email protected]
Abstract
This paper introduces first-order modular logic programs, which provide a way of viewing answer set programs as consisting of many independent, meaningful modules. We also present conservative extensions of such programs. This concept helps to identify strong relationships between modular programs as well as between traditional programs. For example, we illustrate how the notion of a conservative extension can be used to justify the common projection rewriting. This is a short version of a paper presented at the 32nd International Conference on Logic Programming [Harrison and Lierler, 2016].
1 Introduction
Answer set programming is a prominent knowledge representation paradigm with roots in logic programming [Leake, 2016]. It is especially useful for addressing combinatorial search problems. In answer set programming, a given computational problem is represented by a declarative program that describes the properties of a solution to the problem. Then, an answer set solver is used to generate answer sets, also called stable models, for the program. These models correspond to solutions to the original problem.
In this paper we show how some logic programs under the answer set semantics can be viewed as consisting of various “modules”, and how stable models of these programs can be computed by composing the stable models of the modules. We call collections of such modules first-order modular programs. To illustrate this approach consider the following two rules
\[ r(X, Y) \leftarrow \text{in}(X, Y). \quad (1) \]
\[ r(X, Y) \leftarrow r(X, Z), r(Z, Y). \quad (2) \]
Intuitively, these rules encode that the relation \( r \) is the transitive closure of the relation \( \text{in} \). The empty set is the only answer set of the program composed of these rules alone. Thus, in some sense the meaning of these two rules in isolation is the same as the meaning of any program that has a single answer set that is empty. (The empty program is an example of another program with a single empty answer set.) We show how to view rules (1) and (2) as a module and use the operator \( \text{SM} \) introduced by Ferraris et al. (2011) to define a semantics that corresponds more accurately to the intuitive meaning of these rules. The operator \( \text{SM} \) provides a definition of the stable model semantics for first-order logic programs that does not refer to grounding or fixpoints as does the original definition [Gelfond and Lifschitz, 1988]. The operator \( \text{SM} \) has proved an effective tool for studying properties of logic programs with variables, which are the focus of this paper.
Modularity is essential for modeling large-scale practical applications. Here we propose first-order modular programs and argue their utility for reasoning about answer set programs. We use the Hamiltonian Cycle problem as a running example to illustrate that a “modular” view of a program gives us
- a more intuitive reading of the parts of the program;
- the ability to incrementally develop modules or parts of a program that have stand-alone meaning and that interface with other modules via a common signature;
- a theory for reasoning about modular rewritings of individual components with a clear picture of the overall impact of such changes.
First-order modular programs can be viewed as a generalization of propositional modular logic programs [Lierler and Truszczynski, 2013]. In turn, propositional modular logic programs generalize the concept of modules introduced by Oikarinen and Janhunen 2008. ASP-FO logic [Denecker et al., 2012] is another related formalism. It is a modular formalization of generate-define-test answer set programming [Lifschitz, 2002] that allows for unrestricted interpretations as models, non-Herbrand functions, and first-order formulas in the bodies of rules. An ASP-FO theory is a set consisting of modules of three types: G-modules (G for generate), D-modules (D for define), and T-modules (T for test). In contrast, there is no notion of type among modules in the modular programs introduced here.
We also define conservative extensions for first-order modular programs. This concept is related to strong equivalence for logic programs [Lifschitz et al., 2001]. If two rules are strongly equivalent, we can replace one with the other within the context of any program and the answer sets of the resulting program coincide with those of the original one. Conservative extensions allow us to reason about rewritings even
when the rules in question have different signatures. The theorem stated at the end of this paper, for instance, shows that conservative extensions can be used to justify the projection rewriting [Faber et al., 1999], which is commonly employed to improve program performance. Consider the rule
\[ \leftarrow \text{not } r(X,Y), \text{edge}(X,Z), \text{edge}(Z',Y). \] (3)
which says that every vertex must be reachable from every other vertex. This rule can be replaced with the following three rules without affecting the stable models in an “essential way”
\[ \leftarrow \text{not } r(X,Y), v1(X), v2(Y). \]
\[ v1(X) \leftarrow \text{edge}(X,Z). \]
\[ v2(Y) \leftarrow \text{edge}(Z,Y). \]
Furthermore, this replacement is valid in the context of any program, as long as that program does not already contain either of the predicates \( v1 \) and \( v2 \). Currently, these performance-enhancing rewritings are done manually. We expect the theory about conservative extensions developed here to provide a platform for automating such rewritings in the future. We note that conservative extensions are related to the notion of knowledge forgetting in [Wang et al., 2014]. However, that work applies only to propositional programs.
2 Review: Traditional Programs
A (traditional logic) program is a finite set of rules of the form
\[ a_1; \ldots; a_k \leftarrow a_{k+1}, \ldots, a_l, \text{not } a_{l+1}, \ldots, \text{not } a_m, \text{not not } a_{m+1}, \ldots, \text{not not } a_n \quad (4) \]
\[ (0 \leq k \leq l \leq m \leq n), \] where each \( a_i \) is an atomic formula, possibly containing function symbols, variables, or the equality symbol, with the restriction that atomic formulas \( a_1, \ldots, a_k \) and \( a_{m+1}, \ldots, a_n \) may not contain the equality symbol. The expression containing atomic formulas \( a_{k+1} \) through \( a_n \) is called the body of the rule. A rule with an empty body is called a fact. An instance of a rule \( R \) in a program \( \Pi \) is a rule that can be formed by replacing all variables in \( R \) with ground terms formed from function symbols and object constants occurring in \( \Pi \). The process of grounding a traditional logic program consists of the following steps:
1. each rule is replaced with all of its instances by substituting ground terms for variables;
2. in each instance, every atomic formula of the form \( t_1 = t_2 \)
is replaced by \( \top \) if \( t_1 \) is the same as \( t_2 \) and by \( \bot \) otherwise.
It is easy to see that the resulting ground program does not have equality symbols and can be viewed as a propositional program. The answer sets of a traditional program \( \Pi \) are stable models of the result of grounding \( \Pi \), where stable models are defined using the fixpoint operation introduced in [Ferraris, 2005].
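The two grounding steps above can be made concrete with a small script. The following Python sketch is our own illustration, not part of the original paper: the rule encoding and the `ground_rule` helper are assumptions, and negation is ignored for brevity.

```python
from itertools import product

# A rule is (head_atoms, body_atoms); an atom is (pred, args); args are
# variables (uppercase strings) or object constants (lowercase strings).
# This encoding is an illustration, not the paper's notation.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def substitute(atom, binding):
    pred, args = atom
    return (pred, tuple(binding.get(a, a) for a in args))

def ground_rule(rule, constants):
    """Step 1: substitute ground terms for variables; step 2: evaluate '=' atoms."""
    head, body = rule
    variables = sorted({t for _, args in head + body for t in args if is_var(t)})
    for values in product(constants, repeat=len(variables)):
        binding = dict(zip(variables, values))
        g_head = [substitute(a, binding) for a in head]
        g_body, inconsistent = [], False
        for atom in (substitute(a, binding) for a in body):
            pred, args = atom
            if pred == "=":                 # step 2: equality elimination
                if args[0] != args[1]:
                    inconsistent = True     # body contains a false equality;
                    break                   # here we simply drop the instance
            else:
                g_body.append(atom)         # true equalities are removed
        if not inconsistent:
            yield (g_head, g_body)

# Example: grounding r(X,Y) <- in(X,Y) over constants {a, b}.
rule = ([("r", ("X", "Y"))], [("in", ("X", "Y"))])
for instance in ground_rule(rule, ["a", "b"]):
    print(instance)
```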
Traditional programs do not include some constructs available in ASP input languages. (For example, aggregate expressions and arithmetic are not covered.) Even so, they do cover a substantial practical fragment. In particular, according to [Ferraris and Lifschitz, 2005] and [Ferraris, 2005], rules of the form (4) are sufficient to capture the meaning of the commonly used choice rule construct. For instance, the choice rule \( \{ p(X) \} \leftarrow q(X) \) can be understood as an abbreviation for the rule \( p(X) \leftarrow q(X), \text{not not } p(X) \).
Consider the Hamiltonian Cycle problem on an undirected graph. A Hamiltonian Cycle is a subset of the set of edges in a graph that forms a cycle going though each vertex exactly once. A sample program for finding such a cycle can be constructed by adding rules (1), (2), and (3) to the following:
\[ \text{edge}(a,a'). \ldots \text{edge}(c,c'). \] (5)
\[ \text{edge}(X,Y) \leftarrow \text{edge}(Y,X). \] (6)
\[ \{ \text{in}(X,Y) \} \leftarrow \text{edge}(X,Y). \] (7)
\[ \leftarrow \text{in}(X,Y), \text{in}(X,Z), Y \neq Z. \] (8)
\[ \leftarrow \text{in}(X,Z), \text{in}(Y,Z), X \neq Y. \] (9)
\[ \leftarrow \text{in}(X,Y), \text{in}(Y,X). \] (10)
Each answer set of the Hamiltonian Cycle program above corresponds to a Hamiltonian cycle of the given graph, specified by facts (5), so that the predicate \text{in} encodes these cycles. If an atom \( \text{in}(a,b) \) appears in an answer set it says that the edge between \( a \) and \( b \) is part of the subset forming the Hamiltonian cycle. Intuitively,
- the facts in (5) define a graph instance by listing its edges, and rule (6) ensures that this edge relation is symmetric (since we are dealing with an undirected graph); the vertices of the graph are implicit—they are objects that occur in the edge relation;
- rule (7) says that any edge may belong to a Hamiltonian cycle;
- rules (8) and (9) impose the restriction that no two edges in a Hamiltonian cycle may start or end at the same vertex, and rule (10) requires that each edge appears at most once in a Hamiltonian cycle (recall that \( \text{in}(a,b) \) and \( \text{in}(b,a) \) both encode the information that the edge between \( a \) and \( b \) is included in a Hamiltonian cycle);
- rules (1) and (2) define a relation \( \text{r} \) (reachable) that is the transitive closure of relation \( \text{in} \);
- rule (3) ensures that every vertex in a Hamiltonian cycle must be reachable from every other vertex.
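As an aside, the intended reading of these rules can be checked mechanically. The Python sketch below is our own illustration (the graph encoding and function names are not from the paper); it tests whether a candidate set of in/2 atoms satisfies rules (7)-(10) and the reachability requirement expressed by rules (1)-(3).

```python
def symmetric_closure(edges):
    """Rule (6): the edge relation of an undirected graph is symmetric."""
    return edges | {(y, x) for (x, y) in edges}

def is_hamiltonian_candidate(edges, chosen):
    """Check a candidate 'in' relation against rules (1)-(3) and (7)-(10)."""
    edges = symmetric_closure(edges)
    vertices = {v for e in edges for v in e}
    # Rule (7): only edges of the graph may be chosen.
    if not chosen <= edges:
        return False
    # Rules (8) and (9): no two chosen edges start or end at the same vertex.
    starts = [x for (x, _) in chosen]
    ends = [y for (_, y) in chosen]
    if len(starts) != len(set(starts)) or len(ends) != len(set(ends)):
        return False
    # Rule (10): an edge may be chosen in at most one orientation.
    if any((y, x) in chosen for (x, y) in chosen):
        return False
    # Rules (1)-(2): r is the transitive closure of the chosen edges.
    reach = set(chosen)
    changed = True
    while changed:
        new = {(x, y) for (x, z) in reach for (w, y) in reach if z == w}
        changed = not new <= reach
        reach |= new
    # Rule (3): every vertex must be reachable from every other vertex.
    return all((x, y) in reach for x in vertices for y in vertices if x != y)

# A triangle: the cycle a-b-c-a is a Hamiltonian cycle.
triangle = {("a", "b"), ("b", "c"), ("c", "a")}
print(is_hamiltonian_candidate(triangle, {("a", "b"), ("b", "c"), ("c", "a")}))  # True
```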
Clearly, rules in the program can be grouped according to intuitive meaning. Yet, considering these groups separately will not produce “meaningful” logic programs under the answer set semantics as discussed in the introduction. In this paper, we show how we can view each of these groups of rules as a separate module, and then use the SM operator [Ferraris et al., 2011; 2009], along with a judicious choice of “intensional” and “extensional” predicates to achieve a more accurate correspondence between the intuitive reading of the groups of rules and their model-theoretic semantics.
3 Operator SM
The SM operator was introduced by Ferraris et al. (2011; 2009). There are a few key differences between the original stable model semantics and the semantics provided by the
\footnote{This precludes graphs that include isolated vertices.}
SM operator that make the latter a convenient formalism for facilitating a view of groups of rules in a program as separate units or “modules”. First, the SM operator does not rely on grounding, but instead operates on first-order sentence representations of logic programs. Secondly, the semantics as defined by SM allows for a distinction between “extensional” and “intensional” predicates. Intuitively, “extensional” predicates correspond to the input of the program, or module, and “intensional” predicates correspond to output or auxiliary concepts. Finally, unlike the original stable model semantics, the SM operator does not involve a fixpoint calculation, but instead defines stable models as models of a second-order formula. The result of applying the SM operator to a first-order sentence $F$ with intensional predicates $p$ is a second-order formula, denoted SM$_p[F]$.
The SM operator applies to first-order sentences, rather than to logic programs. Yet, traditional logic programs can be identified with the first-order sentences of a particular form. For example, we understand the Hamiltonian Cycle presented in Section 2 as an abbreviation for the conjunction
$$\cdots \land \forall x \forall y ((\neg \neg \text{in}(x,y) \land \text{edge}(x,y)) \rightarrow \text{in}(x,y)) \land \cdots \quad (11)$$
of the universal closures of the formulas corresponding to its rules (only the conjunct corresponding to rule (7) is shown here).
### 4 Modular Logic Programs
A first-order modular logic program is a collection of logic programs, where the SM operator is used to compute models of each individual logic program in the collection.
We call a formula of the form $\text{SM}_p[F]$, where $p$ is a tuple of predicate symbols and $F$ is a traditional program, a defining module (of $p$ in $F$) or a def-module. We can view any traditional program $\Pi$ as a def-module $\text{SM}_p[\Pi]$, where $p$ is the list of all predicates occurring in $\Pi$. A first-order modular logic program (or, modular program) $P$ is a finite set of def-modules
\[\{\text{SM}_{p_1}[F_1], \ldots, \text{SM}_{p_n}[F_n]\}.\]
By $\sigma(P)$ we denote the set of all function and predicate symbols occurring in a modular program $P$, also called the signature of $P$. A stable model of a modular program $P$ is any interpretation over $\sigma(P)$ that is a model of the conjunction of all def-modules in $P$.
We now illustrate how modular programs capture the encoding (11) of the Hamiltonian Cycle so that each of its modules carries its intuitive meaning. The modular program $P_{hc}$ consists of five def-modules:
\begin{align*}
& \text{SM}_{\text{edge}}[\text{edge}(a,a') \land \ldots \land \text{edge}(c,c') \land \forall x y (\text{edge}(x,y) \rightarrow \text{edge}(y,x))] && (12) \\
& \text{SM}_{\text{in}}[\forall x y ((\neg \neg \text{in}(x,y) \land \text{edge}(x,y)) \rightarrow \text{in}(x,y))] && (13) \\
& \text{SM}[\forall x y z ((\text{in}(x,y) \land \text{in}(x,z) \land \neg (y = z)) \rightarrow \bot) \land {} \\
& \qquad \forall x y z ((\text{in}(x,z) \land \text{in}(y,z) \land \neg (x = y)) \rightarrow \bot) \land {} \\
& \qquad \forall x y ((\text{in}(x,y) \land \text{in}(y,x)) \rightarrow \bot)] && (14) \\
& \text{SM}_{r}[\forall x y (\text{in}(x,y) \rightarrow r(x,y)) \land \forall x y z ((r(x,z) \land r(z,y)) \rightarrow r(x,y))] && (15) \\
& \text{SM}[\forall x y z z' ((\neg r(x,y) \land \text{edge}(x,z) \land \text{edge}(z',y)) \rightarrow \bot)] && (16)
\end{align*}
where $a, a', \ldots, c, c'$ are object constants and $x, y, z, z'$ are variables.
The answer sets of any traditional program $\Pi$ that contains at least one object constant coincide with Herbrand models of $\text{SM}_p[\Pi]$, where $p$ is the list of all predicates occurring in $\Pi$.
### 5 Relating Modular Programs and Traditional Programs
We view a traditional logic program as an abbreviation for a first-order sentence formed as a conjunction of formulas of the form
\[\forall \big( a_{k+1} \land \ldots \land a_l \land \neg a_{l+1} \land \ldots \land \neg a_m \land \neg\neg a_{m+1} \land \ldots \land \neg\neg a_n \rightarrow a_1 \lor \cdots \lor a_k \big), \quad (17)\]
which corresponds to rule (4). The symbol $\forall$ denotes universal closure. We call the disjunction in the consequent of a rule (17) its head, and the conjunction in the antecedent its body. The conjunction $a_{k+1} \land \ldots \land a_l$ constitutes the positive part of the body. A modular program is called simple if for every def-module $\text{SM}_p[F]$, every predicate symbol occurring in the head of a rule in $F$ occurs also in the tuple $p$. For instance, $P_{hc}$ is a simple modular program.
The dependency graph [Ferraris et al., 2009] of a simple modular program $P$, denoted $\text{DG}[P]$, is a directed graph that
- has all intensional predicates in $P$ as vertices, and
- has an edge from $p$ to $q$ if there is a def-module $\text{SM}_p[F] \in P$ containing a rule with $p$ occurring in the head and $q$ occurring in the positive part of the body.
We call a simple modular program $P$ coherent if
(i) no two def-modules in $P$ have overlapping intensional predicates, and
(ii) every strongly connected component in the dependency graph of $P$ is contained within $p$ for some def-module $\text{SM}_p[F]$ in $P$.
From the symmetric splitting result from [Ferraris et al., 2009] it follows that the Herbrand stable models of a coherent modular program $P$ that (i) contains at least one object constant and (ii) has each predicate symbol in $P$ occurring in $p$ for some def-module $\text{SM}_p[F]$, coincide with the answer sets of the traditional program constructed as the conjunction of all first order sentences occurring in this modular program.
The strongly connected components of the dependency graph of $P_{hc}$ each consist of a single vertex. It is easy to check that the Hamiltonian Cycle program $P_{hc}$ is coherent and that all of its predicate symbols are intensional in some def-module. Therefore, the Herbrand models of $P_{hc}$ coincide with the answer sets of (11) so that answer set solvers can be used to find these models.
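Both coherence conditions are mechanical to verify. The sketch below is our own illustration under an ad hoc encoding of def-modules (a set of intensional predicates plus rules reduced to a head predicate and its positive-body predicates); it checks conditions (i) and (ii) with a standard strongly-connected-components computation and confirms that the defining modules of $P_{hc}$ pass (the constraint modules contribute no heads and are omitted).

```python
def strongly_connected_components(vertices, edges):
    """Kosaraju's algorithm; edges is a set of (u, v) pairs over vertices."""
    fwd, rev = {v: [] for v in vertices}, {v: [] for v in vertices}
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for w in fwd[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    comps, assigned = [], set()
    def dfs2(u, comp):
        assigned.add(u)
        comp.add(u)
        for w in rev[u]:
            if w not in assigned:
                dfs2(w, comp)
    for v in reversed(order):
        if v not in assigned:
            comp = set()
            dfs2(v, comp)
            comps.append(comp)
    return comps

def is_coherent(modules):
    """modules: list of (intensional_preds, rules); a rule is (head_pred, pos_body_preds)."""
    # Condition (i): no two def-modules share intensional predicates.
    for i, (p1, _) in enumerate(modules):
        for p2, _ in modules[i + 1:]:
            if p1 & p2:
                return False
    # Dependency graph: edge from a head predicate to a positive-body predicate.
    intensional = set().union(*(p for p, _ in modules)) if modules else set()
    edges = {(h, q) for p, rules in modules for h, body in rules
             for q in body if h in p and q in intensional}
    # Condition (ii): every SCC lies within one module's intensional predicates.
    return all(any(scc <= p for p, _ in modules)
               for scc in strongly_connected_components(intensional, edges))

# Defining modules of P_hc: edge, in, and r (rules reduced to predicate level).
modules = [({"edge"}, [("edge", {"edge"})]),
           ({"in"},   [("in", {"edge"})]),
           ({"r"},    [("r", {"in"}), ("r", {"r"})])]
print(is_coherent(modules))  # True
```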
Arguably, when answer set practitioners develop their applications they intuitively associate meaning with components of their programs. We believe that modular programs as introduced here provide us with a suitable model for understanding the meaning of these components.
6 Conservative Extensions
In this section, we study the question of how to formalize common rewriting techniques used in answer set programming, such as projection, and argue their correctness.
For an interpretation $I$ over signature $\sigma$ and a function symbol (or predicate symbol) $t$ from $\sigma$, by $t^I$ we denote the function (or relation) assigned to $t$ by $I$. Let $\sigma$ and $\Sigma$ be signatures so that $\sigma \subseteq \Sigma$. For an interpretation $I$ over $\Sigma$, by $I_\sigma$ we denote the interpretation over $\sigma$ such that for every function or predicate symbol $t$ in $\sigma$, $t^{I_\sigma} = t^I$. Let $P$ and $P'$ be modular logic programs such that the set of all predicates occurring in $P$ is a subset of the set of all predicates in $P'$ and both programs share the same function symbols. We say that $P'$ is a conservative extension of $P$ if $M \mapsto M_{\sigma(P)}$ is a 1-1 correspondence between the models of $P'$ and the models of $P$. It turns out that we can replace def-modules in a modular program with their conservative extensions and are guaranteed to obtain a conservative extension of the original modular program. Thus, conservative extensions of def-modules allow us to establish something similar to strong equivalence while accounting for the possibility of different signatures.
For example, consider the choice rule $\{p\}$, a shorthand for the rule $p \leftarrow \text{not not } p$. In some answer set programming dialects double negation is not allowed in the body of a rule. It is then common to simulate a choice rule as above by introducing an auxiliary atom $\hat{p}$ and using the rules $p \leftarrow \text{not } \hat{p}$ and $\hat{p} \leftarrow \text{not } p$. It is easy to check that $\text{SM}_{p,\hat{p}}[(\neg \hat{p} \rightarrow p) \land (\neg p \rightarrow \hat{p})]$ is a conservative extension of $\text{SM}_p[\neg\neg p \rightarrow p]$. It follows that we can replace the latter with the former within the context of any modular program not containing the predicate symbol $\hat{p}$, and obtain a conservative extension of the original program.
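For the propositional case this claim can be verified by brute force. The Python sketch below is our own check, not code from the paper: it uses the standard reduct-based treatment of stable models for rules with "not" and "not not" (writing p_hat for the auxiliary atom $\hat{p}$), enumerates the stable models of both programs, and confirms the 1-1 correspondence after projecting the auxiliary atom away.

```python
from itertools import chain, combinations

# A rule: (head, positive, negated, double_negated), each a frozenset of atoms;
# an empty head encodes a constraint. This encoding is our own illustration.

def rule(h, p=(), n=(), nn=()):
    return (frozenset(h), frozenset(p), frozenset(n), frozenset(nn))

def powerset(atoms):
    atoms = sorted(atoms)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1)))

def minimal_model(positive_rules):
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in positive_rules:
            if body <= model and head and not (head & model):
                model.add(min(head))   # all heads in this example are singletons
                changed = True
    return frozenset(model)

def stable_models(rules, atoms):
    result = []
    for candidate in powerset(atoms):
        # Reduct: drop rules blocked by 'not' or unsatisfied 'not not' literals.
        reduct = [(h, p) for h, p, n, nn in rules
                  if not (n & candidate) and nn <= candidate]
        # Reject candidates that violate a constraint of the reduct.
        if any(not h and p <= candidate for h, p in reduct):
            continue
        if minimal_model(reduct) == candidate:
            result.append(candidate)
    return result

original = [rule({"p"}, nn={"p"})]                                # p <- not not p
rewritten = [rule({"p"}, n={"p_hat"}), rule({"p_hat"}, n={"p"})]  # p <- not p_hat ; p_hat <- not p

m1 = set(stable_models(original, {"p"}))
m2 = {m & {"p"} for m in stable_models(rewritten, {"p", "p_hat"})}
print(m1 == m2)   # True: both yield exactly {} and {p} on the atom p
```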
Similarly, replacing def-module (16) in $P_{hc}$ by
\[ \text{SM}_{v1,v2}[\forall x y ((\neg r(x,y) \land v1(x) \land v2(y)) \rightarrow \bot) \land \forall x z (\text{edge}(x,z) \rightarrow v1(x)) \land \forall z y (\text{edge}(z,y) \rightarrow v2(y))] \quad (18) \]
results in a conservative extension of the original program. This follows from a more general fact about the projection rewriting stated below.
Let $R$ be a rule (17) occurring in a traditional logic program $F$, and let $x$ be a non-empty tuple of variables occurring only in the body of $R$. By $\alpha(x,y)$ we denote the conjunction of all conjunctive terms in the body of $R$ that contain at least one variable from $x$, where $y$ denotes all the variables occurring in these conjunctive terms but not occurring in $x$. By $\beta$ we denote the set of all conjunctive terms in the body of $R$ that do not contain any variables in $x$. By $\gamma$ we denote the head of $R$. Let $t$ be a predicate symbol that does not occur in $F$. Then the result of projecting variables $x$ out of $R$ using predicate symbol $t$ is the conjunction
\[
\forall \big( (t(y) \land \beta) \rightarrow \gamma \big) \land \forall \big( \alpha(x,y) \rightarrow t(y) \big).
\]
We can project variables out of a traditional logic program by successively projecting variables out of rules. For example, first projecting $z$ out of the traditional logic program in (16) and then projecting $z'$ out of the first rule of the resulting program yields program (18).
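The projection step itself is purely syntactic and easy to mechanize. The Python sketch below uses our own rule encoding (the `project` function and the atom representation are illustrative, not from the paper; the negated literal of rule (3) is treated as an ordinary body atom for simplicity).

```python
def variables(atom):
    """Variables are written as uppercase strings in this toy encoding."""
    _, args = atom
    return {a for a in args if a[:1].isupper()}

def project(rule, x, t):
    """Project the variables in x out of rule (head, body) using fresh predicate t.

    Returns two rules: the original rule with alpha replaced by t(y), and the
    defining rule t(y) <- alpha, as in the rewriting that yields (18).
    """
    head, body = rule
    x = set(x)
    alpha = [a for a in body if variables(a) & x]        # conjuncts mentioning x
    beta = [a for a in body if not (variables(a) & x)]   # the remaining conjuncts
    y = sorted(set().union(*(variables(a) for a in alpha)) - x) if alpha else []
    t_atom = (t, tuple(y))
    return (head, [t_atom] + beta), (t_atom, alpha)

# Rule (3):  <- not r(X,Y), edge(X,Z), edge(Z1,Y); Z1 stands for Z', and the
# literal 'not r(X,Y)' is encoded as the atom not_r(X,Y) in this sketch.
rule3 = (None, [("not_r", ("X", "Y")), ("edge", ("X", "Z")), ("edge", ("Z1", "Y"))])
step1_main, def_v1 = project(rule3, {"Z"}, "v1")
step2_main, def_v2 = project(step1_main, {"Z1"}, "v2")
print(step2_main)  # (None, [('v2', ('Y',)), ('v1', ('X',)), ('not_r', ('X', 'Y'))])
print(def_v1)      # (('v1', ('X',)), [('edge', ('X', 'Z'))])
print(def_v2)      # (('v2', ('Y',)), [('edge', ('Z1', 'Y'))])
```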
Theorem Let $\text{SM}_{p_1,...,p_k}[F]$ be a def-module and $R$ be a rule in $F$. Let $x$ denote a non-empty tuple of variables occurring in the body of $R$, but not in the head. If $G$ is constructed from $F$ by replacing $R$ in $F$ with the result of projecting variables $x$ out of $R$ using a predicate symbol $p_{k+1}$ that is not in the signature of $F$, then $\text{SM}_{p_1,...,p_{k+1}}[G]$ is a conservative extension of $\text{SM}_{p_1,...,p_k}[F]$.
This theorem can be restated in terms of traditional logic programs using the fact that any traditional program can be viewed as a def-module.
7 Conclusion
In this paper, we introduced first-order modular logic programs as a way of viewing logic programs as consisting of many independent, meaningful modules. We also defined conservative extensions, which like strong equivalence for traditional programs, can be useful for reasoning about traditional programs and modular programs. We showed how these concepts justify the common projection rewriting.
Acknowledgments
Many thanks to Joshua Irvin, Vladimir Lifschitz, and Miroslaw Truszczynski for useful discussions regarding ideas in this paper. Thanks as well to the anonymous referees for helpful comments. Amelia Harrison was partially supported by the National Science Foundation under Grant IIS-1422455. Yuliya Lierler was partially supported by University Committee on Research and Creative Activity from the University of Nebraska at Omaha in Summer 2016.
References
Cross-Layer Feedback Architecture for Mobile Device Protocol Stacks
Vijay T. Raisinghani and Sridhar Iyer, Indian Institute of Technology Bombay
ABSTRACT
Applications using traditional protocol stacks (e.g., TCP/IP) from wired networks do not function efficiently in mobile wireless environments. This is primarily due to the layered architecture and implementation of protocol stacks. One mechanism to improve the efficiency of the stack is cross-layer feedback, that is, making information from within one layer available to another layer of the stack. For example, TCP retransmissions can be reduced by making it aware of network disconnections or handoff events. We highlight the need for a cross-layer feedback architecture and identify key design goals for an architecture. We present our ECLAIR architecture, which satisfies these design goals. We describe a prototype implementation that validates ECLAIR. We also discuss other cross-layer architectures and provide a cross-layer design guide.
INTRODUCTION
To ensure interoperability with the existing Internet, standard protocol stacks (e.g., Transmission Control Protocol (TCP)/Internet Protocol (IP) [1]) are being deployed, even in mobile wireless setups, that is, on the mobile devices and intermediate nodes in the wireless network. However, these standard protocol stacks function inefficiently in mobile wireless environments [2]. This is primarily due to the layered architecture and implementation of protocol stacks. We highlight the inefficiencies of layered protocol stacks by using TCP as an example.
TCP is an end-to-end reliable transport protocol. TCP at the sender uses acknowledgments from the receiver as a signal to send additional packets. A missing acknowledgment is interpreted as an indication of packet loss due to congestion in the network. However, in mobile wireless environments packet losses are also caused by poor wireless channel conditions and disconnections. Since TCP is unaware of these channel conditions, it invokes its standard algorithm and reduces its throughput. It can be seen that TCP throughput can be improved by making it aware of the wireless channel conditions. For example, the retransmissions could be deferred until the channel conditions improve. There are various methods of improving TCP performance, which entail modifications at the end station(s) or base station/router. We refer interested readers to [2] for an introduction to TCP algorithms, problems related to TCP in wireless environments, and the various solutions proposed in literature.
In wireless environments, the performance of other layers as well can be improved by enabling cross-layer feedback [3]. The feedback could be from layers above or below a layer (discussed further below).
As new wireless networks are deployed, various cross-layer feedback optimizations will be required to enhance the performance of the existing protocol stacks. These cross-layer optimizations would require easy integration with the existing stack. If the cross-layer optimizations were implemented in an ad hoc manner, it would lead to:
• Decreased execution efficiency of the stack
• Difficulty in ensuring protocol correctness of the protocols modified using cross-layer feedback
• Difficulty in maintaining the cross-layer optimizations
Thus, to help standardize and ease the development, deployment, and maintenance of the various cross-layer optimizations, an appropriate architecture is essential. Existing approaches to cross-layer feedback do not satisfy all of these requirements.
Cross-layer feedback optimizations may need to be implemented at the intermediate nodes (base station or router) or mobile host (MH). We focus on cross-layer feedback in the MH since we believe that it would be easier to implement changes on the end-devices than in the network.
Our architecture ECLAIR [4] provides a guideline for designing and implementing cross-layer feedback in an easy and efficient manner on a mobile device. ECLAIR exploits the fact that protocol behavior is determined by the values stored in the protocol’s data structures. In ECLAIR, a tuning layer (TL) for each layer, provides an interface to read and update these protocol data-structures. TLs are used by protocol optimizers (POs), which contain cross-layer feedback algorithms. The POs form the optimizing subsystem (OSS).
We briefly explain ECLAIR's prototype implementation and validation. We then provide guidelines for architecture selection and ECLAIR usage.
CROSS-LAYER FEEDBACK ARCHITECTURE
An ad hoc approach could be used to implement cross-layer feedback, that is, blocks of code could be introduced in the existing layers to enable cross-layer feedback. For example, to enable TCP to receive hand-off information from the MAC layer, additional code would be introduced in the TCP and MAC layers. This additional code in TCP would query the MAC layer and determine the TCP adaptation, while the additional code in the MAC layer would provide an interface for querying the MAC layer's internal state.
We note that an ad hoc approach to cross-layer feedback has the following problems:
- Each additional cross-layer feedback code block would slow down the execution of a layer (e.g., TCP) and thus reduce the throughput of that layer. If a layer interacts with many other layers, this would lead to a large reduction in its throughput.
- The cross-layer feedback code will have to be rewritten for porting to other operating systems.
- Multiple cross-layer optimizations within a layer could lead to conflicts [5] and hence difficulty in ensuring correctness of the layer's algorithms.
- Once added to a layer, cross-layer feedback code would be difficult to update or remove, since the code would be intertwined with regular-layer code.
- Trial (fast prototyping) of new cross-layer feedback ideas would not be easy, since the layer code would need to be modified.
The above problems of an ad hoc approach highlight the need for an architecture for cross-layer feedback. From the above, we can derive the design goals of a cross-layer feedback architecture.
Design Goals for a Cross-Layer Feedback Architecture
- **Rapid prototyping**, which would enable easy development and deployment of new cross-layer feedback optimizations, independent of existing stack.
- **Minimum intrusion**, which would enable interfacing with the existing stack without significant changes to it. Here, significant changes means too many or overly large code modifications to the layer(s). This would aid maintainability, that is, easy extension or reversal of the cross-layer optimization and protection of the correctness of the stack, with minimal effort.
- **Portability**, which would enable easy porting to different systems.
- **Efficiency**, which would enable efficient (minimum execution overhead) implementation of cross-layer feedback.
The above goals motivated our study of the need for a cross-layer feedback architecture. In the next section, we discuss existing approaches to cross-layer feedback.
RELATED WORK
We discuss various architectures for cross-layer feedback within the mobile device.
One of the early proposals is the Physical Media Independence (PMI) [6] architecture. In PMI, cross-layer feedback is achieved through guard modules and adaptation modules. PMI is
aimed at monitoring the network interface availability. Guard modules monitor interface characteristics such as connected, powered, and so forth. Adaptation modules attached to each layer of the network stack receive policy-related information from higher-layer modules, and event indications from lower-layer modules. The adaptation modules adapt the respective layers using the operating system utilities. The information about interface events propagates layer by layer. For example, if the MAC layer receives an event for adaptation, it would adapt its behavior first and then propagate the information to the next higher layer.
An architecture that is focused on the network environment was proposed in [7]. In this architecture, cross-layer feedback is achieved through Internet Control Message Protocol (ICMP) messages. The physical/MAC layers, network layer, and application layer/user monitor the network for events such as bandwidth change, hand-off, and so on. When an event occurs, the information is propagated to the upper layers through ICMP messages (we refer to this as the ICMP-arch). These ICMP messages are generated by some module running on the system and contain all the event-related information. A special handler at the socket layer traps these messages, adapts protocols, and also propagates the information to the applications. The applications register for events using the Application Programming Interface (API) provided. The protocol adaptations are defined by the application developer using the API provided.
Cross-layer information can also be exchanged through an Interlayer Signaling Pipe (ISP) [8], that is, through packet headers. This is suitable for cases where some adaptation may be required at lower layers for each packet from higher layers. However, this requires that lower layers be able to read higher-layer headers. This necessitates modification to the layer code where adaptation is required. For cross-layer feedback from lower to higher layers, the lower layers would need to change the packet header, which could lead to packet errors.
Cross-Layer Signaling Shortcuts (CLASS) was proposed in [9]. CLASS allows direct interaction between the layers. For example, the application layer can directly interact with the link layer. However, CLASS has drawbacks similar to that of an ad hoc approach.
MobileMan [10] adds another stack component called Network Status. This component is a repository provided for network information sharing among the layers. The access to Network Status is standardized. MobileMan recommends replacing the standard protocol layer with a redesigned network-status-oriented protocol, so that the protocol can interact with Network Status. MobileMan has been deployed on experimental testbeds for ad hoc networks.
The framework in [11] proposes a cross-layer manager. The protocol layers expose events and state variables to the cross-layer manager. Management algorithms are woken up by the events. The cross-layer manager uses the state variables to query/set the protocol internal state. Four interlayer coordination planes are identified, namely, security, quality of service, mobility, and wireless link adaptation. Internal details of this framework are not available.
The above examples of cross-layer feedback focus on improvements within the protocol stack. The Global Resource Adaptation through Cooperation (GRACE) framework [12] is aimed at cross-layer adaptation across the hardware, software (OS), and application layers. However, GRACE does not address adaptation of any of the protocol stack layers.
The cross-layer architectures proposed in the literature that focus on cross-layer interaction within the stack on the mobile device [6–11] do not address all the design goals identified above. These architectures do not fully address the goals of rapid prototyping, maintainability, portability, and efficiency. In MobileMan [10], it is recommended that the protocol layer be replaced by a redesigned protocol. This would lead to increased implementation and maintenance efforts. Further, the layers may need to be changed in case the Network Status component is enhanced. Efficiency would be lower in architectures such as ICMP-arch [7], since the information is wrapped in ICMP messages, which increases the event communication overheads. In PMI [6] as well, the event information propagates layer by layer, which would decrease the cross-layer execution speed. In ISP [8] the overhead of scanning each packet and adapting would slow down the execution of the lower layers and thus reduce throughput. Further, there is no provision for any-to-any layer-event communication in PMI [6], ICMP-arch [7], or ISP [8].
We refer interested readers to [5] for useful caveats and principles related to cross-layer feedback design and to [3] for a survey of cross-layer feedback optimizations.
Thus far we have presented the existing approaches to cross-layer feedback. As discussed, these approaches do not fully address the design goals identified in the previous section. In the next section we present our proposal for a cross-layer feedback architecture — ECLAIR — which is based on the design goals presented above.
**ECLAIR Design**
For enabling rapid prototyping of new cross-layer feedback optimizations, ECLAIR is split into two subsystems: TLs and OSS. Figure 1 shows the details of ECLAIR.
**Tuning Layers** — The purpose of a TL is to provide an interface to protocol data structures that determine the protocol’s behavior. For example, a TCP tuning layer (TCPTL) is provided for TCP.
For ease of reference, we group the TLs according to their function. For example, Transport TL refers to the collection of transport protocol TLs such as TCPTL for TCP, UDPTL for UDP, and so forth.
A TL can read and update the protocol data-structures. A protocol implementation typically has data-structures for control and data. A protocol’s behavior is determined by its control data-structures. For example, in Linux, TCP control information is stored in a data structure struct tcp_opt embedded within the socket data structure struct sock. The interested reader can refer to standard texts on Linux TCP/IP internals for details.
For the purpose of portability, a TL is subdivided into a generic tuning sublayer and an implementation-specific sublayer [4] (Fig. 1).
**Optimizing Subsystem** — The OSS contains the algorithms and data-structures for cross-layer optimizations. The OSS contains many protocol optimizers (POs). A PO contains the algorithm for a particular cross-layer optimization. For an optimizing action (Fig. 1; solid line, solid arrowhead), a PO invokes a function in the TL, using the TL’s API. The PO (or POs) registers for events with TLs, using the register API (Fig. 1; dashed line, hollow arrowhead). The TLs notify the registered POs whenever an event occurs. The PO also uses TL APIs for querying the current state of the protocol layer which is to be modified (e.g., the TCP’s state could be congestion avoidance or slow start phase).
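The register/notify/act pattern described above can be pictured with a small sketch. The Python below is purely illustrative: the class and method names (TuningLayer, ProtocolOptimizer, register, notify, read, update) are our own stand-ins and do not correspond to the actual ECLAIR APIs or to kernel data structures.

```python
class TuningLayer:
    """Generic tuning sublayer: exposes read/update access to one protocol layer."""
    def __init__(self, name):
        self.name = name
        self._subscribers = {}     # event name -> list of registered POs
        self._state = {}           # stand-in for the protocol's control data

    def register(self, event, optimizer):
        self._subscribers.setdefault(event, []).append(optimizer)

    def notify(self, event, info):
        for po in self._subscribers.get(event, []):
            po.on_event(self.name, event, info)

    def read(self, key):
        return self._state.get(key)

    def update(self, key, value):  # optimizing action on the layer's control data
        self._state[key] = value


class ProtocolOptimizer:
    """A PO holds one cross-layer feedback algorithm and uses TL APIs to act."""
    def __init__(self, tcp_tl, mac_tl):
        self.tcp_tl, self.mac_tl = tcp_tl, mac_tl
        mac_tl.register("handoff", self)           # subscribe to MAC-layer events

    def on_event(self, layer, event, info):
        if event == "handoff":
            # Query the current TCP state, then tune its control data asynchronously.
            if self.tcp_tl.read("state") == "congestion_avoidance":
                self.tcp_tl.update("rto_frozen", True)


# The OSS is simply the collection of POs running alongside the stack.
tcp_tl, mac_tl = TuningLayer("TCP"), TuningLayer("MAC")
tcp_tl.update("state", "congestion_avoidance")
oss = [ProtocolOptimizer(tcp_tl, mac_tl)]
mac_tl.notify("handoff", {"new_ap": "AP-2"})
print(tcp_tl.read("rto_frozen"))   # True
```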
---
**Figure 2. Generic tuning sublayer: example APIs.**
The OSS executes concurrently with the existing protocol stack and does not increase the stack-processing overhead.
Some example APIs of the generic tuning sublayer are presented in Fig. 2, where MAC and Physical TL APIs for the 802.11 Wireless LAN standard are shown.
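Since Figure 2 is not reproduced here, the stub below merely suggests the flavour of such generic tuning sublayer APIs for an 802.11 MAC/Physical TL; every name in it is hypothetical and should not be read as the actual ECLAIR interface.

```python
class WLAN80211TuningLayer:
    """Hypothetical generic tuning sublayer API for an 802.11 MAC/PHY layer."""

    def get_signal_strength(self):             # query current channel quality
        raise NotImplementedError              # filled in by the implementation-specific sublayer

    def get_current_access_point(self):        # query association state
        raise NotImplementedError

    def register_handoff_event(self, optimizer):
        raise NotImplementedError              # notify the PO when a handoff occurs

    def set_power_save_mode(self, enabled):    # optimizing action on the MAC layer
        raise NotImplementedError
```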
Besides meeting the design goals highlighted earlier, ECLAIR provides additional benefits. Since the cross-layer system is separate, it can be easily/dynamically enabled or disabled. Also, individual POs may be enabled or disabled. Besides the layer-specific TLs, ECLAIR also has a User TL (UTL). UTL allows a device user or an external entity (e.g., a distributed algorithm or a base station) to tune the device behavior. Lastly, ECLAIR allows any-to-any layer communication through the POs.
In the next section, we present a prototype implementation of ECLAIR. The prototype is based on user feedback to TCP [13, 14]. User feedback has also been proposed by other researchers in different contexts. However, we restrict the focus of this article to the architectural aspects of cross-layer feedback.
**ECLAIR IMPLEMENTATION: USER FEEDBACK**
Users can provide useful feedback for improving the performance of the stack or the user experience [13, 14]. One example is when a user may want to control the throughput of applications running on the device. For example, a user may want one file download to get more bandwidth than another.
One method of controlling the application’s bandwidth share is through manipulation of the receiver window of its TCP connection [13, 14]. TCP uses congestion- and flow-control mechanisms to avoid swamping the network or the receiver [1]. The receiver reflects its receive buffer status by the advertised window field in the acknowledgments to the sender. When the network losses are low, the send rate of a TCP sender is determined by this advertised window. This property can be exploited to intentionally restrict the throughput of some applications on the mobile device. This would lead to increased throughput for the rest of the applications.
**Algorithm:** The user assigns some priority number to each application. An application’s priority number is used to calculate its receiver window.
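The paper does not spell out the exact window formula at this point, so the sketch below simply assumes a proportional share: each application's receiver window is its priority's fraction of the total receive buffer. Function and parameter names are our own.

```python
def receiver_windows(priorities, total_buffer, mss=1460):
    """Split the receive buffer among applications in proportion to priority.

    priorities: dict mapping application id -> positive priority number.
    Returns per-application receiver window sizes in bytes. The proportional
    rule used here is an assumption, not the formula used in the paper.
    """
    total_priority = sum(priorities.values())
    windows = {}
    for app, priority in priorities.items():
        share = int(total_buffer * priority / total_priority)
        windows[app] = max(mss, share)   # never advertise less than one segment
    return windows

# The user gives one download three times the priority of the other.
print(receiver_windows({"download_a": 3, "download_b": 1}, total_buffer=65535))
# {'download_a': 49151, 'download_b': 16383}
```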
**Implementation:** The use of ECLAIR for the above PO, the receiver window control PO (RWC PO), is shown in Fig. 3.
The explanation of the sequence shown in Fig. 3 is as follows: (1) TCPTL reads data-structure location information at system start; (2a), (2b) the PO registers for user events and the user changes priorities for running applications; (3) application and respective priority information is passed to the RWC PO; (4a), (4b) current receiver window/buffer information is collected via TCPTL (this information is used to re-calculate the new receiver window values for the various applications, and it is assumed that the application can be identified by the sockets); (5a), (5b) the receiver window values are set for each application.
In Fig. 3, the dotted lines from `sock` represent the memory references from `sock` to other data structures.
We refer the interested reader to [4] for the design of Mobile-IP and TCP interaction, using ECLAIR.
Next, we present the implementation details of user feedback (RWC) based on the architecture presented above.
**User Feedback: Implementation Details** — We chose Linux for the implementation since its source code is freely available and modifiable. The relevant TCP data-structures are in the header file `sock.h`. `tcp_opt` is TCP’s control data-structure. `sock` is the socket data-structure. `window_clamp` and `rcv_ssthresh` are used for controlling the advertised window in TCP.
Figure 4 shows the call flow of the RWC prototype implementation using ECLAIR. Our current implementation largely has TL functionality only. In this prototype the RWC calculation is performed by the user. The parameters are passed to the TL to change the control parameters (receiver window) in the socket. The IP address parameter is used to identify the application’s TCP socket within which the receiver window value is to be changed.
The RWC PO and TL are coded in a single kernel-loadable module. No modification was required to the existing TCP layer code. Interested readers can refer to standard texts about Linux device drivers for details about writing Linux kernel modules.
Using the above implementation, on a Linux desktop, we conducted experiments over our department’s wireless LAN. The desktop was connected to our department LAN using 802.11 wireless LAN equipment. We started two http file-transfer sessions from the desktop to two Web servers on our department LAN. The desktop was the receiver. Figure 5 shows the result when RWC was not invoked. The flow that starts first (flow 1) gets most of the bandwidth. In another set of experiments, during the transfer, RWC was invoked on the desktop in order to reduce the receiver window of flow 1. The resulting graph (Fig. 6) shows the decrease in throughput of the session (flow 1) controlled by RWC and an increase in throughput of the other session (flow 2). These experiments validate our ECLAIR implementation.
In this section, we discussed ECLAIR design, its prototype implementation, and validation. In the next section we present a cross-layer feedback design guide.
**CROSS-LAYER FEEDBACK DESIGN GUIDE**
**ARCHITECTURE SELECTION**
To ensure correctness and efficiency, one of the primary criteria for selecting a cross-layer feedback architecture is the type of adaptation. The adaptation can be synchronous or asynchronous. In synchronous adaptation, whenever a layer receives some cross-layer feedback, it proceeds with its regular execution only after executing the adaptation required. For example, assume that a network disconnection event is detected and TCP adaptation is required. In the synchronous case, TCP’s regular execution would proceed only after the required adaptation is completed. In the asynchronous case, the control data-structures of TCP would be updated so as to effect a change in TCP behavior while TCP execution is in progress.
ECLAIR is suited for asynchronous adaptation, since it is separate from the existing stack. Cross-layer feedback correctness would be affected if an architecture suitable for asynchronous adaptation were used for synchronous adaptation. For example, cross-layer feedback adaptation, which is to be triggered by information contained in each packet, would fail if an asynchronous architecture like ECLAIR were
used. Furthermore, the efficiency would be reduced if a synchronous architecture were used for an adaptation, which can be done asynchronously.
To highlight the impact on efficiency, we consider RWC, as explained above. In this case, the primary requirement is to apportion application bandwidth, which can be done through asynchronous adaptation. It may not be essential to tune application bandwidth synchronously. In the implementation proposed in [14], each read() of the application invokes the RWC algorithm, that is, the adaptation is synchronous with application execution. This would reduce the application's execution speed and hence its throughput. If an asynchronous architecture (e.g., ECLAIR) is used, the application speed would not be reduced.
Subsequent to the architecture choice based on the type of cross-layer feedback, it is essential to minimize the overheads of the cross-layer feedback implementation. In the following subsections we discuss the design guide for ECLAIR implementations for single and multiple PO cases.
**ECLAIR Usage**
**Single Cross-Layer Optimization** — Separating POs and TLs into a separate cross-layer system, outside the stack, introduces the overhead of additional function calls. Hence, in case only a single cross-layer optimization is planned and the cross-layer system is not to be ported/deployed on multiple operating systems, it would be better to modify the layer code. This would reduce the overhead of multiple function calls between the PO and the TL and hence increase the efficiency of the implementation.
**Multiple Cross-Layer Optimizations** — In case of multiple cross-layer optimizations, POs and TLs should be implemented as specified in the ECLAIR architecture.
If multiple cross-layer optimizations or POs directly access the layers, then the POs are highly dependent on the layers' code. Any change to the layer code will lead to a change in all the POs interacting with that layer. This would lead to the maintainability and portability issues highlighted above. Reducing such dependence is useful for easy maintainability of the cross-layer system. Introducing a TL reduces the dependency between the layer code and the POs. While a TL is dependent on the layer code, the impact of a change to the stack is reduced and localized to the TL's implementation-specific sublayer. Multiple protocol optimization modules that use the TL need not be changed, since the generic tuning sublayer interface remains unchanged. Similarly, when the cross-layer system needs to be ported to another operating system, only the implementation-specific sublayer needs to be changed.
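The split between the two sublayers can be illustrated as follows; the class names and the kernel-interface parameter are our own illustration of the idea, not ECLAIR source code.

```python
from abc import ABC, abstractmethod

class TCPTuningLayer(ABC):
    """Generic tuning sublayer: the interface that POs program against never changes."""

    @abstractmethod
    def get_receiver_window(self, connection_id): ...

    @abstractmethod
    def set_receiver_window(self, connection_id, window): ...


class LinuxTCPTuningLayer(TCPTuningLayer):
    """Implementation-specific sublayer: the only part touched when porting."""

    def __init__(self, kernel_interface):
        self.kernel = kernel_interface   # e.g., a wrapper around a kernel module

    def get_receiver_window(self, connection_id):
        # Would read the connection's control data (window clamp) via the kernel.
        return self.kernel.read_window(connection_id)

    def set_receiver_window(self, connection_id, window):
        self.kernel.write_window(connection_id, window)
```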
In summary, ECLAIR should be used if the cross-layer type is asynchronous. Furthermore, POs and TLs should be implemented, as proposed in ECLAIR, if multiple cross-layer optimizations are to be implemented or if the cross-layer system is to be ported to multiple operating systems.
**CONCLUSION**
Layered protocol stacks are inefficient when deployed in wireless networks. Hence, cross-layer feedback is essential. However, ad hoc cross-layer feedback implementations lead to problems related to easy development/deployment, maintainability, portability, and execution efficiency. Thus, an appropriate architecture for cross-layer feedback is essential.
The key design goals for a cross-layer feedback architecture are rapid prototyping, minimum intrusion, portability, and efficiency. Our architecture ECLAIR satisfies these design goals by splitting the cross-layer system into TLs and an OSS. Our prototype implementation of user feedback (RWC) validated ECLAIR.
Cross-layer feedback can be asynchronous or synchronous. For ensuring the correctness and efficiency of cross-layer feedback, the right architecture needs to be selected. ECLAIR is suitable for asynchronous cross-layer feedback. ECLAIR POs and TLs would introduce some overheads; however, POs and TLs are useful for cross-layer feedback design, implementation, and evolution. ECLAIR and various other architectures proposed in the literature do not solve all the issues related to cross-layer feedback. For example, one of the important issues is cross-layer feedback conflict [5]. ECLAIR provides components that can be used for implementing conflict-resolution mechanisms.
**REFERENCES**
BIOGRAPHIES
VIJAY T. RAISINGHANI ([email protected]) is a Ph.D. student at the School of Information Technology at Indian Institute of Technology (IIT) Bombay. His Ph.D. is sponsored by TATA Infotech Ltd., Mumbai, where he is working as an associate consultant. His research interests include cross-layer feedback and seamless mobility. He received his M.Tech. from the School of Information Technology at IIT Bombay. Additional information is available at http://www.it.iitb.ac.in/~rvijay
SRIDHAR IYER ([email protected]) is presently an associate professor in the School of Information Technology at IIT Bombay. Prior to this, he was a faculty member in the Department of Computer Science and Engineering at IIT Guwahati. His research interests include networking protocols and multimedia tools for distance education, wireless networking, mobile computing frameworks, and some areas of program/protocol verification. He received his B.Tech., M.Tech., and Ph.D. degrees from the Department of Computer Science and Engineering at IIT Bombay. Additional information can be found at http://www.it.iitb.ac.in/~sri
A Selftuning Approach for Improving Composite Schema Matchers
Fabien Duchateau, Remi Coletta, Zohra Bellahsene
To cite this version:
HAL Id: lirmm-00271534
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00271534
Submitted on 9 Apr 2008
A Selftuning Approach for Improving Composite Schema Matchers
Fabien Duchateau and Zohra Bellahsene and Remi Coletta
LIRMM - Université Montpellier 2
161 rue Ada 34000 Montpellier
{firstname.name}@lirmm.fr
Abstract. Most schema matching tools are assembled from multiple match algorithms, each employing a particular technique to improve matching accuracy and to make matching systems extensible and customizable to a specific domain. Recently, it has been pointed out that the main issue is how to select the most suitable match algorithms to execute for a given domain and how to adjust their multiple parameters. The solution provided by current schema matching tools consists in aggregating the results obtained by several match algorithms to improve the quality of the discovered matches. In this article, we present a novel method that replaces this aggregation function and avoids its drawbacks. Unlike other composite matchers, our matching engine uses a decision tree to combine the most appropriate match algorithms. As a first consequence, the performance of the system is improved, since only a subset of the match algorithms from a large library is used. The second advantage is an improvement in the quality of matches: for a given domain, only the most suitable match algorithms are used. Our approach is also able to learn the most appropriate match algorithms for a given domain by relying on expert feedback, and it can self-tune parameters such as thresholds and the performance versus quality ratio.
Keywords: schema matching, machine learning, MatchPlanner, decision tree.
1 Introduction
Schema matching is the task of discovering correspondences between semantically similar elements of two schemas or ontologies [17, 19]. Syntax-based schema matching has been formally discussed in the survey by Rahm and Bernstein [24], extended by Shvaiko and Euzenat in [25] with respect to semantic aspects. Many tools have been designed to fulfill this task, each employing several techniques to improve matching accuracy and extensibility for specific domains. This generation of schema matching tools often shares two features: an aggregation function to combine the match algorithms, and some parameters which need to be tuned. Yet these features suffer from several drawbacks. Because of the aggregation, the tools apply all of their match algorithms to every couple of elements, even though a match algorithm may be very efficient on one set of schemas and quite unsuitable on another, which makes the process time- and resource-consuming. The aggregation function itself can lead to poor matching quality, for example by favouring closely related match algorithms. Besides, manual tuning can prove difficult for the user, who does not know the impact of her changes; setting up thresholds and weights or editing a list of synonyms might not produce the expected results.

* Supported by ANR Research Grant ANR-05-MMSA-0007
In this article, we present a novel method for combining schema matching algorithms which avoids the previously mentioned drawbacks: the aggregation function of the matching tools is replaced by a decision tree, which matches two elements by executing an appropriate plan of match algorithms instead of applying every match algorithm from a library. This results in better performance. Besides, we can learn the most appropriate match algorithms for a given domain thanks to expert feedback. Thus, the second advantage is an improvement in the quality of matches: for a given domain, only the most suitable match algorithms are used. Finally, our approach is also able to tune the system automatically, providing the optimal configuration for a given matching scenario.
Contributions. We designed a flexible and efficient method for the schema matching problem. The main interesting features of our approach are:
- Introducing the notion of planning in the schema matching process by using a decision tree.
- Learning the best strategy for a given domain.
- A tool has been designed based on the planning approach, with self-tuning capability.
- Experiments demonstrate that our tool provides good performance and quality of matches compared with the main matching tools.
Outline. The rest of the paper is organised as follows. Section 2 deals with drawbacks of current matching tools. In section 3, we present our idea for a matching plan based on a decision tree. Section 4 describes the capacity of our approach to learn the best decision tree for a given scenario. Section 5 contains an overview of our prototype. The results of experiments for showing the effectiveness of our approach are presented in section 6. Section 7 covers the related work. Finally, we conclude in section 8.
2 Motivations
Most matching tools are assembled from multiple match algorithms, which are then aggregated to improve matching accuracy and to make matching systems extensible and customizable to a particular domain. Thus the aggregation function can be seen as the kernel of a matching tool. However, as pointed out in [16], the main issues are how to select and combine the most suitable match algorithms to execute for a given domain and how to adjust the multiple knobs (e.g. threshold, performance, quality, etc.). Two other important issues can be added: how to take the user's expertise into account, and the precision versus recall problem.
2.1 A Brutal Aggregation Function
Lots of semantic similarity measures have been proposed in the context of schema matching (refer to [24] for a survey of the different measures), and none of these measures outperforms all the others on all existing benchmarks. Therefore, most matching tools [1, 8, 9] aggregate the results obtained by several similarity measures to improve the quality of discovered matches. However, the aggregation function entails major drawbacks in three respects.
**Performance.** A first drawback is that useless measures are applied, involving costly time and resource consumption. Indeed, consider matching two schemas with \( n \) and \( m \) elements with a matcher that uses \( k \) measures: \( n \times m \times k \) similarities will be computed and aggregated. Yet there are many cases in which applying all \( k \) measures is not necessary. The following example shows that even a reliable match algorithm, like the use of a dictionary, may fail to discover simple matches. Consider the two elements *name* and *named*. Applying a string matching technique like 3-grams to them yields a similarity value of 0.5. On the contrary, a dictionary technique (based on Wordnet\(^1\) for example) would produce a very low similarity value, since no relationship between the two elements can be inferred. Thus, some techniques can be appropriate in some cases and prove totally useless in others; applying all measures to every couple of elements involves costly time and resource consumption.
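As an illustration of the kind of string measure involved, the sketch below computes a 3-gram similarity between two labels in C++. It is only a generic n-gram/Jaccard formulation, not the Second String implementation used here (whose normalization evidently differs, since 0.5 is reported for *name*/*named* where plain Jaccard over trigrams gives 2/3); the function names are illustrative.

```cpp
// Generic 3-gram similarity sketch (illustrative only; the Second String
// implementation and its normalization may differ).
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <set>
#include <string>

// Collect the set of character trigrams of a label.
static std::set<std::string> trigrams(const std::string &s) {
    std::set<std::string> grams;
    for (std::size_t i = 0; i + 3 <= s.size(); ++i)
        grams.insert(s.substr(i, 3));
    return grams;
}

// Jaccard similarity over trigram sets: |A ∩ B| / |A ∪ B|.
static double trigramSimilarity(const std::string &a, const std::string &b) {
    std::set<std::string> ga = trigrams(a), gb = trigrams(b);
    if (ga.empty() && gb.empty()) return a == b ? 1.0 : 0.0;
    std::set<std::string> common;
    std::set_intersection(ga.begin(), ga.end(), gb.begin(), gb.end(),
                          std::inserter(common, common.begin()));
    double unionSize = static_cast<double>(ga.size() + gb.size() - common.size());
    return common.size() / unionSize;
}

int main() {
    // "name"/"named" share most of their trigrams, so a string measure rates
    // them highly, while a dictionary lookup finds no relation between them.
    std::cout << trigramSimilarity("name", "named") << "\n";    // 2/3 with this normalization
    std::cout << trigramSimilarity("author", "writer") << "\n"; // 0: no shared trigram
    return 0;
}
```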
**Quality.** The aggregation function may negatively influence the quality. First, it might give more weight to closely related match algorithms: using several string matching techniques between two labels that are both the polysemous word *mouse* leads to a high similarity value, in spite of other techniques, like context-based ones, which could have identified that one label denotes a computer device and the other an animal. Besides, the quality obtained through aggregation does not necessarily increase when the number of similarity measures grows: matching *mouse* and *mouse* with one or two string matching algorithms already results in a high similarity value, so using more string matching algorithms would not have an interesting impact.
**Flexibility.** The aggregation function often requires manual tuning (thresholds, weights, etc.) of the way the measures are combined. This does not make it very flexible with respect to new similarity measure contributions. For instance, if a new measure is considered reliable for a specific domain (one based on an ontology, for example), how could an expert easily plug it into the aggregation?
2.2 Too Many Parameters for the User
The user often has to tune the matching tool manually: edit a list of synonyms, set up thresholds or weights, etc. This task can prove tiresome or difficult, and the less the expert has to tune, the easier the matching system is to use. Some tools like eTuner [16] have been designed to tune schema matching tools automatically: a given matching tool (e.g. COMA++ [1] or Similarity Flooding [18]) is applied against a set of expert matches in several configurations until an optimal one is discovered. However, eTuner heavily relies on the capabilities of the matching tool, especially its available match algorithms and its aggregation function. Besides, it does not improve performance, since all match algorithms are still computed for every couple of elements.

---

\(^1\) http://wordnet.princeton.edu
2.3 Is Expert Feedback Useless?
Obviously not. Yet many schema matching tools do not take advantage of this expert feedback: the expert is often asked to validate the matches through a GUI, but this input is rarely used later on. We believe that it should be re-injected into the matching process to improve the results.
2.4 Recall vs Precision
In [14], the author underlines the recall versus precision problem. Matching tools like COMA++ focus on better precision, but this does not seem to be the best choice for an end-user in terms of post-match effort. Consider two schemas containing 100 elements each: there are potentially 10,000 matching possibilities (considering only 1:1 matchings), and the number of relevant matches is 25. Assume a first matcher discovers 10 relevant matches and achieves 100% precision; the expert would then have to find the 15 missing matches manually among 8,100 remaining possibilities. On the contrary, another matcher returns a set of 300 matches and achieves 100% recall. As all the relevant matches have been discovered, the expert only has to remove the 275 irrelevant matches among the 300. Thus favouring recall seems the more appropriate choice. Note also that, technically speaking, it is easier to validate (or reject) a discovered mapping than to browse two large schemas manually in order to add new matches.
2.5 Another Way to Design Matching Tools?
To address the drawbacks mentioned above, our approach replaces the current kernel of matching tools with a decision tree, and it favours recall to reduce the user's post-match effort. It also relies on expert feedback, which is re-injected into a machine learning process to improve the decision tree. Finally, we avoid a tiresome tuning task by automatically setting parameter values during the machine learning process.
3 A Decision Tree Based Kernel
This section covers the use of a decision tree as the kernel of matching tools, in replacement of the aggregation function. We first explain the notion of decision tree, then describe the features that make it interesting in the matching context.
3.1 Decision Trees
The idea is to determine and apply, for a matching scenario, the most suitable matching techniques by means of a decision tree [22]. In our context, a decision tree is a tree whose internal nodes are similarity measures and whose edges stand for conditions on the result of those measures; the decision tree therefore contains plans (i.e. ordered sequences) of match algorithms. We use well-known match algorithms from Second String\(^2\), and we add the neighbour context from [9], an annotation-based measure, a restriction measure and some dictionary-based techniques. The similarity value computed by a measure must satisfy the condition (continuous or discrete) on an edge to move to the next node. Thus, when matching two schema elements with our decision tree, the first similarity measure, at the root node, is computed and returns a similarity value. According to this value, the edge whose condition is satisfied leads to the next tree node. This process iterates until a leaf node is reached, indicating whether the two elements should match or not. The final similarity value between two elements is the last one computed, since the previous similarity values are only computed to find the most appropriate measures.
Figure 1 illustrates two examples of decision tree. The first one (1(a)) focuses on quality and includes some costly measures (context, dictionary); the tree depicted in figure 1(b) aims at discovering some matches quickly, mainly using string matching measures. Now let us see how the two couples of elements \((\text{author, writer})\) and \((\text{name, named})\) are matched with the quality-based decision tree (figure 1(a)). \((\text{author, writer})\) is first matched by equality, which returns 0; then the label sum size is computed (value of 12), followed by the 3-grams algorithm. The similarity value obtained with 3-grams is low (0.11), so the dictionary technique is finally used and discovers a synonym relationship. On the contrary, \((\text{name, named})\) is matched using equality, then label sum size, and finally 3-grams, which provides a similarity value (0.5) sufficient to stop the process. Thus only 7 match algorithm calls have been made (4 for \((\text{author, writer})\) and 3 for \((\text{name, named})\)) instead of 12, the number needed if all distinct match algorithms from the decision tree had been applied to both couples.
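To make the traversal concrete, here is a minimal C++ sketch of how a decision tree whose internal nodes are similarity measures and whose edges carry threshold conditions could be evaluated for a couple of labels. The types and names are illustrative, not the actual MatchPlanner classes, and real trees may carry more than two branches per node.

```cpp
// Illustrative decision-tree evaluation: internal nodes hold a similarity
// measure, edges carry a threshold condition, leaves hold the verdict.
// These are not MatchPlanner's actual classes.
#include <functional>
#include <memory>
#include <string>

using Measure = std::function<double(const std::string &, const std::string &)>;

struct Node {
    bool isLeaf = false;
    bool decision = false;             // used only by leaves: match / no match
    Measure measure;                   // used only by internal nodes
    double threshold = 0.0;
    std::unique_ptr<Node> lowBranch;   // followed when value <  threshold
    std::unique_ptr<Node> highBranch;  // followed when value >= threshold
};

// Walk the tree for one couple of labels. Only the measures on the followed
// path are computed, which is where the performance gain comes from.
bool matches(const Node *n, const std::string &a, const std::string &b) {
    while (!n->isLeaf) {
        double v = n->measure(a, b);
        n = (v >= n->threshold) ? n->highBranch.get() : n->lowBranch.get();
    }
    return n->decision;
}
```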
3.2 Advantages of this New Kernel
Decision trees appear especially well suited to the task of combining similarity measures. A decision tree can handle both numerical (3-gram, Levenshtein, ...) and categorical (data restriction, being synonyms, ...) attributes (i.e. measures, in our context). In addition, decision trees are robust to irrelevant attributes, so the quality of the assessment produced by a decision tree only improves as the number of similarity measures taken as input grows. We can also point out that the tree structure avoids testing some potentially costly similarity measures. Moreover, a learned decision tree is computed only once, and the learning process is performed in less than 1 second.
Another advantage of our approach deals with the **performance versus quality** aspect. With traditional matching tools, it is very difficult to favour one aspect by tuning some parameters or the aggregation function. In our approach, a decision tree is linked to a performance/quality ratio: a user can quickly discover some matches or emphasize matching quality simply by selecting the decision tree which fulfills her needs.

\(^2\) http://secondstring.sourceforge.net/
We also note that the threshold is specific to each similarity measure, so it is easy and meaningful to adjust the threshold of a given similarity measure. On the contrary, other approaches often have a global threshold, and tuning its value amounts to favouring the similarity measures whose values are homogeneously distributed.
Such a kernel suffers from the fact that a given decision tree might not be appropriate for all scenarios. However, we show in the next section that building a decision tree for a given scenario is an automatic process, thanks to machine learning methods.
4 Learning a Decision Tree in the Schema Matching Context
Although the user validates some of the discovered matches with most matching tools, this expertise is rarely exploited afterwards. Our approach lets the expert validate some matches, which results in the learning of new decision trees thanks to the C4.5 algorithm. Another motivation is the number of parameters the expert has to tune: we aim at reducing it by providing several Pareto-optimal trees, from which the expert can easily select the most appropriate one according to her needs (performance or quality).
4.1 Learning New Decision Trees with C4.5
We propose to formulate the problem of determining, for a given schema matching scenario, the most appropriate decision tree as a machine learning task. Our approach is based on C4.5 [23], since it is able to combine both continuous and discrete attributes. It also ensures other good properties: a high classification score, which results in good matching quality, and a minimal height of the generated decision tree, implying better performance. The machine learning classification problem consists in predicting the class of an object from a set of its attributes. In the context of schema matching, an object is a couple of elements and its class represents its validity in terms of mapping relevance; the match algorithms are the attributes of this couple. As training data, we use the matches validated or rejected by the expert. The C4.5 algorithm is briefly described in Algorithm 1.
**Algorithm 1:** Learning decision tree with C4.5 algorithm
C ← all couples classified by the expert
foreach similarity measure m do
    find the condition cond over m that maximizes the information gain g (i.e. that allows a maximum of couples of C to be correctly classified with condition cond)
m_best ← the measure m that maximizes g
create a decision node n that splits on m_best
recurse on the subsets of C obtained by splitting on m_best and add those nodes as children of node n
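The split-selection step of Algorithm 1 can be sketched as follows for a single numerical measure: scan the candidate thresholds and keep the one with the highest information gain over the expert-labelled couples. This is a generic C4.5-style computation under assumed types (Couple, bestSplit), not the paper's implementation, and it ignores refinements such as the gain ratio.

```cpp
// Split selection for one numerical similarity measure, in the spirit of
// Algorithm 1: pick the threshold with the highest information gain over the
// expert-labelled couples. Generic C4.5-style computation, illustrative types.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Couple { double value; bool relevant; };  // measure value + expert label

static double entropy(int pos, int neg) {
    if (pos == 0 || neg == 0) return 0.0;
    double n = pos + neg, p = pos / n, q = neg / n;
    return -p * std::log2(p) - q * std::log2(q);
}

// Returns (best threshold, its information gain) for the split "value >= t".
std::pair<double, double> bestSplit(std::vector<Couple> couples) {
    int pos = 0, neg = 0;
    for (const Couple &c : couples) (c.relevant ? pos : neg)++;
    double base = entropy(pos, neg), n = couples.size();

    std::sort(couples.begin(), couples.end(),
              [](const Couple &a, const Couple &b) { return a.value < b.value; });

    double bestGain = 0.0, bestThreshold = 0.0;
    int posBelow = 0, negBelow = 0;
    for (std::size_t i = 0; i + 1 < couples.size(); ++i) {
        (couples[i].relevant ? posBelow : negBelow)++;
        if (couples[i].value == couples[i + 1].value) continue;  // no boundary here
        double below = i + 1, above = n - below;
        double gain = base
                    - (below / n) * entropy(posBelow, negBelow)
                    - (above / n) * entropy(pos - posBelow, neg - negBelow);
        if (gain > bestGain) {
            bestGain = gain;
            bestThreshold = (couples[i].value + couples[i + 1].value) / 2.0;
        }
    }
    return {bestThreshold, bestGain};
}
```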
Machine learning techniques have already been used in the context of reconciling schemas. In [5], the authors propose a fully machine-learning-based approach called LSD, in which most of the computational effort is spent on classifier discovery. In contrast, our approach reuses any existing similarity measures and focuses on combining them. As a consequence, we avoid spending too much time in the learning process, and we easily capture any previous and future work on similarity measure definition. The C4.5 technique is flexible, robust and self-tuned for combining several similarity measures.
4.2 Pareto
As the learner generates many different decision trees, only those that provide an advantage (either on quality or on performance) are kept. To select them, we apply Pareto optimality [13].
**Pareto optimality:** Given a set of learned decision trees, a move from one tuning to another that makes at least one decision tree better off without making any other decision tree worse off is called a Pareto improvement. A tuning is Pareto optimal when no further Pareto improvement can be made.
**Pareto frontier:** For a given system, the Pareto frontier is the set of all tunings which are Pareto optimal. Figure 2 shows some learned decision trees (labelled points A to G); the line represents the Pareto frontier. The optimal decision trees (A, B and C) are kept, while the other trees (D, E, F, and G) are discarded since each of them is dominated by at least one other point. For instance, point E is dominated by point A on performance and by point B on quality. We also notice that tree A ensures good performance but low quality, since it mainly uses string matching measures; it could be the tree depicted in figure 1(b). On the contrary, tree E emphasizes quality to the detriment of performance, since it includes some costly measures such as dictionary-based or context ones; an example of such a tree is depicted in figure 1(a).
Fig. 2. Pareto frontier to select optimal decision trees
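A minimal sketch of the Pareto filtering itself, assuming each learned tree has been scored on a performance axis and a quality axis (higher is better on both); the Candidate type and function names are illustrative, not the paper's code.

```cpp
// Pareto filtering of learned decision trees, each scored on a performance
// axis and a quality axis (higher is better on both). Illustrative types.
#include <vector>

struct Candidate { double performance; double quality; };

static bool dominates(const Candidate &a, const Candidate &b) {
    return a.performance >= b.performance && a.quality >= b.quality &&
           (a.performance > b.performance || a.quality > b.quality);
}

// Keep only the candidates that no other candidate dominates.
std::vector<Candidate> paretoFrontier(const std::vector<Candidate> &all) {
    std::vector<Candidate> frontier;
    for (const Candidate &c : all) {
        bool dominated = false;
        for (const Candidate &other : all)
            if (dominates(other, c)) { dominated = true; break; }
        if (!dominated) frontier.push_back(c);
    }
    return frontier;
}
```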
4.3 Discussion
A constraint of the learning approach is the misclassification rate. The learning process tends to decrease the number of errors, i.e. it aims at classifying correctly. However, when the number of relevant matches is not significant w.r.t. the total number of couples, the learning process generates a decision tree which always returns false, thus minimizing the misclassification rate. To solve this problem, we use cost-sensitive learning [4], which lets us assign a greater penalty to errors on the relevant matches than on the irrelevant ones.
Contrary to other matching tools, our approach is flexible and new measures can be integrated easily: the learner takes a new measure into account by computing its similarity values between all couples of elements, and the measure is then placed (or not) in the decision tree according to its utility. In short, there is no need to manually update the weights of an aggregation function or to set up thresholds.
5 Prototype
In order to demonstrate the benefits of our decision tree approach, a prototype named MatchPlanner has been implemented in Java. This section briefly describes its architecture and how it works.
5.1 Overview of the Architecture
Figure 3 shows the architecture of our prototype, with two main parts: (i) the schema matcher and (ii) the learner. Both parts share a main component, the decision trees, since the learner is in charge of generating them while the matcher uses them as a kernel.
As a schema matcher, it takes as input schemas and a decision tree. Note that the schemas are also stored in the knowledge base (KB). This KB can be seen as a repository for schemas and expert matches. The matching process mainly relies on the decision tree (see section 3) to generate a list of matches. The expert might decide to validate some discovered matches, which are then stored in the KB and used by the learner.
The learner part aims at generating one or more decision trees from information stored in the KB (schemas and some expert matches). It is based on the C4.5 machine learning algorithm as described in section 4. The learned decision trees can then be used by the matching process.
Fig. 3. Architecture of MatchPlanner
5.2 How MatchPlanner works
Here we briefly describe how the whole process is performed in our prototype. First, the user selects a decision tree (some default trees are provided) and quickly discovers a few matches. An expert then validates some of these matches, which are stored in the KB. Thanks to this expert knowledge, new decision trees with different performance and quality ratios are generated by the learner. One of these trees can then be
used to improve the matching quality and performance, and the discovered matches can be validated again. This process can iterate until the user is satisfied with the results.
6 Experiments
In this section, we demonstrate that MatchPlanner provides good results when compared with other schema matching tools reputed to provide acceptable matching quality: COMA++ and Similarity Flooding. COMA++ uses 17 similarity measures to build a matrix between every couple of elements, then aggregates the similarity values and extracts matches. Similarity Flooding converts the input schemas into graphs, discovers some initial matchings between graph elements thanks to string matching measures, and then refines the matchings through a propagation process. To evaluate and compare MatchPlanner with these two tools, we first look at the quality aspect, which is crucial in schema matching. Then we show that MatchPlanner ensures good performance, an important aspect when dealing with large and/or numerous schemas.
6.1 Comparison with Other Matching Tools
This part shows the quality provided by our approach. To compare the matching quality of the three matching tools, we use measures that are common in the literature, namely precision, recall and F-measure. Precision is the proportion of relevant matches among the extracted matches; recall is the proportion of the relevant matches that are extracted; F-measure is a tradeoff between precision and recall. The matching tools were executed on 5 domains: book and university, widely used in the literature; courses from Thalia [15]; travel, extracted from air-company web forms; and person. Table 1 shows the resulting matching quality, with $P$, $R$ and $F$ standing for precision, recall and F-measure respectively.
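Under the standard balanced F-measure (which is consistent with the values reported in Table 1), these measures are:
\[
P = \frac{|\text{relevant} \cap \text{extracted}|}{|\text{extracted}|}, \qquad
R = \frac{|\text{relevant} \cap \text{extracted}|}{|\text{relevant}|}, \qquad
F = \frac{2\,P\,R}{P + R}.
\]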
<table>
  <thead>
    <tr>
      <th rowspan="2">Domain</th>
      <th colspan="3">MatchPlanner</th>
      <th colspan="3">COMA++</th>
      <th colspan="3">SF</th>
    </tr>
    <tr>
      <th>$P$</th><th>$R$</th><th>$F$</th>
      <th>$P$</th><th>$R$</th><th>$F$</th>
      <th>$P$</th><th>$R$</th><th>$F$</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>book</td><td>1.0</td><td>1.0</td><td>1.0</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>university</td><td>0.86</td><td>0.67</td><td>0.75</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>thalia</td><td>0.83</td><td>0.67</td><td>0.74</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>travel</td><td>0.57</td><td>0.8</td><td>0.67</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>person</td><td>0.58</td><td>1.0</td><td>0.73</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
  </tbody>
</table>
Table 1. Matching quality of MatchPlanner, COMA++ and Similarity Flooding on the different domains.
With the book domain, MatchPlanner achieves an F-measure of 1.0. COMA++ discovers only one relevant mapping, resulting in a low F-measure; this is explained by the fact that COMA++'s algorithms are mainly based on the aggregation of string matching measures and a list of synonyms. Similarity Flooding obtains average quality: it discovers most of the relevant matches (recall of 0.75), but also many irrelevant ones (precision of 0.6).
The **university** domain has been widely used in the literature [7]. Its two schemas describe an Australian and a US university, and they contain neither data types nor annotations; we show that this lack of information does not affect MatchPlanner's matching quality. MatchPlanner obtains the best results, with an F-measure of 0.75, and COMA++ also reaches an acceptable 0.72 F-measure. Similarity Flooding performs less well, reaching a 0.58 F-measure, probably because of the small schema size, which limits the efficiency of its propagation. Note that MatchPlanner can obtain a higher recall with a different learned decision tree, but this heavily decreases precision.
The **Thalia** domain is difficult to match: although half of the expert matches involve similar labels, it is very difficult to discover the other matches. COMA++ discovers only the matches with similar labels, thanks to its string matching measures, but its 1.0 precision still yields an F-measure of 0.69. SF succeeds in discovering more relevant matches thanks to the propagation, but it obtains a lower F-measure. MatchPlanner does not achieve a high recall on this domain because most similarity measures do not enable the other relevant matches to be discovered; consequently, the learned decision tree does not change when the expert validates more matches. However, we obtain the highest F-measure (0.74).
The **travel** domain is also a difficult matching challenge: its schemas have been extracted from various air-booking websites. COMA++ is not able to discover any mapping. Both SF and our prototype achieve a 0.67 F-measure. Contrary to SF, MatchPlanner emphasizes recall (0.8) to the detriment of precision (0.57); indeed, as explained in section 2, favouring recall seems the more appropriate choice to reduce the user's post-match effort.
The **person** domain is composed of two schemas with strongly heterogeneous labels. Although COMA++ obtains the best F-measure (0.86), MatchPlanner is the only matching tool to discover all the relevant matches (recall of 1.0). As for SF, it achieves the best precision but misses half of the relevant matches. With the results of COMA++ or SF, the user has to browse the input schemas manually to discover the rest of the matches; with MatchPlanner her effort is reduced, since she only has to remove some irrelevant discovered matches.
### 6.2 Performance Aspect
Finally, we consider time performance. It is important to point out that MatchPlanner performs as quickly as the other matching tools. Table 2 shows the running times of MatchPlanner, COMA++ and Similarity Flooding. For COMA++, we add the time spent parsing the schemas and storing their information in the database to the matching time, since Similarity Flooding does not separate these two phases; for MatchPlanner, we add the time needed to generate a new decision tree to the time of the matching phase. On the 5 domains, the three matching tools perform well: they need less than 2 seconds to provide the set of matches. SF is fast, but to the detriment of quality, as seen in section 6.1.
Table 2. Time performance of MatchPlanner, COMA++ and Similarity Flooding
<table>
<thead>
<tr>
<th>Domain</th>
<th>MP</th>
<th>COMA++</th>
<th>SF</th>
</tr>
</thead>
<tbody>
<tr>
<td>book</td>
<td>≤1s (60/280)</td>
<td>≤1s</td>
<td>≤1s</td>
</tr>
<tr>
<td>university</td>
<td>≤1s (214/2008)</td>
<td>2s</td>
<td>≤1s</td>
</tr>
<tr>
<td>thalia</td>
<td>2s (847/867)</td>
<td>2s</td>
<td>≤1s</td>
</tr>
<tr>
<td>travel</td>
<td>≤1s (222/520)</td>
<td>≤1s</td>
<td>≤1s</td>
</tr>
<tr>
<td>person</td>
<td>2s (191/360)</td>
<td>2s</td>
<td>≤1s</td>
</tr>
</tbody>
</table>
Table 2 gives an additional detail for MatchPlanner: the ratio in parentheses represents the resources spared thanks to the decision tree. The first number is the number of calls to match algorithms actually performed to match the schemas, while the second is the total number of calls that would have been made if all distinct match algorithms from the decision tree had been used. For example, on the book domain, MatchPlanner computed 60 similarity values, whereas using every match algorithm from the tree would have required 280 calls. In some cases fewer than half of the match algorithm calls are executed (e.g. the book and university domains), but the gain can also be minimal (thalia domain) because of the structure of the learned decision tree. This shows that MatchPlanner is able to save resources by computing only the appropriate match algorithms for a given couple of schema elements.
7 Related Work
This section covers related work in schema/ontology matching. As many approaches have been proposed in these domains [2, 12, 17, 20, 21], we only detail the works closest to MatchPlanner.
Similarity Flooding [18] has been used with relational, RDF and XML schemas. These schemas are initially converted into labeled graphs, and the SF approach uses fixpoint computation to determine correspondences of 1:1 local and m:n global cardinality between corresponding nodes of the graphs. The algorithm has been implemented as a hybrid matcher, in combination with a name matcher based on string comparisons: the prototype first performs an initial element-level name mapping and then feeds these matches to the structural SF matcher. The similarity between two elements is increased if the algorithm finds some similarity between the elements related to that pair.
COMA++ [1] is a generic, composite matcher with very effective match results. It can process relational, XML and RDF schemas as well as ontologies, and internally it converts the input schemas into trees for structural matching. The similarity of pairs of elements is stored in a similarity matrix. At present, it uses 17 element-level match algorithms with a user-defined synonym and abbreviation table. For each source element, the elements with similarity higher than a threshold are displayed to the user for final selection. MatchPlanner does not use the whole set of match algorithms, but it is able to learn their best combination, and expert feedback is re-injected into the process.
AUTOMATCH [3] is the successor of AUTOPLEX; it uses schema instance data and machine learning techniques to find possible matches between two schemas. It explicitly uses a Naive Bayes algorithm to analyse the input instances of relational schema fields against a previously built global schema. The match result consists of 1:1 correspondences with global cardinality. The major drawback of this work is its reliance on data instances: in most cases they are not available, for various reasons (security, absence, etc.), and they can contain errors or miss information, leading to a decrease in quality. Although this approach is interesting from the machine learning point of view, it seems risky to rely on a single matching technique.
**eTuner** [16] aims at tuning schema matching tools. It proceeds as follows: a given matching tool (e.g., COMA++ or Similarity Flooding) is applied against a set of expert matches until an optimal configuration is found for the matching tool. However, eTuner heavily relies on the capabilities of the matching tool, especially for the available match algorithms and its aggregation function. On the contrary, MatchPlanner is aimed at learning the best combination of a subset of match algorithms (not schema tools). Moreover, it is able to self-tune important features like the performance and quality.
**GLUE** [6] is the extended version of LSD; it creates ontology/taxonomy mappings using machine learning techniques. The system takes as input a set of instances along with the taxonomies, and it classifies and associates the classes of instances from the source taxonomy to the target one and vice versa. It uses a composite approach, as in LSD, so the computational effort is spent on classifier discovery. In contrast, our approach reuses any existing similarity measures and focuses on combining them; furthermore, we avoid wasting time in the learning process, and our planner tool can easily integrate additional matching techniques.
Decision trees have been used in ontology matching for discovering hidden matches among entities [11]; that approach is based on learning rules for matching terms in Wordnet. In another work [10], decision trees have been used to learn parameters for a semi-automatic ontology alignment method, with the aim of optimizing the alignment process and supporting the user in creating the training examples. However, in neither case were decision trees used to choose the best match algorithms. Moreover, our method is fully automated and provides self-tuning capability.
### 8 Conclusion and Future Work
In this paper, we presented a flexible and efficient approach for the next generation of schema matching tools. To replace the aggregation function, we propose to use a decision tree as the new kernel of schema matching tools: the main idea is to build a matching plan based on learning techniques. Our approach thus combines the most appropriate match algorithms for a given domain instead of applying the whole set, which improves performance; another advantage concerns matching quality. Finally, MatchPlanner is enhanced with self-tuning capability, since the learned decision trees focus either on quality, on performance, or on a tradeoff between the two. As future work, we plan to enrich the KB by adding more schemas and expert mappings, resulting in a more robust tool. We also intend to enrich the discovered matches with their relationship types, and to address complex mappings by relying on the most reliable measures.
References
Dynamic Programming
Overview of dynamic programming
DP algorithm for pairwise alignment
References:
- Cormen, et al, Introduction to Algorithms [CIS 315 text]
- Giegerich & Wheeler, “Pairwise Sequence Alignment” (PDF on class web site)
Recursive Problems
- Self-similarity
- subproblems are smaller instances of main problem
- Base case
- smallest instance of the problem
- Recursive (inductive) case
- how to solve large problem as combination of smaller ones
- Examples from math:
- inductive reasoning
- recurrence relations
Example Recursive Problem
- Tree and graph problems often have a natural recursive definition
- tree = leaf(X) or node(L,R) where L, R are trees
- Algorithm to search for leaf A:
- def search(T,A)
- if T = leaf(X) /* base case? */
- print “yes” if X = A
- else /* T = tree(L,R) */
- search(L) /* recurse left */
- search(R) /* recurse right */
- end
- end
Efficiency of Recursive Algorithms
- In other problems, a top-down recursive algorithm is not a good choice
- def fib(i)
-   if (i == 1 || i == 2)
-     return 1
-   else
-     return fib(i-1) + fib(i-2)
-   end
- end
- O(2^n) steps to compute fib(n)
Efficiency (cont’d)
- What distinguishes the binary tree search from the evaluation of the Fibonacci function?
- Both involve binary trees...
- In the Fibonacci function the same value is computed more than once
[Figure: recursion tree for the evaluation of fib(4)]
Note: two calls to fib(2) in the evaluation of fib(4)
Efficiency (cont’d)
- That wasn’t too bad, but the program gets out of hand for larger values of n
[Figure: recursion tree for the evaluation of fib(6)]
Now fib(2) is called 5 times
Dynamic Programming
- Dynamic programming is a general technique for solving combinatorial optimization problems
- A problem is a candidate for solution with DP if:
- it has a recursive formulation (e.g. defined by a recurrence relation)
- it has overlapping subproblems (subproblems used in more than one problem)
- it has optimal substructure: the optimal solution can be computed by finding the optimal solution of its subproblems
Dynamic Programming (cont’d)
- Dynamic programming involves
- a recursive definition of the function to be optimized
- recurrence for defining similar subproblems
- base case
- use of a data structure (“table”) to hold results of recursive calls
- Abstract: if a subproblem result is not in the table, evaluate, store result to be reused for the next call
- Practice: systematic (bottom-up) construction of table
Dynamic Programming (cont’d)
- To solve a problem with DP, locate the table cell that corresponds to the solution
- Fill in the values of the necessary subproblems
- Return the value of the solution cell
- Example: fib(7)
- table = vector of n ≥ 7 elements
- solution is the value of V[7]
- value of fib(7) needs table entries for fib(6) and fib(5) so make recursive calls
- eventually fills in table entries V[1]...V[7]
- but note that fib(n) called only once for each n
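As a concrete counterpart to the fib(7) walk-through above, here is a minimal C++ sketch of the top-down, table-backed evaluation; the table V plays the role described in the slides, and 0 is used as the "not yet computed" marker.

```cpp
// Top-down, table-backed Fibonacci as described above: each fib(i) is
// computed once, stored in V, and reused, so fib(n) costs O(n) calls
// instead of O(2^n).
#include <iostream>
#include <vector>

long long fibMemo(int i, std::vector<long long> &V) {
    if (V[i] != 0) return V[i];                 // already in the table
    V[i] = (i <= 2) ? 1 : fibMemo(i - 1, V) + fibMemo(i - 2, V);
    return V[i];
}

int main() {
    int n = 7;
    std::vector<long long> V(n + 1, 0);         // V[1..n]; 0 means "not yet computed"
    std::cout << "fib(" << n << ") = " << fibMemo(n, V) << "\n";  // prints 13
    return 0;
}
```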
Traceback
- For an optimization problem, the table entries give the value of the optimum
- To show how the solution was obtained, augment the table with traceback information
- traceback = reference to subproblem(s) used to compute a cell entry
- A traceback path from the solution cell to the base case will produce the steps used to construct the solution
Traceback (cont’d)
- Example: text formatting
- The goal is to figure out where to place line breaks in order to minimize the “raggedness” of a paragraph
- cost = function of amount of white space at ends of lines
- V[i] = cost of paragraph if word i starts a line
- bottom-up solution: compute V[n]..V[1]
- traceback from V[1] shows where to place the line breaks...
Aside: Operations Research
- Where does the “program” in “dynamic programming” come from?
- In operations research (OR), a program is a table that represents a solution to an optimization problem
- Different types of programs involve differing sets of constraints on the table entries
- linear programming
- integer programming
- ...
Pairwise Alignment
- From the previous lecture:
- an alignment is a mapping between characters of two strings
- each character in a string corresponds either to a character in the other string or to a space
- S: AA-TACCG
- T: AAACA-CG
- The original strings do not have to be the same length
- After a (global) alignment, the strings are the same length
Pairwise Alignment (cont’d)
- Each element in the mapping can be assigned a cost:
- 0 if $S_i = T_i$ (match)
- $m$ if $S_i \neq T_i$ (mismatch cost)
- $g$ if $S_i$ = '-' or $T_i$ = '-' (gap cost)
- The cost of an alignment is the sum of the costs of each position
- Example:
- S: AA-TACCG
- T: AAACA-CG
- cost = $m + 2g$
Pairwise Alignment (cont’d)
- A note about gaps:
- Two space characters are never aligned with each other
- A gap is a string of 1 or more consecutive space characters
- Gap costs can be linear (as in the previous slide) or affine:
- $p = g + (n-1)e$ for a gap of length $n$
- where $g$ is a “gap opening” cost and $e$ is a “gap extension” cost
- For the rest of this lecture (and on the project) we’ll use linear gap penalties
Dot Matrix
- A dot matrix can be used to produce an alignment
- label columns with chars from one string, rows with chars from the other string
- enter a dot where row, column label are the same
- A path from [1,1] to [n,m] defines an alignment
diagonal = mis(match)
down = delete
right = insert
DP Algorithm for Pairwise Alignment
- The dot matrix is the basis for the recursive formulation of the pairwise alignment problem
- The dynamic programming table will be a matrix D
- D[i,j] = minimum cost of aligning the first i chars of S (1 ≤ i ≤ n) and the first j chars of T (1 ≤ j ≤ m)
- D[i,0] is the cost of aligning S1..i with 0 chars of T, i.e. inserting i spaces in front of T
- D[0,j] is defined similarly (j spaces in front of S)
- D[i,0] and D[0,j] define the base cases (there is only one way to construct each alignment)
DP Alignment (cont’d)
- Base case:
- D[0,0] = 0
- D[i,0] = i*g for 1 ≤ i ≤ n
- D[0,j] = j*g for 1 ≤ j ≤ m
- Recurrence: D[i,j] is the minimum of
- D[i-1,j-1] (Si = Tj)
- D[i-1,j-1] + m (Si ≠ Tj)
- D[i-1,j] + g (delete, i.e. align Si with '-' in T)
- D[i,j-1] + g (insert, i.e. align Tj with '-' in S)
Optimal Substructure
- For this formulation to work, we need to know D[i,j] can be decomposed into solutions of three adjoining cells
- Optimal substructure requirement of DP:
The solution of D[i,j] must be a function of optimal solutions of its subproblems
Optimal Substructure (cont’d)
- Formal proof: see Gusfield, 1997
- Using the idea of an edit transcript, show $D[i,j]$ must be either
- $D[i-1,j-1]$
- $D[i-1,j-1] + m$
- $D[i-1,j] + g$
- $D[i,j-1] + g$
(i.e. it is not necessary to look elsewhere in the table)
Method
- To align strings S and T:
- allocate an array $D$ with $n+1$ rows and $m+1$ columns
- initialize the top row and left column using the values defined for the base case of the recurrence
- fill in the interior rows using the recurrence:
$$
D[i,j] = \min\bigl( D[i-1,j] + g,\; D[i,j-1] + g,\; D[i-1,j-1] + c \bigr)
$$
where $c = 0$ if $S_i = T_j$ and $c = m$ otherwise
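A minimal C++ sketch of this fill, under the linear gap model used in this lecture (variable and function names are illustrative):

```cpp
// Fill the (n+1) x (m+1) table D for strings S and T, with linear gap cost g
// and mismatch cost m, following the recurrence above.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

std::vector<std::vector<double>> fillTable(const std::string &S, const std::string &T,
                                           double g, double m) {
    std::size_t n = S.size(), mlen = T.size();
    std::vector<std::vector<double>> D(n + 1, std::vector<double>(mlen + 1, 0.0));
    for (std::size_t i = 1; i <= n; ++i) D[i][0] = i * g;      // base case: spaces in T
    for (std::size_t j = 1; j <= mlen; ++j) D[0][j] = j * g;   // base case: spaces in S
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= mlen; ++j) {
            double c = (S[i - 1] == T[j - 1]) ? 0.0 : m;       // (mis)match cost
            D[i][j] = std::min({D[i - 1][j - 1] + c,           // diagonal
                                D[i - 1][j] + g,               // delete (space in T)
                                D[i][j - 1] + g});             // insert (space in S)
        }
    return D;                                                  // D[n][m] = optimal cost
}
```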
Alignment Traceback
- When the table is filled in, $D[n,m]$ will be the cost of the optimal alignment of the two strings
- In order to know what alignment produces this minimal cost, record traceback information in each cell
- The traceback at $D[i,j]$ indicates which of the three terms in the min function can be used to compute $D[i,j]$
- Save more than one if there is a tie
Alignment Traceback (cont’d)
- Any traceback path from $D[n,m]$ to $D[0,0]$ will give the operations to construct an optimal alignment
- Write the aligned strings from right to left
- When in cell $D[i,j]$:
- diagonal traceback: write $S_i$ and $T_j$
- up (delete): write $S_i$ and '-'
- left (insert): write '-' and $T_j$
- Q: do all paths lead to $D[0,0]$? Why?
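A minimal C++ sketch of one traceback, reusing the table produced by the fill sketch above. Instead of storing traceback pointers, it re-tests which of the three terms reproduces each cell value, which is equivalent for recovering a single optimal alignment; ties are broken in a fixed order.

```cpp
// Recover one optimal alignment from the filled table D by re-testing which
// of the three terms reproduces each cell (equivalent to following stored
// traceback pointers for a single path; ties are broken in a fixed order).
#include <cstddef>
#include <string>
#include <vector>

void traceback(const std::vector<std::vector<double>> &D,
               const std::string &S, const std::string &T,
               double g, double m,
               std::string &alignedS, std::string &alignedT) {
    std::size_t i = S.size(), j = T.size();
    alignedS.clear();
    alignedT.clear();
    while (i > 0 || j > 0) {
        double c = (i > 0 && j > 0 && S[i - 1] == T[j - 1]) ? 0.0 : m;
        if (i > 0 && j > 0 && D[i][j] == D[i - 1][j - 1] + c) {
            alignedS.insert(alignedS.begin(), S[i - 1]);       // (mis)match
            alignedT.insert(alignedT.begin(), T[j - 1]);
            --i; --j;
        } else if (i > 0 && D[i][j] == D[i - 1][j] + g) {
            alignedS.insert(alignedS.begin(), S[i - 1]);       // delete: space in T
            alignedT.insert(alignedT.begin(), '-');
            --i;
        } else {                                               // insert: space in S
            alignedS.insert(alignedS.begin(), '-');
            alignedT.insert(alignedT.begin(), T[j - 1]);
            --j;
        }
    }
}
```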
Example
- S = ATGTA
- T = ATTCA
- Matrix using
- g = 1
- m = 1
- What are the initial entries (base case)?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>A</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>G</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>A</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
  </tbody>
</table>
Example
- S = ATGTA
- T = ATTCA
- Matrix using
- g = 1
- m = 1
- What are the traceback entries for the base case?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>2</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>G</td><td>3</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>4</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>A</td><td>5</td><td></td><td></td><td></td><td></td><td></td></tr>
  </tbody>
</table>
Example
- S = ATGTA
- T = ATTCA
- Matrix using
- g = 1
- m = 1
- What is the value for D[1,1]?
- Its traceback entry?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>2</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>G</td><td>3</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>4</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>A</td><td>5</td><td></td><td></td><td></td><td></td><td></td></tr>
  </tbody>
</table>
Example
- S = ATGTA
- T = ATTCA
- Matrix using
- g = 1
- m = 1
- What are the remaining entries for row 1?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td>0</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>2</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>G</td><td>3</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>T</td><td>4</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>A</td><td>5</td><td></td><td></td><td></td><td></td><td></td></tr>
  </tbody>
</table>
Example
- **S** = ATGTA
- **T** = ATTCA
- Matrix using
- *g* = 1
- *m* = 1
- Fill in the rest of the rows...
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td></tr>
    <tr><td>T</td><td>2</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>G</td><td>3</td><td>2</td><td>1</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>T</td><td>4</td><td>3</td><td>2</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>A</td><td>5</td><td>4</td><td>3</td><td>2</td><td>2</td><td>2</td></tr>
  </tbody>
</table>
Note two tracebacks from D[1,5]
Example
- **S** = ATGTA
- **T** = ATTCA
- What is the cost of a complete alignment?
- How many such alignments are there?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td></tr>
    <tr><td>T</td><td>2</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>G</td><td>3</td><td>2</td><td>1</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>T</td><td>4</td><td>3</td><td>2</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>A</td><td>5</td><td>4</td><td>3</td><td>2</td><td>2</td><td>2</td></tr>
  </tbody>
</table>
Example
- Test your understanding:
- is each value on a solution path less than or equal to the previous value?
- is any sequence of adjacent cells with continually equal or decreasing values a solution?
- i.e. (same question): why is there no arrow from D[5,5] to D[5,4]?
Example
- Test your understanding:
- what would the matrix look like if there were no matches?
e.g. align:
AAAAA
TTTTT
what would the alignment cost be?
<table>
  <thead>
    <tr><th></th><th></th><th>A</th><th>T</th><th>T</th><th>C</th><th>A</th></tr>
  </thead>
  <tbody>
    <tr><td></td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr>
    <tr><td>A</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td></tr>
    <tr><td>T</td><td>2</td><td>1</td><td>0</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>G</td><td>3</td><td>2</td><td>1</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>T</td><td>4</td><td>3</td><td>2</td><td>1</td><td>2</td><td>3</td></tr>
    <tr><td>A</td><td>5</td><td>4</td><td>3</td><td>2</td><td>2</td><td>2</td></tr>
  </tbody>
</table>
Complexity
- This method is clearly $O(n \times m)$ in both space and time
- Not suitable for very long sequences (e.g. two chromosomes…)
- It is possible to compute scores only in $O(n)$ space
- the current row uses values from the previous row only
- There is a linear-space method that produces scores and alignments (Hirschberg, 1977)
- More on this and other improvements later in the term [?]
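A minimal C++ sketch of the O(m)-space score computation mentioned above, keeping only the previous row of the table (names are illustrative):

```cpp
// Compute only the optimal alignment score in O(m) space: each row of D
// depends only on the previous row, so two rows suffice.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

double alignmentScore(const std::string &S, const std::string &T, double g, double m) {
    std::size_t n = S.size(), mlen = T.size();
    std::vector<double> prev(mlen + 1), cur(mlen + 1);
    for (std::size_t j = 0; j <= mlen; ++j) prev[j] = j * g;   // row 0 (base case)
    for (std::size_t i = 1; i <= n; ++i) {
        cur[0] = i * g;                                        // column 0 (base case)
        for (std::size_t j = 1; j <= mlen; ++j) {
            double c = (S[i - 1] == T[j - 1]) ? 0.0 : m;
            cur[j] = std::min({prev[j - 1] + c, prev[j] + g, cur[j - 1] + g});
        }
        std::swap(prev, cur);
    }
    return prev[mlen];                                         // equals D[n][m]
}
```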
Project #3: Pairwise Alignment
- The next programming project this term will be to write a pairwise alignment program
- Input: two sequences
- recommend FASTA format, but any other “standard” format will do
- Output: optimal alignment, computed using a dynamic programming algorithm
- Runtime parameters:
- mismatch cost
- gap cost
Project #3 (cont’d)
- Some extra credit ideas:
- affine gap penalties
- end-gaps free
- no cost for one or more spaces on the end of either string
- character-specific mismatch penalties
- e.g. $20 \times 20$ matrix of amino acid substitution costs
- read matrix from a file when the program starts
Project #3 (cont’d)
- Project tar file will have:
- C++ outline of main program
- Array class for creating 2D matrices
designed for arrays of floating point numbers
- May be retrofit to be template for arrays of any objects
- Makefile
- Test sequences in FASTA format
Overview of VTK (Pt 3)
Announcements
• Volume rendering lectures: next week
• VisIt lectures: next week
• Quiz: Nov 9th (isolines)
• Project proposal:
– 510: due Weds Nov 11th
– 410: due Tues Nov 17th
• Extra OH: Mon 1:30-2:30, Fri 12:30-1:30
• 6B now split:
– “MC cases” still due Sat, “MC implementation” due Tues
• SVN lecture online
Project 6B
- Stop when you encounter -1 in the tables
Final Project
• Two general flavors:
– Here is some data I find interesting and I want to visualize
• Data source:
– Find yourself
– I have some
• How to visualize:
– Use VTK
– Use VisIt
– Other
– Here is a visualization algorithm I want to implement
Review
Outline
• Quick introduction to VTK
• Foundational concepts
– Object-oriented programming
– Data flow networks
• Overview of key VTK modules
Example of data flow (image processing)
1. FileReader
2. Crop
3. Transpose
4. Invert
5. Color
6. Concatenate
7. FileWriter
Example of data flow (image processing)
- **Participants:**
- **Source:** a module that produces data
- It creates an output
- **Sink:** a module that consumes data
- It operates on an input
- **Filter:** a module that transforms input data to create output data
- **Pipeline:** a collection of sources, filters, and sinks connected together
Key abstract types in VTK
- `vtkDataObject / vtkDataSet`
- `vtkAlgorithm`
- Graphics modules
Key abstract type in VTK: vtkDataObject
While vtkDataObject allows VTK developers to add custom abstractions, almost all usage by new users of VTK is via vtkDataSet.
Key abstract type in VTK: `vtkDataObject`
I’ve gone 15 years using almost exclusively four concrete types of `vtkDataObject`.
Important derived types of vtkDataSet
- vtkStructuredGrid
- vtkUnstructuredGrid
- vtkRectilinearGrid
- vtkPolyData
Important methods associated with vtkDataSet
- int GetNumberOfCells();
- int GetNumberOfPoints();
- vtkCell *GetCell(int cellID);
- double *GetPoint(int pointID);
- vtkPointData *GetPointData();
- Gets fields defined on points (vertices) of mesh
- vtkCellData *GetCellData();
- Gets fields defined on cells (elements) of mesh
- vtkFieldData *GetFieldData();
- Gets fields defined not on cells or points
Fields are flexible in VTK, including scalars, vectors, tensors, and fields of arbitrary length.
Polymorphism! ... each derived type implements this interface.
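As a small illustration, here is a minimal sketch (the helper name is ours) that uses only the generic interface listed above, so it works for any concrete vtkDataSet:

```cpp
#include <vtkDataSet.h>
#include <vtkPointData.h>
#include <vtkDataArray.h>
#include <iostream>

// Summarize any vtkDataSet through the generic interface, regardless of which
// concrete type (structured, unstructured, rectilinear, poly data) is behind it.
void Summarize(vtkDataSet *ds)
{
    std::cout << ds->GetNumberOfPoints() << " points, "
              << ds->GetNumberOfCells()  << " cells" << std::endl;

    vtkDataArray *scalars = ds->GetPointData()->GetScalars();
    if (scalars != nullptr)
    {
        double range[2];
        scalars->GetRange(range);   // min and max over all point scalars
        std::cout << "point scalars in [" << range[0] << ", "
                  << range[1] << "]" << std::endl;
    }
}
```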
But using this general interface can cost performance. Fixes?
Key abstract types in VTK
- `vtkDataObject` / `vtkDataSet`
- `vtkAlgorithm`
- Graphics modules
Key abstract type in vtk: vtkAlgorithm
• While data flow has clear concepts for “Source”, “Sink”, and “Filter”, VTK has a single class “vtkAlgorithm”
– Previously had differentiated types
• vtkAlgorithm:
– has zero, one, or more inputs
• void SetInputConnection(vtkAlgorithmOutput *); // port 0
• void SetInputConnection(int port, vtkAlgorithmOutput *);
– has zero, one, or more outputs
• vtkAlgorithmOutput *GetOutputPort(void); // port 0
• vtkAlgorithmOutput *GetOutputPort(int);
First program
```c
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
}
```
```
fawcett:VTK_ex child$ cat contour_no_graphics.C
fawcett:VTK_ex child$ make
g++ -g -o contour_no_graphics -I/Users/childs/visit/vtk/6.0.0/i386-apple-darwin10_gcc-4.2/include/vtk-6.0 contour_no_graphics.C -L/Users/childs/visit/vtk/6.0.0/i386-apple-darwin10_gcc-4.2/lib -lvtkRenderingFreeTypeOpenGL-6.0 -lvtkRenderingFreeType-6.0 -lvtkInteractionStyle-6.0 -lvtkRenderingOpenGL-6.0 -lvtkImage-6.0 -lvtkIOLegacy-6.0 -lvtkIOCore-6.0 -lvtkRenderingCore-6.0 -lvtkFiltersCore-6.0 -lvtkCommonDataModel-6.0 -lvtkCommonMisc-6.0 -lvtkCommonExecutionModel-6.0 -lvtkCommonCore-6.0
fawcett:VTK_ex child$ ./contour_no_graphics
fawcett:VTK_ex child$ ls -l contour.vtk
-rw-r--r-- 1 childs staff 1383911 Jul 6 15:47 contour.vtk
```
First program
Modules have many options for how they execute. These options are encoded as attributes in the module and modified using “Setter” functions.
First program
```c
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
}
```
VTK forces all VTK objects to be allocated using dynamic memory (the heap).
VTK memory management
• VTK uses reference counting for all objects (vtkAlgorithm, vtkDataObject, etc)
• Rules:
– All new objects have a reference count of 1
– Register() increments the reference count
– Delete() decrements the reference count
– When reference count hits 0, the object is deleted
• VTK shares arrays between vtkDataObjects, to save on memory...
– ... which means they can’t store arrays on stack, since the arrays could go out of scope (dangling pointer)
VTK has recently introduced a templated type, vtkSmartPointer, to assist with reference counting.
First program (leak free version)
```c
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
rdr->Delete();
cf->Delete();
wrtr->Delete();
}
```
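Alternatively, a sketch of the same program using vtkSmartPointer, which performs the Delete() calls automatically when the pointers go out of scope:

```cpp
#include <vtkSmartPointer.h>
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>

int main()
{
    // Each vtkSmartPointer decrements the reference count automatically,
    // so no explicit Delete() calls are needed.
    vtkSmartPointer<vtkDataSetReader> rdr =
        vtkSmartPointer<vtkDataSetReader>::New();
    rdr->SetFileName("noise.vtk");

    vtkSmartPointer<vtkContourFilter> cf =
        vtkSmartPointer<vtkContourFilter>::New();
    cf->SetNumberOfContours(1);
    cf->SetValue(0, 3.0);
    cf->SetInputConnection(rdr->GetOutputPort());

    vtkSmartPointer<vtkDataSetWriter> wrtr =
        vtkSmartPointer<vtkDataSetWriter>::New();
    wrtr->SetFileName("contour.vtk");
    wrtr->SetInputConnection(cf->GetOutputPort());
    wrtr->Write();
}
```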
First program
```c
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
}
```
The pipeline is constructed via SetInputConnection() and GetOutputPort() calls.
How does VTK control execution?
New Material
VTK’s Execution Model
• Key method: Update()
– Update() requests a module to get its output “up-to-date”, i.e., to calculate it
• But what if that module's inputs are not up-to-date?
– Part of an Update() is to call Update() on all the inputs to a module
• In the example program, "Write()" knows to request that its input be brought up-to-date, and that request propagates up the pipeline
First program
```
fawcett:VTK_ex childs$ cat contour_no_graphics.C
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
}
```
1) wrtr asks cf to Update()
2) cf asks rdr to Update()
3) rdr reads from the file
4) cf calculates contour
5) wrtr writes file
VTK & Time Stamps
• VTK prevents unnecessary re-calculation of the pipeline
– It uses time stamps to keep track of when a module or its input was last modified, and when the module last calculated its output.
First program
```c
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkDataSetWriter.h>
int main()
{
vtkDataSetReader *rdr = vtkDataSetReader::New();
rdr->SetFileName("noise.vtk");
// Contour the data.
vtkContourFilter *cf = vtkContourFilter::New();
cf->SetNumberOfContours(1);
cf->SetValue(0, 3.0);
cf->SetInputConnection(rdr->GetOutputPort());
vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
wrtr->SetFileName("contour.vtk");
wrtr->SetInputConnection(cf->GetOutputPort());
wrtr->Write();
cf->SetValue(0, 3.5);
wrtr->SetFileName("contour2.vtk");
wrtr->Write();
rdr->Delete();
cf->Delete();
wrtr->Delete();
}
```
Topology of pipelines
• Each module can have multiple inputs, multiple outputs
• Multiple sinks are fine
– Call Update() on each
• Cycles are technically OK, but can be problematic
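A minimal sketch of the multiple-sinks case mentioned above (the file names are made up): one reader feeds two writers, and each writer's Write() triggers its own Update() traversal.

```cpp
#include <vtkDataSetReader.h>
#include <vtkDataSetWriter.h>

int main()
{
    vtkDataSetReader *rdr = vtkDataSetReader::New();
    rdr->SetFileName("noise.vtk");

    vtkDataSetWriter *w1 = vtkDataSetWriter::New();
    w1->SetFileName("copy1.vtk");
    w1->SetInputConnection(rdr->GetOutputPort());

    vtkDataSetWriter *w2 = vtkDataSetWriter::New();
    w2->SetFileName("copy2.vtk");
    w2->SetInputConnection(rdr->GetOutputPort());

    w1->Write();   // reader executes here
    w2->Write();   // reader output is already up-to-date (time stamps), so only w2 runs

    rdr->Delete(); w1->Delete(); w2->Delete();
    return 0;
}
```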
Key abstract types in VTK
- `vtkDataObject` / `vtkDataSet`
- `vtkAlgorithm`
- Graphics modules
Graphics Modules
- 90+% of VTK source code is sources, sinks, and filters.
- <10% is graphics / windowing.
- ... but ~50% of most “getting started” programs involve graphics / windows
5 Abstractions for Graphics / Windowing
1. **RenderWindow**: a window
2. **Renderer**: the place inside a window where you can render
- There can be multiple renderers within a window
3. **Actor**: something that can be placed into a renderer
4. **Mapper**: maps data to geometric primitives
- One mapper can be associated with multiple actors
5. **RenderWindowInteractor**: defines what button clicks, mouse movements, etc. should do
Example with graphics / windowing (pt 1)
```cpp
int main()
{
// The following lines create a sphere represented by polygons.
//
vtkSmartPointer<vtkSphereSource> sphere =
vtkSmartPointer<vtkSphereSource>::New();
sphere->SetThetaResolution(100);
sphere->SetPhiResolution(50);
// The mapper is responsible for pushing the geometry into the graphics
// library. It may also do color mapping, if scalars or other attributes
// are defined.
//
vtkSmartPointer<vtkPolyDataMapper> sphereMapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
sphereMapper->SetInputConnection(sphere->GetOutputPort());
vtkSmartPointer<vtkActor> sphere1 =
vtkSmartPointer<vtkActor>::New();
sphere1->SetMapper(sphereMapper);
sphere1->GetProperty()->SetColor(1,0,0);
vtkSmartPointer<vtkActor> sphere2 =
vtkSmartPointer<vtkActor>::New();
sphere2->SetMapper(sphereMapper);
sphere2->GetProperty()->SetColor(0,1,0);
sphere2->AddPosition(1.25,0,0);
  // (continued in pt 2)
```
Adapted from SpecularSpheres.cxx in VTK source code
Example with graphics / windowing (pt 2)
```cpp
  // Create the graphics structure. The renderer renders into the
  // render window. The render window interactor captures mouse events
  // and will perform appropriate camera or actor manipulation
  // depending on the nature of the events.
  //
  vtkSmartPointer<vtkRenderer> ren1 =
    vtkSmartPointer<vtkRenderer>::New();
  vtkSmartPointer<vtkRenderWindow> renWin =
    vtkSmartPointer<vtkRenderWindow>::New();
  renWin->AddRenderer(ren1);
  vtkSmartPointer<vtkRenderWindowInteractor> iren =
    vtkSmartPointer<vtkRenderWindowInteractor>::New();
  iren->SetRenderWindow(renWin);

  // Add the actors to the renderer, set the background and size.
  //
  ren1->AddActor(sphere1);
  ren1->AddActor(sphere2);
  ren1->SetBackground(0.1, 0.2, 0.4);
  renWin->SetSize(400, 200);
  ren1->GetActiveCamera()->SetFocalPoint(0,0,0);
  ren1->GetActiveCamera()->SetPosition(0,0,1);
  ren1->GetActiveCamera()->SetViewUp(0,1,0);
  ren1->GetActiveCamera()->ParallelProjectionOn();
  ren1->ResetCamera();
  ren1->GetActiveCamera()->SetParallelScale(1.5);

  // This starts the event loop and invokes an initial render.
  //
  iren->Initialize();
  iren->Start();

  return EXIT_SUCCESS;
}
```
More Example Programs
• Many example programs in VTK download
• Some C++, some Python
• Challenge is typically figuring out how to map what you want to do to VTK modules
– How to find the right module?
– How to set up the module’s options?
– Good reference for these questions:
Summary
- VTK is open source, written in C++, and is supported by a large community.
- It employs the data flow paradigm.
- It has many modules (readers, filters, mappers), which makes it very powerful.
- It is well-suited for many tasks, including:
  - foundation for visualization tools
  - one-off visual explorations of data
  - custom visualization tools, especially when weighed against the effort of incorporating it.
Additional Visualization Algorithms
Slicing
• Assume rectilinear mesh with
– $X=\{0,1,2,3,4,5,6,7,8,9\}$
– $Y=\{0,1,2,3,4,5,6,7,8,9\}$
– $Z=\{0,1,2,3,4,5,6,7,8,9\}$
How do we generate slice at $Y=5$?
Slice at Y=5
• Output mesh:
– X={0,1,2,3,4,5,6,7,8,9}
– Y={5}
– Z={0,1,2,3,4,5,6,7,8,9}
```c
for (int z = 0 ; z < 10 ; z++)
   for (int x = 0 ; x < 10 ; x++)
      outF[z*10+x] = F[z*100+5*10+x];
```
Slicing
- Assume rectilinear mesh with
- $X=\{0,1,2,3,4,5,6,7,8,9\}$
- $Y=\{0,1,2,3,4,5,6,7,8,9\}$
- $Z=\{0,1,2,3,4,5,6,7,8,9\}$
How do we generate slice at $Y=5.3$?
Slice at Y=5.3
- Output mesh:
- $X=\{0,1,2,3,4,5,6,7,8,9\}$
- $Y=\{5.3\}$
- $Z=\{0,1,2,3,4,5,6,7,8,9\}$
```c
for (int z = 0 ; z < 10 ; z++)
for (int x = 0 ; x < 10 ; x++)
outF[z*10+x] =
(0.3)*(F[z*100+6*10+x]-F[z*100+5*10+x])
+ F[z*100+5*10+x];
```
Slicing
• Assume rectilinear mesh with
– $X=\{0,1,2,3,4,5,6,7,8,9\}$
– $Y=\{0,1,2,3,4,5,6,7,8,9\}$
– $Z=\{0,1,2,3,4,5,6,7,8,9\}$
How do we generate slice at plane $X+Y+Z=0$?
Answer: we will need “distance functions”
Distance Functions
• Distance function: measures how far a point is from surface
• Example: how far are you from the plane $X+Y+Z=0$?
• How far is the point $(3,0,0)$ from this plane?
Answer: 3 units
Distance Functions
- Distance function: measures how far a point is from surface
- Example: how far are you from the plane $X+Y+Z=0$?
- How far is the point $(1,1,0)$ from this plane?
Answer: $\sqrt{2}$ units
How to use distance functions to slice?
• Step #1: create distance function
• Step #2: isosurface with isovalue = 0
Revisiting $Y=5.3$
- Some cells straddle $Y=5.3$
- When classifying, all $Y=5$ get 0, all $Y=6$ get 1
Distance Functions: approximate version
• Distance function: measures how far a point is from surface
• Example: how far are you from the plane $X+Y+Z=0$?
• How far is the point $(x,y,z)$ from this plane?
Approx. answer: $x+y+z$ units
Why is the approximation acceptable? The field $x+y+z$ overestimates the true distance by the same factor ($\sqrt{3}$) on both sides of the plane, so the zero crossing, and therefore the extracted surface, is unchanged.
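A minimal sketch (assuming the same 10×10×10 grid with unit spacing used earlier): step #1 fills the approximate distance field $x+y+z$, and step #2 would hand that field to Marching Cubes with isovalue 0 to extract the slice.

```cpp
// Build the approximate distance field for the plane X+Y+Z = 0 on a
// 10x10x10 grid, using the same indexing convention as the slicing code.
void PlaneDistanceField(float dist[1000])
{
    for (int z = 0 ; z < 10 ; z++)
        for (int y = 0 ; y < 10 ; y++)
            for (int x = 0 ; x < 10 ; x++)
                dist[z*100 + y*10 + x] = static_cast<float>(x + y + z);
    // isosurface(dist, isovalue = 0) then yields the slice X+Y+Z = 0
}
```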
Analogy
• UCD Math 127A: Introduction to Mathematical Analysis
• 10 week course
• First 8 weeks:
– Intermediate Value Theorem
• Last two weeks:
– All of derivatives and integrals
Threshold
• Keep cell if it meets some criteria, else discard
• Criteria:
– Pressure > 2
– 10 < temperature < 20
How to implement threshold
• Iterate over cells
• If a cell meets the criteria, then place that cell in the output
• Output is an unstructured mesh
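In VTK itself, the same effect can be sketched with vtkThreshold (the file name and threshold value are assumptions); consistent with the list above, its output is an unstructured grid.

```cpp
#include <vtkDataSetReader.h>
#include <vtkThreshold.h>
#include <vtkDataSetWriter.h>

int main()
{
    vtkDataSetReader *rdr = vtkDataSetReader::New();
    rdr->SetFileName("noise.vtk");

    // Keep only the cells whose scalar value is greater than 2.
    vtkThreshold *thres = vtkThreshold::New();
    thres->SetInputConnection(rdr->GetOutputPort());
    thres->ThresholdByUpper(2.0);

    vtkDataSetWriter *wrtr = vtkDataSetWriter::New();
    wrtr->SetFileName("threshold.vtk");
    wrtr->SetInputConnection(thres->GetOutputPort());
    wrtr->Write();

    rdr->Delete(); thres->Delete(); wrtr->Delete();
    return 0;
}
```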
Interval Volumes
Isolates portion of volume between two values, $V_{\text{low}}$ and $V_{\text{hi}}$.
Interval volumes vs isosurfaces
Interval volume between 2.5 and 2.7.
Isosurfaces at 2.5 and 2.7.
How to implement interval volumes
• Iterate over cells
• Like marching cubes, but making topologically 3D output (tetrahedrons, not triangles)
• Now 3 states: below, within interval, above
• Many, many cases to determine
Box
- Isolate portion of volume within a box
- $-8 < x < 8$
- $-9 < y < 5.7$
- $-3.2 < z < 6.4$
How to implement box
• Iterate over cells
• Three cases:
– Retain cell
– Discard cell
– Split cell (i.e., straddles box boundary)
• How to split cell?
– Box:Interval Volume as Slicing:Isosurfacing
• (set up 6 distance fields and use interval volumes)
• (why not 1 distance field?)
Clip by arbitrary functions
How to implement Clip
• Same as Box, but different spatial function
• Iterate over cells
• Three cases:
– Retain cell
– Discard cell
– Split cell (i.e., straddles clip boundary)
• How to split cell?
– Clip:Interval Volume as Slicing:Isosurfacing
• (possibly multiple clips)
Slicing by non-planes
How to do non-planar slicing
• Set up distance function for spatial function (cone, sphere)
• Apply Marching Cubes
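For example, a minimal sketch (hypothetical 10×10×10 grid with unit spacing; the sphere center and radius are assumptions) of the distance field for a spherical slice; isosurfacing it at 0 gives the sphere.

```cpp
#include <cmath>

// Signed distance to a sphere of radius R centered at (cx, cy, cz):
// negative inside, positive outside, zero on the sphere itself.
void SphereDistanceField(float dist[1000],
                         double cx, double cy, double cz, double R)
{
    for (int z = 0 ; z < 10 ; z++)
        for (int y = 0 ; y < 10 ; y++)
            for (int x = 0 ; x < 10 ; x++)
                dist[z*100 + y*10 + x] = static_cast<float>(
                    std::sqrt((x-cx)*(x-cx) + (y-cy)*(y-cy) + (z-cz)*(z-cz)) - R);
}
```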
Isosurface by one variable, color by another
Isosurface by var1, color by var1
Isosurface by var1, color by var2
How to implement isosurface by var1, color by var2
• Marching Cubes based on var1.
• Need operation:
– As Marching Cubes calculates each triangle, evaluate var2 for each vertex of that triangle
– Create variable var2 on output triangle mesh
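In VTK terms, one way to sketch this (the variable names "var1"/"var2" and the file name are assumptions): contour on var1, then have the mapper color the resulting triangles by var2, which the contour filter interpolates onto the output points.

```cpp
#include <vtkDataSetReader.h>
#include <vtkContourFilter.h>
#include <vtkPolyDataMapper.h>
#include <vtkDataObject.h>

int main()
{
    vtkDataSetReader *rdr = vtkDataSetReader::New();
    rdr->SetFileName("data.vtk");
    rdr->ReadAllScalarsOn();          // make sure var2 is read as well

    // Isosurface on var1.
    vtkContourFilter *cf = vtkContourFilter::New();
    cf->SetInputConnection(rdr->GetOutputPort());
    cf->SetInputArrayToProcess(0, 0, 0,
        vtkDataObject::FIELD_ASSOCIATION_POINTS, "var1");
    cf->SetValue(0, 3.0);

    // Color the output triangles by var2.
    vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
    mapper->SetInputConnection(cf->GetOutputPort());
    mapper->SetScalarModeToUsePointFieldData();
    mapper->SelectColorArray("var2");
    mapper->SetScalarRange(0.0, 1.0); // assumed range of var2

    // ... attach the mapper to an actor / renderer as in the sphere example.
    return 0;
}
```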
---
INSTRUCTIONS
- You have 3 hours to complete the exam.
- The exam is closed book, closed notes, closed computer, closed calculator, except three hand-written 8.5" × 11" crib sheet of your own creation and the official CS 61A midterm 1, midterm 2, and final study guides.
- Mark your answers on the exam itself. We will not grade answers written on scratch paper.
Last name: ______________________________

First name: ______________________________

Student ID number: ______________________________

CalCentral email (@berkeley.edu): ______________________________

TA: ______________________________

Name of the person to your left: ______________________________

Name of the person to your right: ______________________________
All the work on this exam is my own. (please sign)
POLICIES & CLARIFICATIONS
- If you need to use the restroom, bring your phone and exam to the front of the room.
- You may use built-in Python functions that do not require import, such as `min`, `max`, `pow`, `len`, `abs`, `sum`, `next`, `iter`, `list`, `tuple`, `map`, `filter`, `zip`, `all`, and `any`.
- You may not use example functions defined on your study guides unless a problem clearly states you can.
- For fill-in-the-blank coding problems, we will only grade work written in the provided blanks. You may only write one Python statement per blank line, and it must be indented to the level that the blank is indented.
- Unless otherwise specified, you are allowed to reference functions defined in previous parts of the same question.
- You may use the `Tree`, `Link`, and `BTree` classes defined on Page 2 (left column) of the Midterm 2 Study Guide.
1. (12 points) High Quality Air (All are in Scope: OOP, WWPD, Lambda, Python Lists, Mutation)
For each of the expressions in the table below, write the output displayed by the interactive Python interpreter when the expression is evaluated. The output may have multiple lines. The first row is completed for you.
- If an error occurs, write Error, but include all output displayed before the error.
- To display a function value, write Function.
- To display an iterator value, write Iterator.
- If an expression would take forever to evaluate, write Forever.
The interactive interpreter displays the contents of the repr string of the value of a successfully evaluated expression, unless it is None.
Assume that you have started python3 and executed the code shown on the left first, then you evaluate each expression on the right in the order shown. Expressions evaluated by the interpreter have a cumulative effect.
<table>
<thead>
<tr>
<th>Expression</th>
<th>Output</th>
</tr>
</thead>
<tbody>
<tr>
<td>print(None)</td>
<td>None</td>
</tr>
<tr>
<td>print(print(None), print)</td>
<td></td>
</tr>
<tr>
<td>z(4)</td>
<td></td>
</tr>
<tr>
<td>breath(breath(sub))(5, 3)</td>
<td></td>
</tr>
<tr>
<td>[Day().aqi, m.aqi]</td>
<td></td>
</tr>
<tr>
<td>[Week.aqi, t.aqi]</td>
<td></td>
</tr>
<tr>
<td>t.n</td>
<td></td>
</tr>
</tbody>
</table>
```python
from operator import sub

z = (lambda x: lambda y: 2 * (y-x))(3)

def breath(f, count=1):
    if count > 1:
        print(count)
        count += 1
    return lambda x, y: f(x+1, y)

class Day:
    aqi = 10
    def __init__(self, aqi=0):
        if aqi > self.aqi:
            self.aqi = aqi
        self.n = []
    def mask(self, limit):
        def f(aqi):
            if aqi > limit:
                self.n.append(aqi-limit)
            return self.mask(aqi)
        return f

class Week(Day):
    aqi = 50

m, t = Day(), Week(199)
t.mask(200)(100)(150)(160)
Day.aqi = 140
t.aqi = 160
```
2. (8 points) **Diagram Horror** *(All are in Scope: Python Lists, Mutation, Environment Diagram, Lambda)*
Fill in the environment diagram that results from executing the code on the left until the entire program is finished, an error occurs, or all frames are filled. *You may not need to use all of the spaces or frames.*
A complete answer will:
- Add all missing names and parent annotations to all local frames.
- Add all missing values created or referenced during execution.
- Show the return value for each local frame.
- Use box-and-pointer diagrams for lists and tuples.
```python
def get(out):
out.pop()
out = scary(lambda movie: out)
return lambda: [out]
def scary(movie):
out.append(movie)
return movie(5)[:1]
out = [6]
get([7, 8])()
```
---

3. (16 points) Gainz
Definition. A sequence is near increasing if each element beyond the second is larger than all elements preceding its previous element. That is, element \( i \) must be larger than elements \( i - 2, i - 3, i - 4, \) etc.
(a) (3 pt) (All are in Scope: Python Lists, List Comprehensions, Recursion) Implement \texttt{is\_near}, which takes a sequence \( s \) and returns whether its elements form a near increasing sequence.
\begin{verbatim}
def is_near(s):
"""Return whether \( s \) is a near increasing sequence."
>>> is_near([]) and is_near([1]) and is_near([1, 2]) and is_near(range(10))
True
>>> is_near([4, 2]) and is_near([1, 4, 2, 5]) and is_near([1, 2, 4, 3])
True
>>> is_near([3, 2, 1]) # 1 <= 3
False
>>> is_near([1, 4, 2, 3, 5]) # 3 <= 4
False
>>> is_near([1, 4, 2, 5, 3]) # 3 <= 4
False
>>> is_near([1, 2, 4, 2, 5]) # 2 <= 2
False
""
return all([___________ > ______________________ for i in ___________________________])
\end{verbatim}
(b) (6 pt) (At least one of these is out of Scope: Exceptions, Iterators, Recursion) Implement \texttt{fast\_near}, which takes an iterable value and returns whether its elements form a near increasing sequence. \texttt{fast\_near} must run in \( \Theta(n) \) time and \( \Theta(1) \) space (not including the input itself) for an iterable input with \( n \) elements. Assume that \( s \) has a finite number of elements. You may \textbf{not} call \texttt{is\_near}.
\begin{verbatim}
def fast_near(s):
"""Return whether the elements in iterable \( s \) form a near increasing sequence."
>>> fast_near([2, 5, 3, 6, 6, 7, 7, 9, 8])
True
""
t, s = iter(s), None # Do not refer to s below this line.
try:
largest, last = ________________________________, ________________________________
except StopIteration:
return __________________________________________________________________________
for x in t:
if ________________________________________________________________________________:
return False
largest, last = ________________________________, ________________________________
return True
\end{verbatim}
Alternative Definition. (Equivalent to the one on the previous page, but stated in a more useful way for the problem below.) A sequence is near increasing if each element but the last two is smaller than all elements following its subsequent element. That is, element $i$ must be smaller than elements $i + 2$, $i + 3$, $i + 4$, etc.
(c) (6 pt) (All are in Scope: Lambda, Recursion) Implement `near`, which takes a non-negative integer $n$ and returns the largest near increasing sequence of digits within $n$ as an integer. The arguments `smallest` and `d` are part of the implementation; you must determine their purpose. You may not call `is_near` or `fast_near`. You may not use any values except integers and booleans (True and False) in your solution (no lists, strings, etc.).
```python
def near(n, smallest=10, d=10):
"""Return the longest sequence of near-increasing digits in n."
>>> near(123)
123
>>> near(153)
153
>>> near(1523)
153
>>> near(15123)
1123
>>> near(1111111)
11
>>> near(985357)
557
>>> near(14735476)
143576
>>> near(812348567)
1234567
"""
if n == 0:
return __________________________________________________________________________
no = near(n//10, smallest, d)
if smallest > _________________________________________________________________________:
yes = _____________________________________________________________________________
return __________________________________________________________________________(yes, no)
return ________________________________________________________________________________
```
(d) (1 pt) What is the largest possible integer that could ever be returned from the `near` function? Note: In general, integers in Python can be arbitrarily large.
4. (11 points) Tree Time
**Definition.** A runt node is a node in a tree whose label is smaller than all of the labels of its siblings. A sibling is another node that shares the same parent. A node with no siblings is a runt node.
(a) (7 pt) (All are in Scope: Tree Recursion, Tree Class, HOFs) Implement runts, which takes a Tree instance \( t \) in which every label is different and returns a list of the labels of all runt nodes in \( t \), in any order. Also implement apply_to_nodes, which returns nothing and is part of the implementation. Do not mutate any tree. The Tree class is on the Midterm 2 Guide.
```python
def runts(t):
"""Return a list in any order of the labels of all runt nodes in t."
>>> sorted(runts(Tree(9, [Tree(3), Tree(4, [Tree(5, [Tree(6)]), Tree(7)]), Tree(2)])))
[2, 5, 6, 9]
"""
result = []
def g(node):
if ________________________________________________________________:
result.append(______________________________________________________)
apply_to_nodes(_____________________________________________________________________
return ___________________________________________________________________________
def apply_to_nodes(f, t):
"""Apply a function f to each node in a Tree instance t.""
________________________________________________________________
for b in t.branches:
_______________________________________________________________________
```
(b) (4 pt) (All are in Scope: Tree Recursion, Tree Class, Lambda) Implement `max_label`, which takes a `Tree t` and returns its largest label. Do **not** mutate any tree.
```python
def max_label(t):
"""Return the largest label in t."
>>> max_label(Tree(4, [Tree(5), Tree(3, [Tree(6, [Tree(1), Tree(2)])])]))
6
"""
def f(node):
# __________ max(____________, ____________, key=lambda n: _________________________)
apply_to_nodes(f, t) # Assume that apply_to_nodes above is implemented correctly.
return t.label
```
5. (9 points) Run, Program, Run (All are in Scope: Scheme Lists)
Implement runs, a Scheme procedure that takes a list of integers \( s \) and returns a list of non-empty lists of integers \( t \). Together, the lists in \( t \) should contain all elements of \( s \) in order. The first element in each list in \( t \) must be less than the last element in the previous list, if there is one. The rest of the elements in each list in \( t \) must be greater than or equal to the previous element.
Also implement and use next-run in your solution, which takes a non-empty list of integers \( s \) and returns a pair of lists: the longest non-decreasing prefix of \( s \) and the rest of \( s \). Use the provided pair data abstraction. Your implementation should be correct even if the pair implementation were to change.
```
;; Return a list of non-decreasing lists that together contain the elements of \( s \).
;; scm> (runs '(3 4 7 6 6 8 1 2 5 5 4))
;; ((3 4 7) (6 6 8) (1 2 5 5) (4))
;; scm> (runs '(4 3 2 3))
;; ((4) (3) (2 3))
(define (runs s)
(if (null? s) ________________________________________________________________
(let ((p (next-run s))) ________________________________________________________________
(if (or ________________________________________________________________
(pair ________________________________________________________________
(begin ________________________________________________________________
(define p (next-run (cdr s)))
(pair ________________________________________________________________))))))
```
;; A data abstraction for a pair of a first run and the rest of a list.
(define (pair a b) (lambda (c) (if c a b)))
(define (first p) (p #t))
(define (rest p) (p #f))
;; Return a pair containing the first run in \( s \) (a list) and the rest of \( s \) (another list).
;; scm> (first (next-run '(4 5 1 3 2)))
;; (4 5)
;; scm> (rest (next-run '(4 5 1 3 2)))
;; (1 3 2)
(define (next-run s)
(if (or ________________________________________________________________
________________________________________________________________
(begin ________________________________________________________________
(define p (next-run (cdr s)))
(pair ________________________________________________________________))))
6. (9 points) Generation Z
(a) (4 pt) (All are in Scope: Generators, Linked List Class) Implement `rev`, a generator function that takes a `Link` instance and yields the elements of that linked list in reverse order. The `Link` class appears on Page 2 of the Midterm 2 Study Guide.
```
def rev(s):
"""Yield the elements in Link instance s in reverse order.
>>> list(rev(Link(1, Link(2, Link(3)))))
[3, 2, 1]
>>> next(rev(Link(2, Link(3))))
3
""
if
yield
```
(b) (2 pt) (All are in Scope: Scheme Lists, Scheme Streams) Using the provided `add` procedure, define `not-three`, an infinite stream of all positive integers that are not evenly divisible by 3. The `not-three` stream is increasing and begins with 1, 2, 4, 5, 7, 8, 10, 11, 13.
```
(define (add k s) (cons-stream (+ k (car s)) (add k (cdr-stream s))))
(define not-three ____________________________)
```
(c) (3 pt) (All are in Scope: Scheme Macros) Implement `infix`, a Scheme macro that evaluates infix expressions. An infix expression is either a number or a three-element list containing an infix expression, a procedure, and another infix expression. The value of a compound infix expression is the value of its second element applied to the values of its first and third elements. Note: The last line begins with a quasiquote. If you cross out the quasiquote and solve the problem without using quasiquote or unquote, you can receive up to 2 out of 3 points (not recommended).
```
;; A macro to evaluate infix expressions.
;; scm> (infix (2 * 3))
;; 6
;; scm> (infix ((1 + 1) * (1 + 2)))
;; 6
;; scm> (infix ((1 + (3 - 2)) * ((2 + 3) + 2)))
;; 14
(define-macro (infix e)
(if (number? e) e
`((______________________________))))
(define (cadr x) (car (cdr x)))
(define (caddr x) (car (cdr (cdr x))))
```
7. (10 points) SQL of Course (All are in Scope: SQL, SQL Aggregation)
The courses table describes the course name, start time hour (h) and minute (m), and length in minutes (len) for different lectures. For example, 61A starts at 13:00 and lasts 50 minutes. The locations table describes the course name and location (loc) of these courses. Assume that each course name appears exactly once in each table. Write your SQL statements so that they would still be correct if the table contents changed.
```
CREATE TABLE courses AS
SELECT "1" AS course, 14 AS h, 0 AS m, 80 AS len UNION
SELECT "2" , 13 , 30 , 80 UNION
SELECT "8" , 12 , 30 , 50 UNION
SELECT "10" , 12 , 30 , 110 UNION
SELECT "50AC" , 13 , 30 , 45 UNION
SELECT "61A" , 13 , 0 , 50;
CREATE TABLE locations AS
SELECT "1" AS name, "VLSB" AS loc UNION
SELECT "2" , "Dwinelle" UNION
SELECT "10" , "VLSB" UNION
SELECT "50AC" , "Wheeler" UNION
SELECT "61A" , "Wheeler";
```
(a) (2 pt) Select a one-column table that contains the course names of all courses that start before 13:30.
```
SELECT course FROM courses WHERE __________________________________________________________;
```
```
61A
8
10
```
(b) (4 pt) Select a two-column table with one row per location that contains the location, as well as the shortest length in minutes of any lecture held in that location.
```
SELECT loc, ______________________________________________________________________________
FROM _____________________________________________________________________________________
________________________________________________________________________________________;
```
```
Dwinelle 80
VLSB 80
Wheeler 45
```
(c) (4 pt) Select a three-column table where each row describes an earlier course, a later course, and the amount of time in minutes between the end time of the earlier course and the start time of the later course. Only include pairs of courses where the lectures do not overlap in time. *Note:* There are 60 minutes in an hour.
```
SELECT _________________________________________, _________________________________________
___________________________________________________________________________________ AS gap
FROM _____________________________________________________________________________________
________________________________________________________________________________________;
```
```
61A 1 10
8 1 40
8 2 10
8 50AC 10
```
8. (0 points) **Draw!** *(Optional)* Draw a picture of some function or procedure.
---
Models for Metasearch
Javed Aslam
The Metasearch Problem
Search for: chili peppers
Search Engines
- Provide a ranked list of documents.
- May provide relevance scores.
- May have performance information.
Search Engine: Alta Vista
Search Engine: Ultraseek
Search Engine: inq102 TREC3
Queryid (Num): 50
Total number of documents over all queries
Retrieved: 50000
Relevant: 9805
Rel_ret: 7305
Interpolated Recall - Precision Averages:
at 0.00 0.8992
at 0.10 0.7514
at 0.20 0.6584
at 0.30 0.5724
at 0.40 0.4982
at 0.50 0.4272
at 0.60 0.3521
at 0.70 0.2915
at 0.80 0.2173
at 0.90 0.1336
at 1.00 0.0115
Average precision (non-interpolated)
for all rel docs (averaged over queries)
0.4226
Precision:
At 5 docs: 0.7440
At 10 docs: 0.7220
At 15 docs: 0.6867
At 20 docs: 0.6740
At 30 docs: 0.6267
At 100 docs: 0.4902
At 200 docs: 0.3848
At 500 docs: 0.2401
At 1000 docs: 0.1461
R-Precision (precision after R
(= num_rel for a query) docs retrieved):
Exact: 0.4524
External Metasearch
Metasearch Engine
Search Engine A
Database A
Search Engine B
Database B
Search Engine C
Database C
Internal Metasearch
Search Engine
- Text Module
- URL Module
- Image Module
Metasearch core
HTML Database
Image Database
Outline
- Introduce problem
- Characterize problem
- Survey current techniques
- Describe new approaches
- decision theory, social choice theory
- experiments with TREC data
- Upper bounds for metasearch
- Future work
# Classes of Metasearch Problems
<table>
<thead>
<tr>
<th></th>
<th>No Training Data</th>
<th>Training Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ranks only</td>
<td>Borda, Condorcet, rCombMNZ</td>
<td>Bayes</td>
</tr>
<tr>
<td>Relevance scores</td>
<td>CombMNZ</td>
<td>LC model</td>
</tr>
</tbody>
</table>
Outline
- Introduce problem
- Characterize problem
- Survey current techniques
- Describe new approaches
- decision theory, social choice theory
- experiments with TREC data
- Upper bounds for metasearch
- Future work
Classes of Metasearch Problems
<table>
<thead>
<tr>
<th></th>
<th>No Training Data</th>
<th>Training Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ranks only</td>
<td>Borda, Condorcet, rCombMNZ</td>
<td>Bayes</td>
</tr>
<tr>
<td>Relevance scores</td>
<td>CombMNZ</td>
<td>LC model</td>
</tr>
</tbody>
</table>
CombsUM [Fox, Shaw, Lee, et al.]
- Normalize scores: [0,1].
- For each doc:
- sum relevance scores given to it by each system (use 0 if unretrieved).
- Rank documents by score.
- Variants: MIN, MAX, MED, ANZ, MNZ
CombMNZ [Fox, Shaw, Lee, et al.]
- Normalize scores: [0,1].
- For each doc:
- sum relevance scores given to it by each system (use 0 if unretrieved), and
- multiply by number of systems that retrieved it (MNZ).
- Rank documents by score.
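A minimal CombMNZ sketch (C++; the data-structure choices are ours): each system contributes a map from document ID to its normalized score, unretrieved documents are simply absent, and documents are ranked by (score sum) × (number of systems that retrieved them).

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

std::vector<std::string>
CombMNZ(const std::vector<std::map<std::string, double>> &systems)
{
    std::map<std::string, double> sum;   // summed normalized scores per doc
    std::map<std::string, int>    hits;  // number of systems retrieving the doc
    for (const auto &sys : systems)
        for (const auto &kv : sys) {
            sum[kv.first]  += kv.second;
            hits[kv.first] += 1;
        }

    std::vector<std::pair<double, std::string>> ranked;
    for (const auto &kv : sum)
        ranked.push_back({kv.second * hits[kv.first], kv.first});
    std::sort(ranked.begin(), ranked.end(),
              [](const std::pair<double, std::string> &a,
                 const std::pair<double, std::string> &b) { return a.first > b.first; });

    std::vector<std::string> fused;
    for (const auto &r : ranked)
        fused.push_back(r.second);
    return fused;
}
```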
How well do they perform?
- Need *performance metric*.
- Need *benchmark data*.
Metric: Average Precision
Example: for a ranked list R N R N R N N R (R = relevant, N = non-relevant), the relevant documents appear at ranks 1, 3, 5, and 8, giving precisions 1/1, 2/3, 3/5, and 4/8 at those ranks.

\[
\text{AP} = \frac{1}{4}\left(\frac{1}{1} + \frac{2}{3} + \frac{3}{5} + \frac{4}{8}\right) \approx 0.6917
\]
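A small sketch of the metric (C++): like the example above, this averages the precision at each relevant retrieved document; TREC's average precision instead divides by the total number of relevant documents, retrieved or not.

```cpp
#include <cstddef>
#include <vector>

// isRel[i] says whether the document at rank i+1 of a ranked list is relevant.
double AveragePrecision(const std::vector<bool> &isRel)
{
    int relSeen = 0;
    double sum  = 0.0;
    for (std::size_t i = 0; i < isRel.size(); i++)
        if (isRel[i]) {
            relSeen += 1;
            sum += double(relSeen) / double(i + 1);   // precision at this rank
        }
    return (relSeen > 0) ? sum / relSeen : 0.0;
}
```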
Benchmark Data: TREC
- Annual *Text Retrieval Conference*.
- Millions of documents (AP, NYT, etc.)
- 50 queries.
- Dozens of retrieval engines.
- Output lists available.
- Relevance judgments available.
## Data Sets
<table>
<thead>
<tr>
<th>Data set</th>
<th>Number systems</th>
<th>Number queries</th>
<th>Number of docs</th>
</tr>
</thead>
<tbody>
<tr>
<td>TREC3</td>
<td>40</td>
<td>50</td>
<td>1000</td>
</tr>
<tr>
<td>TREC5</td>
<td>61</td>
<td>50</td>
<td>1000</td>
</tr>
<tr>
<td>Vogt</td>
<td>10</td>
<td>10</td>
<td>1000</td>
</tr>
<tr>
<td>TREC9</td>
<td>105</td>
<td>50</td>
<td>1000</td>
</tr>
</tbody>
</table>
CombX on TREC5 Data
TREC 5: Combining the top i systems in order.
Experiments
- Randomly choose \( n \) input systems.
- For each query:
- combine, trim, calculate avg precision.
- Calculate mean avg precision.
- Note best input system.
- Repeat (statistical significance).
CombMNZ on TREC5
TREC 5: avg precision over 200 random sets of systems.
Outline
- Introduce problem
- Characterize problem
- Survey current techniques
- Describe new approaches
- decision theory, social choice theory
- experiments with TREC data
- Upper bounds for metasearch
- Future work
New Approaches [Aslam, Montague]
- Analog to *decision theory*.
- Requires only rank information.
- Training required.
- Analog to *election strategies*.
- Requires only rank information.
- No training required.
## Classes of Metasearch Problems
<table>
<thead>
<tr>
<th>Relevance Scores</th>
<th>No Training Data</th>
<th>Training Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ranks Only</td>
<td>Borda, Condorcet, rCombMNZ</td>
<td>Bayes</td>
</tr>
<tr>
<td></td>
<td>CombMNZ</td>
<td>LC model</td>
</tr>
</tbody>
</table>
Decision Theory
- Consider two alternative explanations for some observed data.
- Medical example:
- Perform a set of blood tests.
- Does patient have disease or not?
- Optimal method for choosing among the explanations: *likelihood ratio test.*
[Neyman-Pearson Lemma]
Metasearch via Decision Theory
- Metasearch analogy:
- *Observed data* – document rank info over all systems.
- *Hypotheses* – document is relevant or not.
- Ratio test:
\[ O_{rel} = \frac{\Pr[rel \mid r_1, r_2, \ldots, r_n]}{\Pr[irr \mid r_1, r_2, \ldots, r_n]} \]
Bayesian Analysis
\[ P_{rel} = \Pr[rel \mid r_1, r_2, \ldots, r_n] \]
\[ P_{rel} = \frac{\Pr[r_1, r_2, \ldots, r_n \mid rel] \cdot \Pr[rel]}{\Pr[r_1, r_2, \ldots, r_n]} \]
\[ O_{rel} = \frac{\Pr[r_1, r_2, \ldots, r_n \mid rel] \cdot \Pr[rel]}{\Pr[r_1, r_2, \ldots, r_n \mid irr] \cdot \Pr[irr]} \]
\[ O_{rel} \cong \frac{\Pr[rel] \cdot \prod_i \Pr[r_i \mid rel]}{\Pr[irr] \cdot \prod_i \Pr[r_i \mid irr]} \]
\[ LO_{rel} \sim \sum_i \log \frac{\Pr[r_i \mid rel]}{\Pr[r_i \mid irr]} \]
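A minimal scoring sketch under the naïve-Bayes assumption (the probability tables are assumed to come from training data): the score of a document is the log-odds sum on the last line above, and documents are ranked by decreasing score.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// rankBucket[i] is the (bucketed) rank r_i that system i gave the document;
// prRel[i][r] and prIrr[i][r] are trained estimates of Pr[r_i = r | rel] and
// Pr[r_i = r | irr] for system i.
double BayesFuseScore(const std::vector<int> &rankBucket,
                      const std::vector<std::vector<double>> &prRel,
                      const std::vector<std::vector<double>> &prIrr)
{
    double logOdds = 0.0;
    for (std::size_t i = 0; i < rankBucket.size(); i++)
        logOdds += std::log(prRel[i][rankBucket[i]] / prIrr[i][rankBucket[i]]);
    return logOdds;
}
```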
Bayes on TREC3
TREC 3: avg precision over 200 random sets of systems.
- Bayes-fuse
- CombMNZ
- The best input system
Avg precision
Number of randomly chosen input systems
Bayes on TREC5
TREC 5: avg precision over 200 random sets of systems.

Bayes on TREC9
TREC 9: avg precision over 200 random sets of systems.
- Bayes-fuse
- CombMNZ
- The best input system
Number of randomly chosen input systems
Avg precision
Beautiful theory, but...
*In theory, there is no difference between theory and practice; in practice, there is.*
—variously attributed to Chuck Reid and Yogi Berra
**Issue:** *independence assumption*...
Naïve-Bayes Assumption
\[ O_{rel} = \frac{\Pr[r_1, r_2, \ldots, r_n \mid rel] \cdot \Pr[rel]}{\Pr[r_1, r_2, \ldots, r_n \mid irr] \cdot \Pr[irr]} \]
\[ O_{rel} \approx \frac{\Pr[rel] \cdot \prod_i \Pr[r_i \mid rel]}{\Pr[irr] \cdot \prod_i \Pr[r_i \mid irr]} \]
Bayes on Vogt Data
TREC 5 subset: avg precision over between 1 and 200 random sets of systems.
New Approaches [Aslam, Montague]
- Analog to decision theory.
- Requires only rank information.
- Training required.
- Analog to election strategies.
- Requires only rank information.
- No training required.
Classes of Metasearch Problems
<table>
<thead>
<tr>
<th></th>
<th>no training data</th>
<th>training data</th>
</tr>
</thead>
<tbody>
<tr>
<td>ranks only</td>
<td>Borda, Condorcet, rCombMNZ</td>
<td>Bayes</td>
</tr>
<tr>
<td>relevance scores</td>
<td>CombMNZ</td>
<td>LC model</td>
</tr>
</tbody>
</table>
Election Strategies
- Plurality vote.
- Approval vote.
- Run-off.
- Preferential rankings:
- instant run-off,
- Borda count (positional),
- Condorcet method (head-to-head).
Metasearch Analogy
- Documents are *candidates*.
- Systems are *voters* expressing preferential rankings among candidates.
Condorcet Voting
- Each ballot ranks all candidates.
- Simulate head-to-head run-off between each pair of candidates.
- Condorcet winner: candidate that beats all other candidates, head-to-head.
Condorcet Paradox
- Voter 1: A, B, C
- Voter 2: B, C, A
- Voter 3: C, A, B
- Cyclic preferences: cycle in Condorcet graph.
- Condorcet consistent path: Hamiltonian.
- For metasearch: any CC path will do.
Condorcet Consistent Path
Hamiltonian Path Proof
Base Case:
Inductive Step:
Condorcet-fuse: Sorting
- Insertion-sort suggested by proof.
- Quicksort too; $O(n \log n)$ comparisons.
- $n$ documents.
- Each comparison: $O(m)$.
- $m$ input systems.
- Total: $O(m \, n \log n)$.
- Need not compute entire graph.
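A minimal Condorcet-fuse sketch (the data structures are ours): the comparison asks whether a majority of systems rank one document above the other (unretrieved counts as ranked last), and the fused list comes from sorting with that comparison. Insertion sort is used here, as the Hamiltonian-path proof suggests, because cyclic preferences violate the strict-weak-ordering contract that std::sort assumes; the slides' quicksort brings the comparison count down to $O(n \log n)$.

```cpp
#include <climits>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// ranks[i] maps docID -> rank in system i (smaller is better); absent = unretrieved.
static bool Prefer(const std::string &a, const std::string &b,
                   const std::vector<std::map<std::string, int>> &ranks)
{
    int votesA = 0, votesB = 0;
    for (const auto &sys : ranks) {
        auto ia = sys.find(a), ib = sys.find(b);
        int ra = (ia == sys.end()) ? INT_MAX : ia->second;
        int rb = (ib == sys.end()) ? INT_MAX : ib->second;
        if (ra < rb) votesA++;
        else if (rb < ra) votesB++;
    }
    return votesA > votesB;     // head-to-head majority
}

// Sort docs in place so that (up to cycles) every doc beats the one after it.
void CondorcetFuse(std::vector<std::string> &docs,
                   const std::vector<std::map<std::string, int>> &ranks)
{
    for (std::size_t i = 1; i < docs.size(); i++)
        for (std::size_t j = i; j > 0 && Prefer(docs[j], docs[j-1], ranks); j--)
            std::swap(docs[j], docs[j-1]);
}
```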
Condorcet-fuse on TREC3
TREC 3: avg precision over 200 random sets of systems.
- CombMNZ
- CombMNZ (relevance scores simulated with ranks, unret: 0)
- Quicksort Condorcet
Avg precision vs. Number of randomly chosen input systems
Condorcet-fuse on TREC5
TREC 5: avg precision over 200 random sets of systems.
Condorcet-fuse on Vogt
Condorcet-fuse on TREC9
TREC 9: avg precision over 200 random sets of systems.
- CombMNZ
- CombMNZ (relevance scores simulated with ranks, unret: 0)
- Quicksort Condorcet
Number of randomly chosen input systems vs. Avg precision
Breaking Cycles
SCCs are properly ordered.
How are ties within an SCC broken? (Quicksort)
Outline
- Introduce problem
- Characterize problem
- Survey current techniques
- Describe new approaches
- decision theory, social choice theory
- experiments with TREC data
- Upper bounds for metasearch
- Future work
Upper Bounds on Metasearch
- How good can metasearch be?
- Are there fundamental limits that methods are approaching?
- Need an analog to running time lower bounds...
Upper Bounds on Metasearch
- Constrained oracle model:
- omniscient metasearch oracle,
- constraints placed on oracle that any reasonable metasearch technique must obey.
- What are “reasonable” constraints?
Naïve Constraint
- **Naïve constraint:**
- Oracle may only return docs from underlying lists.
- Oracle may return these docs in any order.
- Omniscient oracle will return relevant docs above irrelevant docs.
TREC5: Naïve Bound
(Figure: average precision (y-axis) versus number of randomly chosen input systems, 2 to 12 (x-axis), over 200 random sets; curves shown for the naïve bound, Condorcet-fuse, and the best input system.)
Pareto Constraint
- Pareto constraint:
- Oracle may only return docs from underlying lists.
- Oracle must respect *unanimous* will of underlying systems.
- Omniscient oracle will return relevant docs above irrelevant docs, subject to the above constraint.
TREC5: Pareto Bound
Majoritarian Constraint
- **Majoritarian constraint:**
- Oracle may only return docs from underlying lists.
- Oracle must respect *majority* will of underlying systems.
- Omniscient oracle will return relevant docs above irrelevant docs and break cycles optimally, subject to the above constraint.
TREC5: Majoritarian Bound
Upper Bounds: TREC3
Upper Bounds: Vogt
Upper Bounds: TREC9
TREC8: Avg Prec vs Feedback
TREC8: System Assessments vs TREC
Metasearch Engines
- Query multiple search engines.
- May or may not combine results.
Metasearch: Dogpile
Search engine: Looksmart found 117 results.
The query string sent was "chili -peppers"
1. The Red Hot Chili Peppers
Find photos, lyrics, updates, tour info, and news on alternative-funk-rock band the Red Hot Chili Peppers.
Looksmart category – Red Hot Chili Pepper
2. Red Hot Chili Peppers Audio and Video
Watch videos and listen to music by this rock/funk band.
Looksmart category – Red Hot Chili Peppers
3. Chili and Hot Sauces
Shop for mouth-burning chili sauces, Tabasco, hot salsas and other pepper-inspired sauces.
Looksmart category – Chili & Hot Sauces
4. Chili and Hot Sauces
Find chili and other hot sauce recipes, including salsas, dips, spices, and rubs, and visit the Pepper Fool.
Looksmart category – Chili & Hot Sauces
5. Red Hot Chili Peppers – Screens and Themes
Promotional screensaver for the funk-rock band features falling chili peppers.
LookSmart category – Red Hot Chili Peppers Multimedia
Search engine: GoTo.com found 10 or more results.
Metasearch: Metacrawler
Metasearch: Profusion
Characterizing Metasearch
- Three axes:
- common vs. disjoint database,
- relevance scores vs. ranks,
- training data vs. no training data.
Axis 1: DB Overlap
- High overlap
- data fusion.
- Low overlap
- collection fusion (distributed retrieval).
- Very different techniques for each...
- This work: data fusion.
CombMNZ on TREC3
TREC 3: avg precision over 200 random sets of systems.
CombMNZ on Vogt
TREC 5 subset: avg precision over between 1 and 200 random sets of systems.
CombMNZ on TREC9
TREC 9: avg precision over 200 random sets of systems (curves: CombSUM, CombMNZ, and the best input system), plotted against the number of randomly chosen input systems.
Borda Count
- Consider an \( n \) candidate election.
- For each ballot:
- assign \( n \) points to top candidate,
- assign \( n-1 \) points to next candidate,
- ...
- Rank candidates by point sum (a sketch appears below).
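A minimal Python sketch of Borda-count fusion for metasearch, where each input ranking is treated as a ballot. Handling of candidates a ballot does not rank is not specified on the slide; splitting the leftover points evenly among them is one common convention and is only an assumption here.

```python
from collections import defaultdict

def borda_fuse(ranked_lists):
    """Rank documents by total Borda points: on an n-candidate ballot the top
    document receives n points, the next n-1, and so on."""
    candidates = {doc for lst in ranked_lists for doc in lst}
    n = len(candidates)
    points = defaultdict(float)
    for ballot in ranked_lists:
        for i, doc in enumerate(ballot):
            points[doc] += n - i  # n points for the top document, n-1 for the next, ...
        unranked = candidates - set(ballot)
        if unranked:
            # Assumption: distribute the remaining points (for positions
            # len(ballot)+1 .. n) evenly over the unranked candidates.
            leftover = sum(range(1, n - len(ballot) + 1))
            for doc in unranked:
                points[doc] += leftover / len(unranked)
    return sorted(points, key=points.get, reverse=True)
```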
Borda Count: Election 2000
- Ideological order: Nader, Gore, Bush.
- Ideological voting:
- Nader voter: Nader, Gore, Bush.
- Gore voter: Gore, Bush, Nader or Gore, Nader, Bush (the split between these two orderings is either 50/50 or 100/0, the two scenarios in the table below).
Election 2000: Ideological Florida Voting
<table>
<thead>
<tr>
<th></th>
<th>Gore</th>
<th>Bush</th>
<th>Nader</th>
</tr>
</thead>
<tbody>
<tr>
<td>50/50</td>
<td>14,734,379</td>
<td>13,185,542</td>
<td>7,560,864</td>
</tr>
<tr>
<td>100/0</td>
<td>14,734,379</td>
<td>14,639,267</td>
<td>6,107,138</td>
</tr>
</tbody>
</table>
Gore Wins
Borda Count: Election 2000
- Ideological order: Nader, Gore, Bush.
- Manipulative voting:
- Gore voter: Gore, Nader, Bush.
- Nader voter: Nader, Gore, Bush.
Election 2000: Manipulative Florida Voting
<table>
<thead>
<tr>
<th></th>
<th>Gore</th>
<th>Bush</th>
<th>Nader</th>
</tr>
</thead>
<tbody>
<tr>
<td>Votes</td>
<td>11,825,203</td>
<td>11,731,816</td>
<td>11,923,765</td>
</tr>
</tbody>
</table>
Nader Wins
Future Work
- Bayes
- approximate dependence.
- Condorcet
- weighting, dependence.
- Upper bounds
- other constraints.
- Meta-retrieval
- Metasearch is approaching fundamental limits.
- Need to incorporate user feedback: learning...
Comparative Evaluation of Converged Service-Oriented Architectures
Davy Preuveneers, Julien Pauty, Dimitri Van Landuyt,
Yolande Berbers, Wouter Joosen
Department of Computer Science
Katholieke Universiteit Leuven
Celestijnenlaan 200A, Leuven, Belgium
{firstname.lastname}@cs.kuleuven.be
Abstract
In this paper we evaluate and compare five commercial off-the-shelf platforms for building converged service-oriented architectures in an ambient computing environment. To this end, we have identified several categories of requirements that support the always-connected digital lifestyle of the mobile user. Key requirements are providing convergence – joining the worlds of web services and telecom services – and offering context awareness in order to tailor services better to the user’s environment, preferences and needs. Our evaluation shows that currently none of the investigated architectures completely fulfills the two aforementioned requirements in order to deploy a variety of future converged mobile and context-aware services. However, the resulting comparison can be used as a practical guideline for decision-makers to select a platform to build a service-oriented architecture adjusted to the specific requirements within their application domain.
1 Introduction
A service-oriented architecture (SOA) represents the current state of the art in software architecture for the rapid deployment of new services. It enables the creation of new services or applications by connecting together existing services and proposes functions to manage the service lifecycle, such as the deployment and updating of services. The main motivation behind SOA for a company is to create a business-aligned architecture to better react to changing customers’ needs. If the market changes, new services can be created by reusing existing services and by developing new services, where needed. An SOA also enables a company to leverage previous infrastructure investments, by exposing legacy services as traditional services in the architecture: an SOA increases code reuse and modularity.
The goal of this paper is to paint an accurate and up-to-date picture of the current state of the art in service-orientation by comparing the most representative commercial solutions in this field: the BEA WebLogic SIP Server [3], the jNetX Open Convergent Feature Server [10], the IBM Service Provider Delivery Environment [7], the Cape Clear Enterprise Service Bus [4], and the Microsoft Connected Services Framework [12]. The criteria for inclusion were twofold: (1) factors such as commercial relevancy, availability of technical information, use of a standardized component model, and many more; (2) adequate and sufficiently wide overview of the entire spectrum of established service-oriented solutions. The motivation of this study is to assist the service provider to choose the best platform for building service-oriented architectures for offering converged web services and telecom services adapted to the always-connected digital and mobile lifestyle of the future.
The evaluation of these commercial off-the-shelf platforms was carried out in terms of several key requirements that were identified in a concrete real-life usage scenario in the targeted advertising and publishing domain. In summary, this scenario is as follows: Instead of building a standard news website, a news publisher decides to base his business on the service-oriented paradigm. This allows him to offer the same service on a wide variety of devices (web server, cellphone, TV, giant billboard in the city), and more flexibility to manage and integrate with other services. The news service is combined with a service that allows readers to enrich news articles with comments or new material, or to start an online discussion with others. Targeted publishing is made possible by showing advertisements based on the very concrete location or context of the user.
In Section 2 we provide a brief overview of the evaluated service platforms and describe how they are used. Section 3 discusses the requirements that an SOA must fulfill to deliver converged services to mobile users. We evaluate and compare the architectures in Section 4 before concluding in Section 5.
2 Main characteristics of the architectures
This section briefly describes the service architectures under investigation in this paper. The list of architectures is not meant to be exhaustive. Instead, we wanted to include architectures relying on different technologies (such as JAIN SLEE, SIP servlets, HTTP servlets, web services, Enterprise Java Beans) in order to show the different approaches currently followed in the service oriented world.
The BEA WebLogic SIP Server: With the BEA WebLogic SIP Server [3], developers can deploy HTTP servlets, SIP servlets, EJBs on a single J2EE platform. Telecommunication services are executed inside the SIP servlet container, IT services in the HTTP servlet or EJB container.
The Cape Clear Enterprise Service Bus: The architecture provided by Cape Clear [4] is based on web services and open standards. This architecture relies on the Eclipse development environment for service creation and provides a tool to model business process graphically for non-technical users to create new services.
The jNetX Open Convergent Feature Server: The jNetX OCFS [10] is a service architecture implementing the JAIN SLEE specification [14, 1]. JAIN SLEE is the abbreviation for the Java APIs for Integrated Networks Service Logic Execution Environment, a high throughput, low latency event processing application environment.
The IBM Service Provider Delivery Environment: IBM SPDE [7] combines the IBM WebSphere MQ Integrator (a message broker that routes and transforms messages coming from one service in order to adapt them to another service) and the IBM WebSphere Everyplace Server for Telecom, which corresponds to the well-known IBM WebSphere J2EE application server and a telecommunication toolkit to access telecommunication services.
3 Requirements
The growing presence of mobile devices, such as laptops, PDAs and smartphones, along with advances in wireless network communication technologies, promises to change drastically the human-computer interaction. Mobile and ubiquitous computing are new emerging computing paradigms that change the way applications are designed, implemented and consumed. They create new opportunities for making the applications more intelligent and supportive to the user in a service-oriented world.
Therefore, we identified several requirements with a setting as outlined in the usage scenario in mind and compare five service architectures covering the telecommunication and web service domains with respect to these requirements. They are grouped in the following categories: convergence between IT and telecom, service management, context awareness, compliance with industry standards (e.g. w.r.t. identity management), and non-functional requirements such as scalability and availability.
3.1 Converged service architecture
A converged architecture enables the creation of services that emerge through the combination of telecommunications services, such as VoIP conversations, and web services, such as online bookstores. The success of a converged architecture can be measured by its support for interoperability. Interoperability between IT and telecommunication-oriented architectures is a complex issue. Both propose similar though domain specific standards to which their services adhere in order to ensure interoperability within their respective domains. Another issue is the difference in architectural style of these service-oriented architectures. An IT web service is essentially transaction-based and relies on an architecture that often builds upon the functionality of an enterprise service bus to provide message brokering, routing, data translation and transformation. A telecommunication service architecture needs to deal with a multitude of point-to-point connections and short-lived events that must be generated, propagated and processed with a low latency [13] to guarantee a minimal quality of service level. Integrating both kinds of service is hard without sacrificing the performance of the telecommunication services.
We distinguish two kinds of bridging mechanisms: (1) loose coupling between IT and telecommunication services, such as a gateway (i.e. a one-way adapter which exposes a telecommunication service as a web service, or a web service as a telecommunication service); (2) mechanisms which create a tight coupling between IT and telecommunication services. The first bridging mechanism complies with service orientation.
Scenario: The always-connected digital lifestyle of the mobile user demands new ways for service provision. Next generation communication networks enable services that go beyond wireless and wireline voice communication. A successful news provider will deliver on-demand content and services by both mobile and fixed access.
**Discussion and evaluation:** Some of the architectures under investigation provide a loosely coupled bridging mechanism by means of an OSA/Parlay gateway. An OSA/Parlay gateway exposes telecommunication services as traditional web services. Despite being service-oriented, such a bridging mechanism is often limited because only those functionalities of telecommunication services are exposed whose semantics are compatible with those of web services, typically configuration functionalities. IBM SPDE and Microsoft CSF integrate an OSA/Parlay gateway in their architecture. BEA WebLogic SIP Server and jNetX/OCFS propose a bridging mechanism that tightly couples IT and telecommunication services. jNetX/OCFS is a JAIN SLEE platform, which proposes a bridging mechanism with a J2EE container that enables an Enterprise Java Bean (EJB) to trigger an event in the JAIN SLEE container and that also allows a JAIN SLEE component to invoke an EJB. Such a bridging is done at the Java code level, creating a strong coupling between services. BEA WebLogic SIP Server is an HTTP and SIP servlets container. BEA's bridging mechanism relies on a shared session mechanism, meaning that the internals of services can be exposed to other services via this shared session. The Cape Clear ESB does not provide any mechanism for bridging the gap between IT and telecom. The following table presents the general ranking scale for this requirement:
<table>
<thead>
<tr>
<th>Ranking</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>++</td>
<td>Bridging mechanism that complies with service orientation</td>
</tr>
<tr>
<td>+</td>
<td>Bridging mechanism that does not comply with service orientation</td>
</tr>
<tr>
<td>0</td>
<td>No bridging mechanism</td>
</tr>
</tbody>
</table>
### 3.2 Support for service life-cycle management
Life-cycle management is a mandatory feature for any business-aligned service architecture. This requirement involves support for the deployment and updating of services (offline and online), service dependency management, service dispensal, maintenance and testing. As most of these classic life-cycle management requirements are fulfilled by every platform that supports service-oriented architectures, and as time to market and a quick response to changing needs are crucial success factors, we specifically focus on the ease of service management and creation by composition of converged services. Indeed, a service-oriented architecture enables the creation of a service by composing existing services. SOA providers have created graphical tools that enable non-technical users to compose and adapt services. Such tools should enable companies to create services more quickly while reusing existing services and assets. In order to evaluate the architecture from the service creation point of view, we categorize service managers based on their level of technical expertise:
- **Non-experts:** non-experts do not have any kind of programming experience or telecommunication knowledge, but they can design services (e.g. graphically) and test them;
- **Integrators:** integrators primarily participate in development related to the integration with other elements, as well as in network emulation testing;
- **Developers:** professional developers that have advanced programming skills.
**Scenario:** A news publisher should not need to be familiar with telecommunication systems to deliver news articles to clients, nor should he require programming skills to add new services. A simple button click should be enough to link a poll service to a news article service.
**Discussion and evaluation:** Among the contenders, only IBM SPDE and Cape Clear explicitly provide tools for non-technical people to graphically create and design services. The graphical tools help to model business processes, which does not require high-level programming skills at all. jNetX also offers graphical tools, in the form of a variety of graphical workspaces, reusable components, and tools for all the steps in the service delivery process (design, development, testing and integration). Despite being graphical, these tools clearly require telecommunication knowledge and programming skills to create an application. The BEA WebLogic SIP Server and Microsoft CSF architectures only provide tools for experienced programmers.
<table>
<thead>
<tr>
<th>Ranking</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>++</td>
<td>Full life-cycle management, minimal technical expertise</td>
</tr>
<tr>
<td>+</td>
<td>Full life-cycle management, highly trained professional</td>
</tr>
<tr>
<td>0</td>
<td>Minimal life-cycle management without support for service creation or composition</td>
</tr>
</tbody>
</table>
### 3.3 Context awareness
Context awareness is particularly important to propose relevant services to mobile users. By storing and analyzing context data, such as the user’s location and his preferences, services can be selected and adapted according to the current situation of the user. Therefore, the architecture must provide mechanisms to capture and store user context, either as part of the architecture or as an enabling service. Indeed, context data can be extracted from different sources such as the network infrastructure (location, cell-id), from the user terminal (battery level, preferences), from a presence server (online, busy, away) [15]. The architecture must also make this context available to service developers, respecting any existing privacy regulations.
**Scenario:** Service provision can be localized, personalized to the preferences of the user or adapted to the capabilities of the end-user device. Dealing with this contextual information is key to deliver services in a mobile network environment to a heterogeneous population.
**Discussion and evaluation:** In general, the architectures we investigated have little support for context awareness. Most of them support presence, location and user profiles, except Cape Clear which has no support for context awareness at all. However, none of them supports context reasoning [16, 20], which is an essential feature to go beyond pure location- or presence-based services. Context reasoning is particularly important in the service domain, because the way services may use various kinds of derived context data is not known in advance.
<table>
<thead>
<tr>
<th>Ranking</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>++</td>
<td>Advanced context-aware system, including context reasoning, distribution and sharing</td>
</tr>
<tr>
<td>+</td>
<td>Basic context awareness (location and/or user profile)</td>
</tr>
<tr>
<td>0</td>
<td>No context awareness</td>
</tr>
</tbody>
</table>
### 3.4 Federated identity management and policy enforcement
A service architecture targeting mobile users typically has a large number of concurrent users. The management of multiple versions of user identities across multiple services makes policy enforcement, without proper user administration and identity management, a daunting task. For this reason, the architecture must provide access management software (user management together with authentication techniques) in order to centrally control user access and enable single sign-on (SSO) through a policy server that grants authorization rights to each application.
Policy enforcement is a way to check an individual, a computing system or another service before allowing access to a particular end-user (business) service. Policies may define access restrictions (access control policies), privacy guarantees, billing mechanisms or quality of service provision. The architecture must be able to enforce these policies at runtime. For example, single sign-on through a policy server may grant end-users authorization rights to each service deployed on the architecture without the user going through a cumbersome login/password authentication process for each application independently. Policies of interest to a converged service architecture for mobile users include:
- User authentication and access control (authorization) to services;
- Privacy policies, for services that need access to private user information and sensitive context data;
- Service Level Agreements (SLAs), guaranteed quality of service provision, e.g. for communication services;
- Real-time billing of service use.
**Scenario:** For customized service delivery, the identity of the client plays an important role. Policies and federated identity management help the news provider to centralize user management and personalize content across all its news services. Single sign-on is particularly interesting for mobile services, because the user terminal has limited interaction capabilities, which can make the authentication even more tedious.
**Discussion and evaluation:** For this requirement we do not define a ranking scale, but we list for each architecture the relevant supported standards and the federated identity management system that is used. Many architectures comply with industry standards by using the WS-Policy [8], WS-Federation [6] and derivative web service standards, as well as the Liberty [11] and other single sign-on standards.
### 3.5 Non-functional requirements
Since the targeted service-oriented architecture is to be accessed by a large number of users, scalability is a key requirement for the architecture. Some aspects related to scalability of an SOA include the management of Service Level Agreements, load-balancing of services and dynamic routing of service requests.
**Performance** monitoring is crucial in an SOA. Previous work [5] has shown that some SOA implementations rely on large (on the order of megabytes) and complex XML message formats, causing insufficient throughput and poor performance and scalability. Since an SOA is a set of very loosely coupled and reusable components, it is important to trace interactions when degradation of performance in one service breaks other services within the SOA.
As most services are transported over HTTP, reliability in a service-oriented architecture includes, among other things, reliable messaging. Reliable message delivery means the ability to ensure delivery of a message with the desired level of quality of service: (1) message sent at least once (guaranteed delivery), (2) message sent at most once (guaranteed duplicate elimination) or (3) message sent exactly once (guaranteed delivery and duplicate elimination). There are several specifications that are supposed to address this: SOA-Reliability for HTTP [21], HTTPR [18], Web Services Reliable Messaging [2], Web Services Reliability [9].
**Scenario:** Quality of Service (QoS) is a key concern for all service providers. A news provider must ensure that the client gets what he paid for. Scalability to thousands of users and guaranteed message delivery without delays will be crucial for the success of the news publisher.
Discussion and evaluation: Most of today’s SOA projects depend on an enterprise service bus (ESB) to provide message reliability, exception handling, and publish/subscribe capabilities. FCAPS (fault-management, configuration, accounting, performance, and security) is the ISO Telecommunications Management Network model and framework for network management. Vertical scalability refers to the fact that a node in the system can be upgraded by adding more resources to process more transactions. Horizontal scalability means that more nodes can be added to the system to better handle the growing amount of transactions. For example, the JAIN SLEE specification has been designed with scalability and performance in mind:
- **Vertical**: Vertical scalability enables parallel service logic execution utilizing SLEE multi-threading that allows splitting processing over different CPU units.
- **Horizontal**: Distributing software horizontally over multiple hosts ensures that a single fault cannot bring the whole system down. It provides network and gateway load balancing.
For these requirements, we do not provide a ranking scale. Indeed, evaluation of these requirements is usually very difficult, because evaluating these architectures empirically is practically impossible. Therefore, for each architecture we provide a summary of the main points for these requirements as outlined in the available documentation.
4 Evaluation
In the preceding sections we introduced several requirements for an architecture for mobile services and studied the different architectures with respect to these requirements. In this section, we provide a side-by-side comparison of these architectures. Table 1 presents the evaluation.
<table>
<thead>
<tr>
<th>Service architecture</th>
<th>Converged services</th>
<th>Life-cycle management</th>
<th>Context-awareness</th>
<th>Standard compliance</th>
<th>Non-functional requirements</th>
</tr>
</thead>
<tbody>
<tr>
<td>BEA WebLogic SIP Server</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>SLA, Billing, QoS</td>
<td>Clustering, Load balancing</td>
</tr>
<tr>
<td>jNetX/OCFS</td>
<td>+</td>
<td>+</td>
<td>-</td>
<td>Billing</td>
<td>Vertical and horizontal scalability</td>
</tr>
<tr>
<td>Cape Clear ESB</td>
<td>0</td>
<td>++</td>
<td>0</td>
<td>WS-Policy, SSO</td>
<td>Load balancing and high availability</td>
</tr>
<tr>
<td>IBM SPDE</td>
<td>++</td>
<td>++</td>
<td>+</td>
<td>WS-Policy, WS-Federation, Liberty</td>
<td>Load balancing and replication</td>
</tr>
<tr>
<td>Microsoft CSF</td>
<td>++</td>
<td>+</td>
<td>+</td>
<td>Privacy, Billing, QoS, Biztalk SSO, WS-Federation</td>
<td>Clustering, FCAPS</td>
</tr>
</tbody>
</table>
Table 1. Comparative evaluation of commercial off-the-shelf service-oriented architectures
We can place several side remarks next to this table. Firstly, only IBM SPDE and Microsoft CSF provide a bridging mechanism that complies with service orientation. Nevertheless, this mechanism is limited to telecommunication services that are semantically compatible with web services. Secondly, life-cycle management is well implemented in these architectures, with a majority of architectures providing functionality for updating services at runtime. Thirdly, the WS-Policy is the standard that seems to be adopted by the majority of the architectures that are based on web services, e.g. IBM SPDE, Cape Clear.
Although none of the service architectures under investigation in this paper received the lowest score for life-cycle management, we have evaluated other service architectures with only very limited support in this area. Due to the limited scope and intentions of some of these architectures, a comparison with the full-fledged architectures would be unfair. Also due to space constraints, we have left out a description of the capabilities of these architectures in the previous discussions and in the final evaluation.
An architecture for mobile services needs a bridging mechanism to leverage IT and telecommunication services, and a context-aware system to propose relevant services to mobile users. If we look at the table, no single architecture provides both a bridging mechanism that complies with service orientation and an advanced context-awareness system. The field where these commercial systems lack most support is context awareness: often only very basic context awareness functionality is introduced. This suggests that interesting research could be done to create such an architecture. One architecture of interest from the research community in this area is the one provided by the IST Amigo [17] project, which conducts research in Ambient Intelligence for the networked home environment. Though it lacks support for converged services, it has developed an open, standardized, interoperable middleware and attractive context-aware user services. It also supports interoperability between equipment and services within the networked home environment by using standard technology when possible.
The evaluated architectures are rather different. Therefore, we cannot use this table to compare the architectures in order to claim that one particular architecture is the best for all purposes. However, the table can be used to compare similar architectures, particular features, or the standards compliance of SOA providers such as IBM SPDE and Microsoft CSF. For example, according to the evaluation table, we can state that IBM SPDE is more advanced than Microsoft CSF in terms of support for full life-cycle management. Furthermore, these architectures are not always direct competitors: although we presented them independently, in practice architectures providing different functionalities can usually be combined. An example is the BEA WebLogic SIP Server with a JAIN SLEE server, or IBM SPDE with Ubiquity A/S [19], another SIP-based application server.
5 Conclusion
In this paper we have studied and evaluated several commercial off-the-shelf platforms for building service-oriented architectures for converged services. We have chosen the architectures outlined in this paper so that most of the current approaches to service architecture in the IT and telecommunications worlds are represented. We have discussed architectures based on SIP servlets, JAIN SLEE, HTTP servlets, EJB and web services. To evaluate the architectures we have identified several categories of requirements: (1) convergence between IT and telecom; (2) service management; (3) context awareness; (4) compliance with industry standards for policy enforcement and federated identity management; and (5) support for non-functional requirements, such as availability and scalability. Two major requirements for a service architecture that supports the always-connected digital lifestyle of the mobile user are “convergence” and “context awareness”. According to the evaluation, currently no architecture optimally fulfills these two requirements. Our future work will be dedicated to the study, implementation and integration of such support in a new or existing architecture.
References
21.4 SQL
We now overview SQL in the context of the Books database. Though LINQ to SQL and the Visual C# IDE hide the SQL used to manipulate databases, it is nevertheless important to understand SQL basics. Knowing the types of operations you can perform will help you develop more advanced database-intensive applications.
Figure 21.10 lists some common SQL keywords used to form complete SQL statements—we discuss these keywords in the next several subsections. Other SQL keywords exist, but they are beyond the scope of this text.
<table>
<thead>
<tr>
<th>SQL keyword</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SELECT</td>
<td>Retrieves data from one or more tables.</td>
</tr>
<tr>
<td>FROM</td>
<td>Specifies the tables involved in a query. Required in every query.</td>
</tr>
<tr>
<td>WHERE</td>
<td>Specifies optional criteria for selection that determine the rows to be retrieved, deleted or updated.</td>
</tr>
<tr>
<td>ORDER BY</td>
<td>Specifies optional criteria for ordering rows (e.g., ascending, descending).</td>
</tr>
<tr>
<td>INNER JOIN</td>
<td>Specifies optional operator for merging rows from multiple tables.</td>
</tr>
<tr>
<td>INSERT</td>
<td>Inserts rows in a specified table.</td>
</tr>
<tr>
<td>UPDATE</td>
<td>Updates rows in a specified table.</td>
</tr>
<tr>
<td>DELETE</td>
<td>Deletes rows from a specified table.</td>
</tr>
</tbody>
</table>
Fig. 21.10 | Common SQL keywords.
21.4.1 Basic SELECT Query
Let us consider several SQL queries that retrieve information from database Books. A SQL query "selects" rows and columns from one or more tables in a database. Such selections are performed by queries with the SELECT keyword. The basic form of a SELECT query is
**SELECT * FROM tableName**
in which the asterisk (*) indicates that all the columns from the `tableName` table should be retrieved. For example, to retrieve all the data in the `Authors` table, use
**SELECT * FROM Authors**
Note that the rows of the `Authors` table are not guaranteed to be returned in any particular order. You will learn how to specify criteria for sorting rows in Section 21.4.3.
Most programs do not require all the data in a table—in fact, selecting all the data from a large table is discouraged, as it can cause performance problems. To retrieve only specific columns from a table, replace the asterisk (*) with a comma-separated list of the column names. For example, to retrieve only the columns `AuthorID` and `LastName` for all the rows in the `Authors` table, use the query
**SELECT AuthorID, LastName FROM Authors**
This query returns only the data listed in Fig. 21.11.

#### 21.4.2 WHERE Clause
When users search a database for rows that satisfy certain selection criteria (formally called predicates), only rows that satisfy the selection criteria are selected. SQL uses the optional **WHERE clause** in a query to specify the selection criteria for the query. The basic form of a query with selection criteria is
**SELECT columnName1, columnName2, ... FROM tableName WHERE criteria**
For example, to select the `BookTitle`, `EditionNumber` and `Copyright` columns from table `Titles` for which the `Copyright` date is more recent than 2007, use the query
**SELECT BookTitle, EditionNumber, Copyright FROM Titles WHERE Copyright > '2007'**
Note that string literals in SQL are delimited by single quotes instead of double quotes as in C#. In SQL, double quotes are used around table and column names that would otherwise be invalid—names containing SQL keywords, spaces, or other punctuation characters. Figure 21.12 shows the result of the preceding query.
The WHERE-clause criteria can contain the comparison operators <, >, <=, >=, = (equality), <> (inequality) and LIKE, as well as the logical operators AND, OR and NOT (discussed in Section 21.4.6). Operator LIKE is used for pattern matching with wildcard characters percent (%) and underscore (_). Pattern matching allows SQL to search for strings that match a given pattern.
A pattern that contains a percent character (%) searches for strings that have zero or more characters at the percent character’s position in the pattern. For example, the following query locates the rows of all the authors whose last names start with the letter D:
```
SELECT AuthorID, FirstName, LastName
FROM Authors
WHERE LastName LIKE 'D%'
```
The preceding query selects the two rows shown in Fig. 21.13, because two of the four authors in our database have a last name starting with the letter D (followed by zero or more characters). The % in the WHERE clause’s LIKE pattern indicates that any number of characters can appear after the letter D in the LastName column. Note that the pattern string is surrounded by single-quote characters.
An underscore (_) in the pattern string indicates a single wildcard character at that position in the pattern. For example, the following query locates the rows of all the authors whose last names start with any character (specified by _), followed by the letter y, followed by any number of additional characters (specified by %):
```
SELECT AuthorID, FirstName, LastName
FROM Authors
WHERE LastName LIKE '_y%'
```
The preceding query produces the row shown in Fig. 21.14, because only one author in our database has a last name that contains the letter y as its second letter.
21.4.3 ORDER BY Clause
The rows in the result of a query can be sorted into ascending or descending order by using the optional **ORDER BY clause**. The basic form of a query with an **ORDER BY** clause is
```
SELECT columnName1, columnName2, ... FROM tableName ORDER BY column ASC
SELECT columnName1, columnName2, ... FROM tableName ORDER BY column DESC
```
where **ASC** specifies ascending order (lowest to highest), **DESC** specifies descending order (highest to lowest) and **column** specifies the column on which the sort is based. For example, to obtain the list of authors in ascending order by last name (Fig. 21.15), use the query
```
SELECT AuthorID, FirstName, LastName
FROM Authors
ORDER BY LastName ASC
```
The default sorting order is ascending, so **ASC** is optional in the preceding query. To obtain the same list of authors in descending order by last name (Fig. 21.16), use
```
SELECT AuthorID, FirstName, LastName
FROM Authors
ORDER BY LastName DESC
```
 The only author from the Authors table whose last name contains y as the second letter.
 Authors from table Authors in ascending order by LastName.
 Authors from table Authors in descending order by LastName.
Multiple columns can be used for sorting with an ORDER BY clause of the form
ORDER BY column1 sortingOrder, column2 sortingOrder, ...
where sortingOrder is either ASC or DESC. Note that the sortingOrder does not have to be identical for each column. For example, the query
```sql
SELECT BookTitle, EditionNumber, Copyright
FROM Titles
ORDER BY Copyright DESC, BookTitle ASC
```
returns the rows of the Titles table sorted first in descending order by copyright date, then in ascending order by title (Fig. 21.17). This means that rows with higher Copyright values are returned before rows with lower Copyright values, and any rows that have the same Copyright values are sorted in ascending order by title.
The WHERE and ORDER BY clauses can be combined. If used, ORDER BY must be the last clause in the query. For example, the query
```sql
SELECT ISBN, BookTitle, EditionNumber, Copyright
FROM Titles
WHERE BookTitle LIKE '%How to Program'
ORDER BY BookTitle ASC
```
returns the ISBN, BookTitle, EditionNumber and Copyright of each book in the Titles table that has a BookTitle ending with “How to Program” and sorts them in ascending order by BookTitle. The query results are shown in Fig. 21.18.
<table>
<thead>
<tr>
<th>BookTitle</th>
<th>EditionNumber</th>
<th>Copyright</th>
</tr>
</thead>
<tbody>
<tr>
<td>C++ How to Program</td>
<td>6</td>
<td>2008</td>
</tr>
<tr>
<td>Internet & World Wide Web How to Program</td>
<td>4</td>
<td>2008</td>
</tr>
<tr>
<td>C How to Program</td>
<td>5</td>
<td>2007</td>
</tr>
</tbody>
</table>
**Fig. 21.17** | Data from Titles in descending order by Copyright and ascending order by BookTitle.
<table>
<thead>
<tr>
<th>ISBN</th>
<th>BookTitle</th>
<th>EditionNumber</th>
<th>Copyright</th>
</tr>
</thead>
<tbody>
<tr>
<td>0132404168</td>
<td>C How to Program</td>
<td>5</td>
<td>2007</td>
</tr>
<tr>
<td>0136152503</td>
<td>C++ How to Program</td>
<td>6</td>
<td>2008</td>
</tr>
<tr>
<td>0131752421</td>
<td>Internet & World Wide Web How to Program</td>
<td>4</td>
<td>2008</td>
</tr>
</tbody>
</table>
**Fig. 21.18** | Books from table Titles whose BookTitles end with How to Program in ascending order by BookTitle. (Part 1 of 2.)
### 21.4.4 Merging Data from Multiple Tables: INNER JOIN
Database designers typically normalize databases—i.e., split related data into separate tables to ensure that a database does not store redundant data. For example, the Books database has tables Authors and Titles. We use an AuthorISBN table to store “links” between authors and titles. If we did not separate this information into individual tables, we would need to include author information with each entry in the Titles table. This would result in the database storing duplicate author information for authors who have written more than one book.
Redundant data in a database increases the likelihood of errors when manipulating the data. Figure 21.1 contains redundant information between the Department and Location columns—for each department number, there is a single location and vice versa. This relationship is not enforced by the table’s structure. Normalization eliminates redundant data and allows the DBMS to prevent problems that could arise if queries depend on the one-to-one mapping between Department and Location.
Often, it is desirable to merge data from multiple tables into a single result—this is referred to as joining the tables. There are several kinds of joins, but the most common one is specified by an INNER JOIN operator in the query. An INNER JOIN merges rows from two tables by testing for matching values in a column that is common to the tables (though the column names can differ among the tables). The basic form of an INNER JOIN is:
```
SELECT columnName1, columnName2, ...
FROM table1 INNER JOIN table2
ON table1.columnName = table2.columnName
```
The ON clause of the INNER JOIN specifies the columns from each table that are compared to determine which rows are merged. For example, the following query produces a list of authors accompanied by the ISBNs for books written by each author:
```
SELECT FirstName, LastName, ISBN
FROM Authors INNER JOIN AuthorISBN
ON Authors.AuthorID = AuthorISBN.AuthorID
ORDER BY LastName, FirstName
```
The query combines the FirstName and LastName columns from table Authors and the ISBN column from table AuthorISBN, sorting the results in ascending order by LastName and FirstName. Note the use of the syntax `tableName.columnName` in the ON clause. This syntax (called a qualified name) specifies the columns from each table that should be compared to join the tables. The `tableName.` prefix is required if the columns have the same name in both tables. The same syntax can be used in any query to distinguish columns that have the same name in different tables.
**Common Programming Error 21.4**
In a SQL query, failure to qualify names for columns that have the same name in two or more tables is an error.
As always, the query can contain an ORDER BY clause. Figure 21.19 depicts the results of the preceding query, ordered by LastName and FirstName.
<table>
<thead>
<tr>
<th>FirstName</th>
<th>LastName</th>
<th>ISBN</th>
</tr>
</thead>
<tbody>
<tr>
<td>Greg</td>
<td>Ayer</td>
<td>0136053033</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0131752421</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0132222205</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0132404168</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0136053033</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>013605305X</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0136151574</td>
</tr>
<tr>
<td>Harvey</td>
<td>Deitel</td>
<td>0136152503</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0131752421</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0132222205</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0132404168</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0136053033</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>013605305X</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>013605322X</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0136151574</td>
</tr>
<tr>
<td>Paul</td>
<td>Deitel</td>
<td>0136152503</td>
</tr>
<tr>
<td>Dan</td>
<td>Quirk</td>
<td>0136151574</td>
</tr>
</tbody>
</table>
**Fig. 21.19** | Authors and ISBNs for their books in ascending order by LastName and FirstName.
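For instance, if the join above also retrieved the AuthorID value, the reference would have to be qualified, because AuthorID appears in both tables. A hedged variant of the query:

```sql
SELECT Authors.AuthorID, FirstName, LastName, ISBN
FROM Authors INNER JOIN AuthorISBN
   ON Authors.AuthorID = AuthorISBN.AuthorID
ORDER BY LastName, FirstName
```

Writing only `AuthorID` in the SELECT list would be ambiguous, since the database could not tell which table's column is meant.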
### 21.4.5 INSERT Statement
The **INSERT statement** inserts a row into a table. The basic form of this statement is
```
INSERT INTO tableName (columnName1, columnName2, ..., columnNameN)
VALUES (value1, value2, ..., valueN)
```
where `tableName` is the table in which to insert the row. The `tableName` is followed by a comma-separated list of column names in parentheses. The list of column names is followed by the SQL keyword **VALUES** and a comma-separated list of values in parentheses. The values specified here must match up with the columns specified after the table name in both order and type (e.g., if `columnName1` is supposed to be the `FirstName` column, then `value1` should be a string in single quotes representing the first name). Although the
list of column names is not required if the INSERT operation specifies a value for every table column in the correct order, you should always explicitly list the columns when inserting rows—if the order of the columns in the table changes, using only VALUES may cause an error. The INSERT statement
```
INSERT INTO Authors ( FirstName, LastName )
VALUES ( 'Sue', 'Smith' )
```
inserts a row into the Authors table. The statement indicates that the values 'Sue' and 'Smith' are provided for the FirstName and LastName columns, respectively.
Some database tables allow NULL columns—that is, columns without values. Though the capitalization is different, NULL in SQL is similar to the idea of null in C#. All of the columns in the Books database are required, so they must be given values in an INSERT statement.
We do not specify an AuthorID in this example, because AuthorID is an identity column in the Authors table (see Fig. 21.3). For every row added to this table, SQL Server assigns a unique AuthorID value that is the next value in an autoincremented sequence (i.e., 1, 2, 3 and so on). In this case, Sue Smith would be assigned AuthorID number 5. Figure 21.20 shows the Authors table after the INSERT operation.
**Common Programming Error 21.5**
It is an error to specify a value for an identity column in an INSERT statement.
**Common Programming Error 21.6**
SQL uses the single-quote (') character to delimit strings. To specify a string containing a single quote (e.g., O’Malley) in a SQL statement, there must be two single quotes in the position where the single-quote character appears in the string (e.g., 'O''Malley'). The first of the two single-quote characters acts as an escape character for the second. Not escaping single-quote characters in a string that is part of a SQL statement is a syntax error.
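For example, the following hypothetical INSERT (this author is not part of the Books database and does not appear in Fig. 21.20) stores a last name that contains a single quote:

```sql
-- The doubled quote '' stores one single-quote character in the name O'Malley.
INSERT INTO Authors ( FirstName, LastName )
VALUES ( 'Patrick', 'O''Malley' )
```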
<table>
<thead>
<tr>
<th>AuthorID</th>
<th>FirstName</th>
<th>LastName</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Harvey</td>
<td>Deitel</td>
</tr>
<tr>
<td>2</td>
<td>Paul</td>
<td>Deitel</td>
</tr>
<tr>
<td>3</td>
<td>Greg</td>
<td>Ayer</td>
</tr>
<tr>
<td>4</td>
<td>Dan</td>
<td>Quirk</td>
</tr>
<tr>
<td>5</td>
<td>Sue</td>
<td>Smith</td>
</tr>
</tbody>
</table>
**Fig. 21.20** | Table Authors after an INSERT operation.
### 21.4.6 UPDATE Statement
An UPDATE statement modifies data in a table. The basic form of the UPDATE statement is
```
UPDATE tableName
SET columnName1 = value1, columnName2 = value2, …, columnNameN = valueN
WHERE criteria
```
where `tableName` is the table to update. The `tableName` is followed by keyword `SET` and a comma-separated list of column name/value pairs in the format `columnName = value`. The optional `WHERE` clause provides criteria that determine which rows to update. While it is not required, the `WHERE` clause is almost always used in an `UPDATE` statement, because omitting it updates all rows in the table, which is rarely the intended operation. The `UPDATE` statement
```sql
UPDATE Authors
SET LastName = 'Jones'
WHERE LastName = 'Smith' AND FirstName = 'Sue'
```
updates a row in the `Authors` table. Keyword `AND` is a logical operator that, like the C# `&&` operator, returns `true` if and only if both of its operands are true. Thus, the preceding statement assigns to `LastName` the value `Jones` for the row in which `LastName` is equal to `Smith` and `FirstName` is equal to `Sue`. [Note: If there are multiple rows with the first name “Sue” and the last name “Smith,” this statement modifies all such rows to have the last name “Jones.”] Figure 21.21 shows the `Authors` table after the `UPDATE` operation has taken place. SQL also provides other logical operators, such as `OR` and `NOT`, which behave like their C# counterparts `||` and `!`.
<table>
<thead>
<tr>
<th>AuthorID</th>
<th>FirstName</th>
<th>LastName</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Harvey</td>
<td>Deitel</td>
</tr>
<tr>
<td>2</td>
<td>Paul</td>
<td>Deitel</td>
</tr>
<tr>
<td>3</td>
<td>Greg</td>
<td>Ayer</td>
</tr>
<tr>
<td>4</td>
<td>Dan</td>
<td>Quirk</td>
</tr>
<tr>
<td>5</td>
<td>Sue</td>
<td>Jones</td>
</tr>
</tbody>
</table>
**Fig. 21.21** | Table Authors after an UPDATE operation.
### 21.4.7 DELETE Statement
A `DELETE` statement removes rows from a table. Its basic form is
```sql
DELETE FROM tableName WHERE criteria
```
where `tableName` is the table from which to delete. The optional `WHERE` clause specifies the criteria used to determine which rows to delete. As with the `UPDATE` statement, the `DELETE` applies to all rows of the table if the `WHERE` clause is omitted. The `DELETE` statement
```sql
DELETE FROM Authors
WHERE LastName = 'Jones' AND FirstName = 'Sue'
```
deletes the row for Sue Jones in the `Authors` table. `DELETE` statements can delete multiple rows if the rows all meet the criteria in the `WHERE` clause. Figure 21.22 shows the `Authors` table after the `DELETE` operation has taken place.
### SQL Wrap-Up
This concludes our SQL introduction. We demonstrated several commonly used SQL keywords, formed SQL queries that retrieved data from databases and formed other SQL statements that manipulated data in a database. Next, we introduce LINQ to SQL, which allows C# applications to interact with databases. As you will see, LINQ to SQL translates LINQ queries like the ones you wrote in Chapter 9 into SQL statements like those presented here.
Multi-User Collaborative Graphical User Interfaces
Inventors: Chia Shen, Lexington, MA (US); Frédéric D. Vernier, Romans (FR); Clifton L. Forlines, Cambridge, MA (US)
Assignee: Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 990 days.
Appl. No.: 10/613,683
Filed: Jul. 3, 2003
Prior Publication Data
Related U.S. Application Data
Continuation-in-part of application No. 10/177,004, filed on Jun. 21, 2002, now Pat. No. 6,894,703, which is a continuation-in-part of application No. 10/053,652, filed on Jan. 21, 2002, now Pat. No. 6,791,530, which is a continuation-in-part of application No. 09/651,002, filed on Aug. 29, 2000, now Pat. No. 6,545,660.
Int. Cl.
G09G 5/00 (2006.01)
G06F 17/00 (2006.01)
G06F 3/00 (2006.01)
G06F 3/048 (2006.01)
U.S. Cl. ................. 345/676; 345/418; 345/619; 345/672; 715/751; 715/788
Field of Classification Search .................. 345/619, 345/660-6, 418, 676, 173, 649, 672; 715/764, 715/750, 7, 700, 751, 788; 382/296
See application file for complete search history.
References Cited
Other Publications
Shen, Chia et al., “Around the Table”, MERL-CRL, Mitsubishi Electric Research Labs, pp. 1-4.*
* cited by examiner
Primary Examiner—Chante Harrison
Attorney, Agent, or Firm—Dirk Brinkman; Clifton D. Mueller; Gene V. Vinokur
Abstract
A multi-user collaborative graphical user interface has a display area with a horizontal orientation; the display surface is positioned between the multiple users. The display area also has a centroid and a circumference. The display area is partitioned into work areas so that there is one working area for each user of the multiple users. An item is displayed in a particular working area using a global polar coordinate system centered on the centroid.
10 Claims, 12 Drawing Sheets
<table>
<thead>
<tr>
<th>451</th>
<th>452</th>
<th>453</th>
<th>454</th>
<th>455</th>
<th>456</th>
</tr>
</thead>
<tbody>
<tr>
<td>a layer of pop-up items or top-level menus</td>
<td>a layer with selected images</td>
<td>a layer with the control or menu bar</td>
<td>a layer with all the images except one</td>
<td>a grid in a deformation mode</td>
<td>a layer for the background</td>
</tr>
</tbody>
</table>
Fig. 4B
MULTI-USER COLLABORATIVE
GRAPHICAL USER INTERFACES
CROSS-REFERENCE TO RELATED APPLICATION
This is a continuation-in-part of U.S. patent application Ser. No. 10/177,004 filed on Jun. 21, 2002 now U.S. Pat. No. 6,894,703, by Vernier et al., which is a continuation-in-part of U.S. patent application Ser. No. 10/053,652 "Circular Graphical User Interface" filed by Lesh et al. on Jan. 21, 2002 now U.S. Pat. No. 6,791,530, which is a continuation-in-part of U.S. patent application Ser. No. 09/651,002 "Multi-User Interactive Picture Presentation System," filed by Shen et al. on Aug. 29, 2000 now U.S. Pat. No. 6,545,660.
FIELD OF THE INVENTION
The present invention relates generally to graphical user interfaces, and more particularly to multi-user collaborative graphical user interfaces.
BACKGROUND OF THE INVENTION
Presentations are an important aspect of many professional and social settings. Executives make presentations to directors, managers conduct meetings with staff, salespersons make presentations to potential customers, doctors conduct meetings with nurses and patients, lawyers make presentations to juries, and families and friends present and share photographs of occasions in their lives.
Frequently, much effort goes into generating and delivering effective presentations. With specialized software, conventional personal computer systems can provide effective platforms for generating and conducting presentations. Currently available presentation program modules can turn a personal computer into a customized presentation system for generating and delivering picture presentations using display terminals or digital projectors.
Generally described, these prior art presentation systems provide a specially designed, user-friendly, set of tools to assist in the construction of a presentation that can be displayed subsequently to an audience. Those presentation systems also allow images to be presented sequentially to an audience, picture-by-picture, with color, animation, audio, and transition effects that enrich and enliven the presentation.
Conventional presentation systems do not provide an effective means for interacting with the content of the presentation during the course of the presentation. This drawback arises because these conventional presentation systems have only two modes of operation, an edit mode and a show mode. A single user often constructs the presentation, and a single user delivers the presentation to an audience. During the course of the presentation, the single user can interact with the content of the presentation only by invoking the edit mode, which primarily allows the user to rearrange the order in which the presentation is arranged.
A significant drawback arises when using these conventional presentation systems because all other participants of the presentation cannot concurrently interact with the content of the presentation.
Conventional systems are designed for use by a single presenter to a passive audience, and not for a setting where all participants of the presentation interact with the presentation on an equal footing. The prior art presentation is typically conducted in a linear setting. The presenter faces the audience, and the audience views the presentation behind the presenter. The presenter can either look at the audience or the presentation, but not at both at the same time.
Furthermore, a conventional presentation system only has a single set of controls. To allow anyone other than the presenter to control the presentation can prove disruptive and cumbersome. Also, most computer implemented presentation systems that concurrently display multiple images use the same rectangular format as a mechanical slide-sorter. Those systems require that the typical single user have a specific orientation with respect to the displayed presentation. These types of systems are not suited for situations where multiple participants are facing each other and the displayed presentation, in a highly interactive and multi-dimensional manner.
An alternative presentation system can use a circular display surface, such as a tabletop. There are many advantages of tabletop displays over traditional presentation systems, such as white boards, projection screens, desktop computers, or handheld devices, particularly for collaborative tasks where multiple users need to work both with each other and with shared computer resources.
Users can sit around a table and thus easily face each other, rather than try to crowd around a computer screen, or a small handheld device. A tabletop provides shared space and also allows users to have their own personal, if not entirely private, space to work on. Finally, whether it is an electronic display or not, a tabletop affords a convenient space where users can spread out and organize images.
The DigitalDesk is a physical desk augmented with vision and projector capabilities so that the physical and electronic desktops are merged into one. DigitalDesk is designed for a single user. The InteracTable in the i-LAND project provides a rectangular surface for multiple users. However, most of these tabletop user interfaces organize images in a rectangular manner. It is desired to provide a circular graphical user interface.
Collaborative circular graphical user interfaces present special problems, which cannot be addressed by conventional event-driven "window" architectures, such as Microsoft Windows™, where a single "desktop" interface is entirely constrained by Cartesian coordinates, and a single user. The problems with circular graphical interfaces stem from three unique characteristics of a collaborative user interface that is circular and is on a tabletop.
First, polar locations and polar orientations of displayed icons, documents, and images, generally “items,” must be handled in a special way that is different from conventional rectangular formats.
Second, the number and variety of items that can be displayed is much larger than one would normally find on the traditional “desktop.” Also, the items can be organized in multiple layers and views associated with concurrent users.
Third, events that drive the interface originate from collaborations between the multiple users. None of these issues are addressed by conventional windows-based architectures.
SUMMARY OF THE INVENTION
The invention provides visualization and layout schemes for a graphical user interface. Because the interface uses polar coordinate systems to display images, prior techniques, which typically use Cartesian coordinate systems, are inapplicable.
It is an object of the invention to give the user of the interface the full capability to relocate, re-orient, scale and layout images in the circular interface in real-time.
It is another object of the invention to allow multiple users to collaboratively display and manipulate images from multiple points of view.
A multi-user collaborative graphical user interface has a display area with a horizontal orientation; the display surface is positioned between the multiple users. The display area also has a centroid and a circumference. The display area is partitioned into work areas so that there is one working area for each user of the multiple users. An item is displayed in a particular working area using a global polar coordinate system centered on the centroid.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an oblique view of a multi-user circular graphical interface according to the invention;
FIG. 2 is a top view of a control bar of the interface of FIG. 1;
FIG. 3 is a side view of the circular graphical user interface of FIG. 1;
FIG. 4a is a block diagram of the user interface of FIGS. 1 and 3;
FIG. 4b is a block diagram of rendering layers used by the invention;
FIG. 5 is a diagram of polar coordinate systems used by the invention;
FIG. 6a is a block diagram of rendering a pile; and FIG. 6b is a block diagram of rendering a pyramid;
FIG. 7 shows a display area partitioned into three work areas according to the invention;
FIG. 8 shows a display area partitioned into three work areas according to the invention;
FIG. 9 shows an item with control points;
FIG. 10 shows a display area partitioned into five work areas according to the invention; and
FIG. 11 shows a display area partitioned into three work areas according to the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
System Structure
FIG. 1 shows multiple users 101-103 in the vicinity of a circular graphical user interface 100 operating according to the invention. The users share and interact with a picture presentation in a dynamic and collaborative manner. The system according to the invention displays images 110 on a display surface, i.e., the horizontal tabletop 130 of a circular table 125. The images can be of photographs, videos, computer generated images, icons, documents, or any other displayable source material, hereinafter generally “items.”
In the preferred embodiment, the tabletop 130 surface is touch sensitive.
The interface 100 includes an orientation area 140, and a plurality of control panels (menus) 200. In the preferred embodiment, the orientation area 140 is an annular ring at the periphery of the images. The control panels are positioned within the annular ring. There is one control panel or top-level menu for each user. Additional pop-up menus can be added as needed. Pop-up menus are generally temporary.
The control panels 200 are displayed in a region of the display surface 130 in front of the user. A camera 360, see FIG. 3, can be used to track the users 101-103 so that as the users move around the display table, their respective control panels 200 follow. Alternatively, the users can employ a pointing device to indicate where their respective control panels should appear on the tabletop.
FIG. 2 shows icons of the control panel 200 in greater detail. Each user control panel includes the following icons: inkpad 210, keyboard 220, people 230, calendar 240, work space 250, new 260, location 270, events 280, show 290, and summary 295 icons. A mouse or a touch sensitive technique can be used to activate the icons of the control panels 200. Initially, the icons are displayed as black on a white background, but when an icon is activated or selected, the icon is displayed in full color.
The people, calendar, location, and events icons can be associated with corresponding “views.” In the traditional window-based desktop, there is only one associated view. However, here, each user can construct one or more views of what can be displayed on the tabletop, and users can select any of these views as an active view, i.e., the active view is the one that is currently displayed. For example, an “event view” clusters images according to events, a “calendar view” clusters images acquired in the same time frame, and a “location view” clusters images according to geographic location. In essence, a view is a set of images having some logical relationship.
As shown in FIG. 3, images of items are composited by a processor 310 executing a software architecture 400 according to the invention. The composited images are displayed onto the display surface. The displayed images are composited in response to user input commands (events).
User input can be via a touch surface 320, keyboard, mouse 330, and the like. As an advantage, the present system can be operated concurrently by multiple users. In the preferred embodiment, the display surface is circular. For tabletop display, the images are displayed via a projector 340, and mirror 350. The projector could be vertically mounted, or back-projection can be used to eliminate the need for the mirror. The fact that a single projector is used is significant, because this requires that the output image is potentially composited from a large number of individual images.
As stated above, the camera can be used to establish the relative locations of the users 101-103 with respect to the interface 100.
Some of the Figures also show a coffee mug 1 on the top of the table. The coffee mug 1 is not part of the invention, but often coffee mugs are key items present during presentations, professional or social. As an advantage, the present invention gracefully admits integration of coffee mugs or
other physical discussion items with the presentation. In fact, using the camera 360 coupled to a vision system of the processor 310, the displayed images can be composited in such a way that physical items that are not part of the user interface do not obscure significant portions of the images.
The main purpose of the architecture 400 according to the invention is to manipulate and present photographs, slides, text, videos, the “items.” The items are manipulated by the users using the control panels and other input devices that generate “events.” The images can be associated with soundtracks so that when images are selected, the sound-track can also be played. The images can also be annotated with text.
The items can be organized in a database (DB) 370. The database can be local or remote. The items can be in the form of digital images, e.g., files with bmp, jpg, mpg, gif, pdf, or eps extensions, to name but a few. The text files form the source data from which images are formed. Images can have associated audio files, in wav format, for example. The items can also be annotated by name, date, location, etc. Items are selected from the database 370, and the selected items are composited into the displayed images as “views,” as described below. Multiple users can interact with the compositing process in a concurrent and interactive manner.
The orientation area 140 is used to orient the “content” of the presentation image 110 or active view. When the orientation area is circular, the displayed image can be rotated like a lazy Susan. The rotation is achieved by the process that composites the image with a selected orientation. The ring can be projected onto the touch sensitive surface of the tabletop.
The images of the items are generally shown with an orientation towards the control panel from where the selection took place, i.e., generally facing the user that selected the item. Should another user subsequently want to view the same image, selection can rearrange and reorient the image in the overall image accordingly, as described in further detail below.
In order to support individual user viewing preferences and group shared viewing needs, the interface provides two general user interface functions. First, the entire displayed image can be freely rotated in either direction. This operation is a very convenient way to pass around a global layout of the interface for each individual user’s viewing angle. Second, control panels can be positioned along the perimeter of the tabletop wherever a user is sitting.
Interface and Image Orientations
Traditional rectangular interfaces, such as those of windows-based architectures, typically assume that the user or users always view the interface from roughly the same direction and angle, namely from directly in front of a terminal or screen. Prior art interfaces typically use a rectangular (Cartesian) coordinate system to display images. For example, the images are almost always aligned according to the rows and columns of pixels, which can sometimes further define rectangular windows that partition the display area or screen. When pixels and images are aligned, transformations such as affine translation and scaling are straightforward.
In contrast, our invention enables face-to-face collaborations where the interface is situated between the users, and thus we must consider issues of rotation and re-orientation of the entire display interface, including the images that are displayed there. Thus, we provide an architecture for visualizing and collaboratively interacting that facilitates the convenient re-orientation of any or all images on the interface surface, the passing of images around the interface surface, and the resizing of the user interface and the images.
Architecture Overview of Circular Graphical User Interface
FIG. 4 shows architecture and method 400 for collaborative circular graphical user interfaces. The architecture includes a transformation engine 410, an asynchronous rendering engine 420, and a thread-switching engine 430 coupled to each other. The operation of the engines is in response to external events 450, such as mouse clicks, drag&drop events, free-form stroke events, touch events and keyboard events.
In response to the events 450, the transformation engine 410 generates polar coordinates for a transformation matrix 411 of graphics context and input events. The rendering engine 420 coordinates multi-layer, multiple depth rendering, functions, and the switching engine 430 coordinates multiple execution threads, multiple image layers, and multiple tabletop views. With these three engines, correct and efficient correspondence between input events and output rendering is assured.
The architecture 400 operates on layers of images. A set of layers can be collected to form a view. Multiple views can be maintained concurrently. A view is formed by compositing the set of layers associated with the view in a predetermined order. An active view is the presentation image 110 that is currently displayed on the tabletop. The types of layers can include item layers 401, view layers 402, and a background layer 403. To ensure all of the pixels of the final image have some value, the background layer has no transparent pixels. For example, the pixels in the background layers are initially all set to blue.
The number of layers can change over time. During rendering, the items 401 are composited into the view layers 402, which are then composited onto the background layer 403. Associated with each layer is an image buffer. Thus, any layers that have not changed since the last refresh can be copied directly to a display or video buffer during rendering. In a preferred embodiment, a double buffering technique is used. While a first buffer is displayed, a second buffer is filled with pixels. Then, the second buffer is displayed, and the first is filled with pixels, and so forth.
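As a rough illustration of the per-layer buffers and the double-buffering scheme described above, here is a hedged Java sketch; the class and method names are invented for illustration and the patent does not provide code:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

// Minimal sketch: each layer keeps its own cached image buffer, and a view is
// composited bottom-to-top into one of two display buffers that are swapped.
class LayerCompositor {
    private final BufferedImage[] displayBuffers;
    private int active = 0;                      // index of the buffer currently shown

    LayerCompositor(int width, int height) {
        displayBuffers = new BufferedImage[] {
            new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB),
            new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB)
        };
    }

    /** Composite the layers (deepest first) into the off-screen buffer, then swap. */
    BufferedImage composite(List<BufferedImage> layersBottomToTop) {
        BufferedImage back = displayBuffers[1 - active];
        Graphics2D g = back.createGraphics();
        for (BufferedImage layer : layersBottomToTop) {
            g.drawImage(layer, 0, 0, null);      // unchanged layers reuse their cached buffer
        }
        g.dispose();
        active = 1 - active;                     // swap: the newly filled buffer becomes visible
        return displayBuffers[active];
    }
}
```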
Layers
FIG. 4b shows one possible set of layers that can be composited into a view. For the purpose of merging and rendering, the layers can be numbered, e.g., top-to-bottom 0, 1, 2, 3, 4, etc., where one layer is always defined as the “top” layer. A compositing operation can be carried out at any layer with all of the lower layers. For example, layer 3 is a compositing of layers 0+1+2+4, and layer 4 is a compositing of layers 1+2+3+4.
For example, the layers can include the following layers in a top-to-bottom order. A layer 451 of pop-up items or top-level menus, which is always on top, if it exists. Generally, pop-up menus are temporary. A layer 452 of selected images, which is the top layer if layer 451 does not exist. A layer 453 with the control or menu bar 200, which is the top layer if none of the above layers exist. A layer 454 with all the images except the selected images. A layer 455 for a deformation grid. A deformation grid assists the users in visualizing how a view can be deformed. For example, items near the center of the view can be spaced closer and appear smaller than those near the edges of the view to give a “black-hole” type of effect. At the very bottom there is a background layer 456.
Transformation Engine
With our architecture, the users 101-103 can rotate the entire active view 110, or move individual images within the view. Individual images can be moved using affine transformations, i.e., translation, scaling, and rotation. Because the interface is primarily circular, two polar coordinate systems are maintained. A global polar coordinate system is assigned to an entire view, and a local polar coordinate system is assigned to individual images within a view. The moving of the images is responsive to the events 450.
The transformation engine 410 handles all of the necessary primitives to build a circular interface based on these two polar coordinate systems. In a traditional GUI, it is very common to use a hierarchy of components to partition the screen layout (desktop) into smaller regions. This is possible because in a rectangular interface, a rectangle can be partitioned into smaller rectangles, with each region operating only on a local coordinate system, and where there is only one common direction of orientation for each displayed visual object. For example, on a desktop interface, all images are vertically aligned and rotation is not possible. In contrast, a polar coordinate based interface has no predominant direction for displayed items. Thus, it is not possible to partition the screen, resolve the smaller problems in a local frame coordinate system, and then assemble the global layout from the local layouts, as in windows-based desktop architectures.
In the polar coordinate system, there is one and only one center that is meaningful. All the items must know where this center is at all times. Therefore, it is necessary to describe every item to be displayed with a polar location and a polar orientation at the same time.
Polar Coordinate System
As shown in FIG. 5, the architecture according to the invention uses two polar coordinate systems to determine three variables: a radial distance r 501 from a center 504 of each image 505 to the center 500 of the display surface, i.e., a “view,” an angle α 502 of rotation around the center of the view, and an angle β 503 of rotation around the center of each image. The angle α is with respect to some virtual reference line 510 of the display surface, and the angle β is an offset from angle α to a central axis 520 of each image. For comparison, the item 505 labeled “AB” has an angle β greater than an angle α, and the item 505 labeled “CD” has a very small angle β and an angle α that is close to 90°. In addition, there is a global angle φ 510, which determines how much the entire view is rotated, with respect to some arbitrary reference position.
Even when the β angles are zero, the α angles are different for these two documents, and the documents will have different orientations. This problem does not exist in a Cartesian framework. With the introduction of the third degree of freedom, the angle β 503, it is possible to rotate every item around the item’s own center.
To manage the relative position of each item, the transformation engine 410 translates a position (r, α, ϕ) of an item into the transformation matrix 411, and the local angle β 503. For example, the transformation uses β = −(α + ϕ) to rotate all the elements displayed on the tabletop to face the same direction, towards a user’s location at the table defined by the angle ϕ 510, which is the global angle used to rotate the entire view. It is also possible to use intermediary values between β = −(α + ϕ) and β = 0 to re-orient documents in a continuum.
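As a hedged illustration of how the two polar coordinate systems might map to a drawing transform, the following Java sketch (not from the patent; the class name, parameter names, and the use of AffineTransform are assumptions) computes an item's placement from (r, α, β) and the global view angle ϕ:

```java
import java.awt.geom.AffineTransform;

// Sketch only: places an item given its polar position (r, alpha) around the view
// centroid, the global view rotation phi, and the local rotation beta about the
// item's own centre. All angles are in radians.
final class PolarPlacement {
    static AffineTransform itemTransform(double cx, double cy,   // view centroid (pixels)
                                         double r, double alpha, // item's polar position
                                         double beta,            // rotation about the item's centre
                                         double phi) {           // global rotation of the view
        double theta = alpha + phi;                      // absolute angular position
        double x = cx + r * Math.cos(theta);             // item centre in Cartesian coordinates
        double y = cy + r * Math.sin(theta);
        AffineTransform t = AffineTransform.getTranslateInstance(x, y);
        t.rotate(theta + beta);                          // orient the item about its own centre
        return t;   // with beta = -(alpha + phi) every item keeps the same fixed facing
    }
}
```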
Multi-Layer Multiple-Depth Asynchronous Repaint Engine
The circular graphical interfaces, as described herein, allow users to “pile,” “shuffle,” “pass,” and “spread” items on the tabletop (view), see FIG. 1. Scaling (zooming) to various resolutions is also permitted. Therefore, it is necessary to display and refresh a potentially very large number of items in a particular view, perhaps as many as a thousand or more. This is a couple of orders of magnitude larger than the number of windows one would have “open” in a conventional desktop display.
Because each individual item itself can have a large number of pixels, the total number of pixels to be processed for a single refreshed composition of an active view can be extremely large.
For this reason, multi-layers are used by the rendering engine 420. Whenever the “content” of the active view changes in position, orientation, or size, the rendering engine 420 determines which layers need to be rendered, and the order of the rendering of the layers. This determination is based on the events 450, e.g., rotate the entire view, resize and reorient selected items, move an item from one layer to another, construct a composite image for a particular layer, update item attributes, and so forth.
Each item is potentially part of a displayable image with attributes describing its properties such as size, current location, and angle of orientation, and a pointer to the actual file in the database 370 that forms the source data for the item. It is also possible to associate application specific attributes with an item. For example, for digital images, users can add attributes for shadow generation, paint quality/resolution parameterization, and information about whether the item can be rotated, or not.
In the multi-layer representation, the item layer 452 usually includes one or more selected items. Selected items are being actively controlled by the users and can be the subject of events 450, e.g., the item is being rotated, passed to another user, etc. To reflect a change in the display of a selected item, it is sufficient to composite a new version of the item with the view layer 454 that is to contain the item, and then to composite that layer with the background layer 456. Activating a different view merely causes that view to be composited with the background layer; the individual items of that view do not need to be composited until they are selected.
In other words, compositing proceeds in a bottom-to-top order. The background (deepest) layer 456 is relatively static, e.g., an initial blue color that is then overwritten with a deformation grid, map, or a tablecloth texture. In the case where multiple views are used, a different background can be used to distinguish each view. This layer is composited first. However, changing the background requires a recompositing of all layers on top of the background. Layering reduces the number of times layers or views need to be recomposed. The top layer is always the last layer to be composited on top of previous layers.
As shown respectively in FIGS. 6a and 6b, two rendering strategies can be used. In the first strategy, images 601-604 are generated 611-614 (left arrows) for all items in each layer. The generation is from source data of each item according to parameters such as size and orientation. The layers can then be composited (up arrows) in a bottom-to-top order to render a view. In the second strategy, each layer 621-624 includes itself as well as all layers below it. These two strategies are called “pile” and “pyramid” respectively. To be useful, the pyramid has a smaller number of items in the top layers that change more frequently than items in deeper layers. These two strategies can be used in conjunction, e.g., a pile layer can be composited with a pyramid layer. The pyramid layers 621-624 can be generated from the pile layers 601-604 to factorize the generation process.
Thread Switching
The rendering engine 420 according to the invention executes multiple threads concurrently and asynchronously. The asynchronous rendering is accomplished by maintaining an independent rendering thread for each layer.
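A minimal sketch of one way to realize an independent rendering thread per layer, assuming standard Java executors; the patent does not specify an implementation, so the class and method names here are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one single-threaded executor per layer, so each layer's rendering
// requests run asynchronously with respect to the caller and in submission order.
final class LayerThreads {
    private final ExecutorService[] layerExecutors;

    LayerThreads(int layerCount) {
        layerExecutors = new ExecutorService[layerCount];
        for (int i = 0; i < layerCount; i++) {
            layerExecutors[i] = Executors.newSingleThreadExecutor();
        }
    }

    /** Queue a repaint of one layer without blocking the caller. */
    void repaintLayer(int layer, Runnable renderTask) {
        layerExecutors[layer].submit(renderTask);
    }
}
```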
The multi-layer representation enables selective rendering of parts of the view displayed on the tabletop. Most of the time, only a single image of a selected item needs to be regenerated. For example, one user is "passing" a photograph to another user, or zooming in on the photograph. However, if a user rotates the entire view on the tabletop, then all layers may need to be composited into a single new rotated view. In other cases, some parts of the view on the tabletop remain stationary, for example, a user's control panel, while the rest of the image rotates.
The threads are executed so that the number of images that need to be generated from source data is minimized. Also, latency is minimized. For example, if a user moves an item, and then rotates the entire view, the entire view is updated, and a rendering pass for the single item is discarded.
The architecture also includes threads for timers. Timers can be used to animate items. Other threads are used to acquire source data from the database.
Multiple Views and Multiple Control Bars
As stated above, the architecture supports multiple views. One simple use of this feature is to provide multiple virtual tables but all using the same space in the user environment. More importantly, it allows different users to view the same items in different ways to get a different perspective. For example, users can discuss photographs based on who is in them, when they were taken, or where they were taken, each representing a different view.
When switching to a different view in a multiple view system, the list of the items in the multi-layer representation can be different in different views. The set of items, the size of the items, the background, and how other information is displayed can vary for different views.
The architecture provides control panels that allow users to switch between views and control the application in various ways. The architecture supports multiple curved control panels, one per user, that can be dragged freely along the border of the circular view displayed on the tabletop.
In the traditional Windows GUI, it is common to have pop-up menus for selected items. These are always displayed with the same vertical orientation. This is not the case with the circumferential interface, where pop-up menus are aligned with the users facing the periphery of the interface. In this case, rotating the display may confuse the users. Therefore, the system can leave a "shadow" of the pop-up menu at its original location associated with the selected item.
Multiple Work Areas
As shown in FIG. 1, up to now, the display area 100 is configured as a single contiguous work area. However, as shown in FIGS. 7-8 and 10-11, it is also possible to partition the display area 100 into multiple (N) work areas 701 and 801, for a circular table 700 or a rectangular table top, where there is one work area for each user, e.g., three or four. A single work area can also be associated with multiple cooperating users. The partitioning is effected, for example, by radii extending from the centroids 710, 810 to the perimeter of the display areas. FIGS. 10 and 11 show how the display area is partitioned into work areas with other shapes. Note that in the work areas the menus face the users.
Note: the size of the work areas can vary for each user, as shown in FIG. 7. In this embodiment, the items, e.g., images, documents, videos, icons, are oriented within each work area so that they face the corresponding user, generally using the polar transformation based techniques described above.
In this case, each work area can include a user specific control panel or menus 702 and 802 also oriented towards the user within the work area. In this embodiment, menu actions and item manipulation are coordinated according to the work areas. Users can relinquish control of an item by passing it to another work area. As an item leaves one area and enters another, the polar orientation is adjusted to correspond to the new work area.
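As a hedged sketch of how a touch point could be mapped to a work area when the display is partitioned by radii from the centroid, assuming equal angular sectors (a simplification, since the patent allows work areas of varying size):

```java
// Sketch only: assigns a point to one of N equal work areas partitioned by radii
// from the display centroid. Angles are measured counter-clockwise from the x-axis.
final class WorkAreas {
    static int workAreaOf(double x, double y, double cx, double cy, int numUsers) {
        double angle = Math.atan2(y - cy, x - cx);           // range (-pi, pi]
        if (angle < 0) angle += 2 * Math.PI;                 // normalise to [0, 2*pi)
        double sector = 2 * Math.PI / numUsers;              // equal angular sectors
        return (int) (angle / sector);                       // index of the user's work area
    }
}
```

An item dragged across a sector boundary would then be re-oriented for the work area whose index this function returns.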
In the general case, items are oriented according to multiple parameters, e.g., the number of users and the locations of the users around the display area. The variable control point 902 of a particular item 900 is determined dynamically when the user "touches" or points at a displayed document for the first time. In this way, we never have to handle a double constraint, i.e., the position of the control point relative to the centroid of the item and the position of the centroid of the document in a complex multi-user interface. The invention also provides a mechanism where the user can "grab" an item at the control point.
There is also a resize mechanism, which can have multiple levels of sensitivity. A resize area 903 is located at a corner of the item; a resizing scale factor is based on a distance 904 from the corner associated with the resize area to the variable control point 902. Another corner 905 can be used to rotate the item about the centroid 901.
It should be noted that each work area can have a set of appearance and operation attributes. The appearance attributes control how the work area appears, e.g., size, orientation, background color, size and color of documents inside the area, font, etc. The operation attributes indicate access privileges, read/write/modify parameters, how items are manipulated, and the like.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A multi-user collaborative graphical user interface for displaying items, comprising:
- a display area having a horizontal orientation, the display surface positioned between the multiple users, the display area having a centroid and a circumference;
- a plurality of work areas partitioned from the display area, there being one working area for each user of the multiple users; and
- means for orienting a displayed item in a particular working area using a global polar coordinate system centered on the centroid.
2. The interface of claim 1, further comprising:
- means for selecting a control point for a particular displayed item; and
- means for orienting the displayed item in the particular working area using a local polar coordinate system centered on the control point.
3. The interface of claim 1, further comprising:
- means for displaying a control panel in each work area, there being one control panel for each of the plurality of users; and
- means for orienting the control panel in the particular working area using the global polar coordinate system centered on the centroid.
4. The interface of claim 1, in which the display area is circular.
5. The interface of claim 1, in which the control panel only controls items displayed in the corresponding work area.
6. The interface of claim 1, in which the item includes a title, a hyperlink, and an image.
7. The interface of claim 1, in which a corner of the item includes a resize area, and wherein a resizing scale factor is based on a distance from the resize area to the control point.
8. The interface of claim 1, in which the display area is partitioned by radii extending from a centroid of the work area to a perimeter of the work area.
9. The interface of claim 1, in which each work area has associated appearance attributes.
10. The interface of claim 1, in which each work area has associated operation attributes.
* * * * *
The Impact of Rate of Change on Object Oriented Paradigm
Mohammed GH. I. AL Zamil and Eman AL Momani
Department of Computer Information Systems,
Yarmouk University,
Irbed, Jordan
[email protected], [email protected]
Abstract
Object oriented technology accommodates a set of relationships that affect the quality of coding and, therefore, of the output programs. In this paper, we present a hybrid technique that relies on the dependency-graph paradigm to detect the impact of change over time on object oriented relationships such as inheritance, class size, coupling, and cohesion. Our goal is to provide a systematic framework to measure how the rate of change affects such features in a concurrent environment. Answering such a question requires both code analysis and model checking of concurrent programs. The contribution of this research is to study the static and dynamic aspects that contribute to enhancing the quality of coded programs, which implies increasing maintainability.
In order to satisfy our research goal, we performed experiments on a collection of concurrent Java programs, analyzed them, and then drew conclusions about the behavior of these programs over time. The results indicated that there is a significant positive correlation between some of these features and the rate of change.
Keywords: Rate of Change; Object Oriented Paradigm; Model Checking
1. Introduction
Program understanding and analysis depend on a program's internal dependencies: the connectivity among all constituents of the program [1]. Several software engineering applications, such as reverse engineering and software maintenance, take into account the structure of the software program. The object oriented paradigm defines relationships that relate internal entities such as classes and attributes. As time moves on, code might be changed to cope with user needs and the natural evolution of a software product, which could affect the quality of the resulting program.
Studying internal dependencies has attracted researchers for the purpose of enhancing the coding task. Recent techniques have focused on modeling programs in order to provide information about the relationships among their components [2]. However, one of the most interesting visual descriptions is the dependency graph, which depicts the connectivity among different program parts. Previous research implements dependency graphs in order to extract and quantify certain characteristics of programs under analysis. For instance, they can provide useful information in tasks such as debugging, testing, fault detection, and embedded security issues.
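To make the idea concrete, here is a minimal, hypothetical sketch (in Java, since the experiments below analyze Java programs) of a dependency graph stored as an adjacency structure; the class and method names are illustrative and not taken from the paper:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: program entities (classes, methods, attributes) are nodes,
// and dependencies (calls, field accesses, inheritance) are directed edges.
final class DependencyGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    Set<String> dependenciesOf(String node) {
        return edges.getOrDefault(node, Set.of());
    }
}
```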
Such information is hard to detect in a concurrent environment. Many object oriented programming languages support the multithreaded programming paradigm, in which many threads run at the same time. Such a feature might lead the execution to unpredictable behavior. In fact, concurrent programs are much more complicated than sequential ones. Therefore, there is a need for intelligent techniques to check program features using some form of static analysis. However, static analysis alone does not provide reliable conclusions, as dynamic behavior is crucial in such situations. Thus, a hybrid framework of both code and model analysis would be of great interest to achieve our goal, as model checking mimics the dynamic behavior of concurrent systems.
In this paper, we propose a framework to analyze and understand the behavior of Java programs for the purpose of enhancing their quality. The goal is to measure the effect of change over time on different object oriented relationships. We model Java programs using the Code Metrics tool, which provides the static aspects of our analysis. Furthermore, we extract the dependency graph from the resulting models to analyze the static aspects of such programs. Moreover, to ensure that concurrent Java programs preserve their temporal properties, model checking techniques have been applied using the UPPAAL tool.
In order to evaluate the proposed framework, we performed experiments on a collection of Java classes to elicit information about the relationships among inheritance, size, coupling, cohesion, and rate of change. Then, we report the results, including a statistical analysis to support our conclusions.
The rest of the paper is organized as follows: Section 2 discusses the work related to our method. Section 3 offers a background on the techniques used in this research and clearly defines the research questions. Section 4 explains the framework of our proposed method. Section 5 illustrates our experiments on the benchmark collection. Section 6 comments on the results and the limitations of this work. Finally, Section 7 summarizes the contributions and the conclusions of this research.
2. Related Work
Program analysis has been studied by many researchers for the purpose of enhancing coding quality. Recent research has focused on advances in model checking techniques [3, 4, 5] and program slicing [6, 7] in order to verify system properties. Separating code and model analyses does not provide enough information to handle the relationships among some emergent quality attributes. In fact, studying such attributes requires combining knowledge from both sources, as behavioral aspects emerge to show how a system acts and reacts to a particular event or input. Moreover, static analysis of code gives an indicator of coding quality and organization of code, as well as of dependencies among different entities within programs.
Further, object-oriented relationships, such as cohesion, affect program testability; in other words, they affect the process of generating test cases automatically, as concluded in [8]. That study highlights the importance of studying object-oriented relationships in order to discover their impact on specific development tasks such as program correctness. It also provides a detailed review of metrics that contribute to quantifying and analyzing the impact of these attributes on the complexity of object-oriented programs.
In the literature, there are several program analysis tools that provide a variety of valuable information for understanding code. The Java System Dependency Graph (JSysDG) was proposed by Neil Walkinshaw [9] to support query-like manipulation of the dependency graph. DA4Java [10] is another static analysis tool for Java code; it relies on a combination of top-down and bottom-up approaches to extract detailed information about the programs under analysis. Such tools contribute to the development of better code, as they give developers the opportunity to discover design errors and improve the overall quality of programs.
Such tools are still insufficient to act as a mature framework that guarantees reliable software. Changes suggested on the basis of static analysis do not take into account the dynamic and temporal properties of concurrent programs, yet those properties are crucial for multithreaded and concurrent activities. While static analysis is valuable on its own, in this research we give equal weight to model analysis. For this reason, we believe that a hybrid framework combining both techniques is of great interest for producing high-quality code.
Program structure is a well-known static property. A main goal of this research is to investigate how such a static property can be improved while preserving its dynamic consequences. IBM's Structural Analysis for Java (SA4J) [11] is an example of a structural analysis tool; its distinguishing feature is the detection of anti-patterns in Java programs. However, like other static analysis tools, SA4J provides no guarantee that changes made according to its analysis will preserve the dynamic properties of the code.
Recent software engineering research has focused on formal methods that verify dynamic software properties using model checking [12]. The idea behind model checking is that the application can be normalized so that every possible state generated during the life of a program or design can be checked: a formal model describing the runtime behavior of the system is constructed as a finite-state automaton, and the existence of desired properties is then checked in this model. Although formal methods do not guarantee the performance of programs [13], we apply them here to verify correctness only.
3. Background and Problem Definition
In this section, we provide background information, including brief mathematical definitions of dependency graphs and model checking. We end the section with a clear set of research questions that are answered by the proposed methodology and experiments.
3.1. Dependency Graph
A dependency graph is a set of entities depicted as nodes and connected by edges, where an edge represents a dependency between two entities. In this context, we define the dependency graph as follows:
Given a set of nodes \( N = \{n_1, n_2, \ldots, n_k\} \) and a set of edges \( E = \{e_1, e_2, \ldots, e_m\} \), a dependency graph \( G = (N, E) \) is a graph in which the following conditions must hold for every edge \( e \in E \):
1. \( e = \rho(n_i, n_j) \)
2. \( i \neq j \)
Where \( \rho \) is a mapping that associates the edge with the pair of nodes \( n_i \) and \( n_j \) and represents a relationship between them. Such a relationship captures a dependency in which any change to entity \( n_i \) might affect, for instance, the structure or the behavior of entity \( n_j \). The second condition asserts that self-loops on a single entity are not allowed in dependency graphs. Figure 1 shows an example of a dependency graph.

3.2. Model Checking
Given a system modeled in some specification language as M, with starting state s, the model checking problem is defined as the automatic, exhaustive check of whether this model satisfies a given specification or property p.
\[ M, s \models p \] \hspace{1cm} (1)
In order to find an objective solution to this problem, the model and its specifications are formulated in a formal (mathematically based) language. The concept of model checking is general and can handle several kinds of structures. In this context, we model the programs under analysis using this technique to verify that the coded program still preserves its designated properties after reengineering changes.
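As a toy illustration of the "check every possible state" idea, and not of the algorithm implemented in UPPAAL (which additionally handles clocks and constraint solving), the following sketch explores all states reachable from an initial state of a small hand-written transition system and verifies that a hypothetical invariant holds in each of them.

```java
import java.util.*;
import java.util.function.Predicate;

/** Toy explicit-state checker: explores all states reachable from an initial state
 *  and verifies that a given property holds in each of them. */
public class ToyModelChecker {

    /** Returns true if 'property' holds in every state reachable from 'initial'. */
    static <S> boolean checkInvariant(S initial,
                                      Map<S, List<S>> transitions,
                                      Predicate<S> property) {
        Set<S> visited = new HashSet<>();
        Deque<S> frontier = new ArrayDeque<>();
        frontier.add(initial);
        while (!frontier.isEmpty()) {
            S state = frontier.poll();
            if (!visited.add(state)) continue;          // already explored
            if (!property.test(state)) return false;    // counterexample state found
            frontier.addAll(transitions.getOrDefault(state, List.of()));
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical 3-state system: 0 -> 1 -> 2 -> 0
        Map<Integer, List<Integer>> t = Map.of(0, List.of(1), 1, List.of(2), 2, List.of(0));
        // Invariant p: the state value never exceeds 2
        System.out.println(checkInvariant(0, t, s -> s <= 2)); // prints true
    }
}
```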
3.3. Research Questions
The following are the research questions we address in this research. Is the information provided by the dependency graph enough to handle the ripple effect of changing some entities in a given system? What if the system has transitive dependencies, in which a change to one entity might affect the correctness of the overall system state? What about the specifications: how can we ensure that the target system still preserves its designated specification after the changes are implemented?
These questions are crucial and, in many systems, critical. In fact, there is no way to answer them without checking every possible state. Model checking provides the means of answering them by generating every possible state that a given system might enter during runtime.
In this research, we evaluate a framework that combines static analysis using dependency graphs with model analysis by model checking target systems written in Java.
4. Framework
In this section, we describe the phases of our proposed framework for detecting the impact of change on program attributes and predefined specifications. First, we discuss the code analysis phase; then we briefly describe the model checking phase.
4.1. Code Analysis
Dependency information is typically collected from the source code during the static analysis phase. Locating dependencies requires analyzing fields and methods through their reference locations. When object-oriented programs are built, compilers create class files that hold most of the information necessary for the analysis. The following subsections illustrate this process.
4.1.1. Extracting Raw Dependency Data:
Metadata within class files provides references to parent classes. Such information comprises the architectural dependencies among the constituent classes; these general dependencies cover the external classes, methods, and fields that may contribute to a class's behavior. Thus, the static analysis tool is able to extract different kinds of static relationships, including inheritance and associations.
4.1.2. Reference Dependency Locations:
A major phase in static program analysis is scanning the source code to locate references to different entities. A tool can then make semantic connections among the nodes of the dependency graph, i.e., connect constituents according to a specific description of a relationship. In this way we extract useful information from the code, such as inheritance, composition, fields, number of classes, number of interfaces, depth of inheritance, and number of subclasses in an inheritance relation.
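One lightweight way to obtain such raw dependency data from compiled classes is Java reflection. The sketch below is our own illustration, independent of the tools cited in Section 2; it collects the superclass, implemented interfaces, and declared field types of a loaded class as candidate dependency edges.

```java
import java.lang.reflect.Field;
import java.util.*;

/** Collects coarse-grained dependencies of a class via reflection:
 *  its superclass, implemented interfaces, and the declared field types. */
public class RawDependencyExtractor {

    static Set<String> dependenciesOf(Class<?> cls) {
        Set<String> deps = new TreeSet<>();
        if (cls.getSuperclass() != null) {
            deps.add(cls.getSuperclass().getName());   // inheritance dependency
        }
        for (Class<?> itf : cls.getInterfaces()) {
            deps.add(itf.getName());                   // interface realization
        }
        for (Field f : cls.getDeclaredFields()) {
            deps.add(f.getType().getName());           // association via field type
        }
        return deps;
    }

    public static void main(String[] args) {
        System.out.println(dependenciesOf(java.util.ArrayList.class));
    }
}
```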
4.2. Model Analysis
Code analysis collects the information needed to compare relationships with each other. However, for some types of critical systems, especially concurrent programs, dynamic analysis must also take place in order to measure change, which is the comparison axis in this research. The effect of changes on coded programs has dynamic aspects that must be measured or quantified during execution. Changes may take effect at compilation time or at run time, as system states change over time (e.g., the temporal properties of concurrent execution).
In this research, we model and check the coded programs so that our conclusions capture the dynamic aspects of changes at execution time as well as the static ones. The goal is to verify that changes do not affect the correctness of the overall system specifications. Figure 2 depicts how models are checked using this technique.

5. Experiments and Results
In order to evaluate the effect of rate of change on inheritance, coupling, cohesion, and class size, we conducted a set of experiments that take into account both the static and the behavioral aspects of Java programs. We first describe the experimental collection of programs, then the experimental setup including evaluation metrics, and finally list and comment on the resulting data.
5.1. Experimental Collection
For the purpose of testing the proposed framework, we collected a large number of open-source Java programs from different sources. To pursue our research goal of studying evolution, we kept only programs available in five different releases or versions, so that the collection captures programs that change across releases. We then filtered the sample to include only concurrent Java programs. The resulting collection consists of 380 different concurrent programs; together these programs hold 6,400 classes and consist of more than one million lines of code. For the purposes of this research, we ran our algorithms on each version independently in order to measure how dependencies affect the rate and correctness of changes. Table 1 shows a numerical description of each version.
Table 1. Description of the Experimental Collection per Version
<table>
<thead>
<tr>
<th>Version</th>
<th>NO. Programs</th>
<th>NO. Classes</th>
<th>LOC (Rounded)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ver. 1</td>
<td>380</td>
<td>6800</td>
<td>1000K</td>
</tr>
<tr>
<td>Ver. 2</td>
<td>380</td>
<td>7255</td>
<td>1200K</td>
</tr>
<tr>
<td>Ver. 3</td>
<td>380</td>
<td>8010</td>
<td>1250K</td>
</tr>
<tr>
<td>Ver. 4</td>
<td>380</td>
<td>8642</td>
<td>1420K</td>
</tr>
<tr>
<td>Ver. 5</td>
<td>380</td>
<td>9231</td>
<td>1601K</td>
</tr>
</tbody>
</table>
Figure 3 shows the distribution of programs with respect to their size, highlighting that the selected collection includes programs of different sizes.
5.2. Experimental Setup
In this section, we describe the metrics used to construct the dependency graphs and report the results, the way we compute the rate of change of the collected programs, and the model checking tool used to verify the programs' specifications.
5.2.1. Metrics:
In these experiments, we used DIT (Depth of Inheritance), NOC (Number of Children), and LOC (Lines of Code). These metrics measure the complexity of a class's inheritance relation and quantify class size [14]. Although there are other size metrics, such as function points, we chose LOC because it reflects both functional and non-functional blocks of code. Moreover, LOC is widely used in software engineering tools and is easy to understand and compute compared to other measurements, so our experiments can be re-implemented with different tools in which LOC is the only size metric.
In this paper, we used the Code Metrics tool [15] to compute DIT and NOC. Figure 4 shows an example of a Code Metrics snapshot. For DIT, the tool relies on the definition in [16], which takes DIT as the maximum length of the path from a node to the root of the inheritance tree:
\[ DIT = \max(\text{path length from the node to the root}) \]
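As a minimal illustration of the DIT definition above (the Code Metrics tool computes it from its own program model, so the reflection-based version here is only an approximation for loaded classes), the following sketch walks the superclass chain up to the root and counts the steps.

```java
/** Illustrative DIT computation: length of the superclass chain from a class to java.lang.Object. */
public class DitExample {

    static int depthOfInheritance(Class<?> cls) {
        int depth = 0;
        for (Class<?> c = cls; c.getSuperclass() != null; c = c.getSuperclass()) {
            depth++;   // one step up the inheritance tree
        }
        return depth;
    }

    public static void main(String[] args) {
        // java.util.ArrayList -> AbstractList -> AbstractCollection -> Object, so DIT = 3
        System.out.println(depthOfInheritance(java.util.ArrayList.class));
    }
}
```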
In addition, we used DOC (Degree of Coupling) and DCH (Degree of Cohesion). These two metrics provide our study with the following advantages:
1. Measuring the functional strength of the classes of concurrent Java programs.
2. Measuring the dependency of the classes to construct the dependency graph.
We used the models in [17] to compute DOC and DCH as follows:
\[ DOC = \frac{MRC}{MPC} \] \hspace{1cm} (2)
Where MRC (Message Received Coupling) measures the complexity of the messages received by a specific class, and MPC (Message Passed Coupling) counts the messages passed among objects of the class.
\[ DCH = \frac{NAU}{TNA} \] \hspace{1cm} (3)
Where NAU is the number of attributes used in the class and TNA is the total number of attributes. Both DOC and DCH can be computed with the Code Metrics tool by constructing dependency graphs over classes and attributes.
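Once MRC, MPC, NAU, and TNA have been extracted, the two ratios reduce to simple arithmetic. The sketch below illustrates Equations (2) and (3) with hypothetical counts; all names in the code are ours.

```java
/** Illustrative computation of the coupling and cohesion ratios from pre-extracted counts. */
public class CouplingCohesionExample {

    /** DOC = MRC / MPC (Equation 2). */
    static double degreeOfCoupling(int messagesReceived, int messagesPassed) {
        return (double) messagesReceived / messagesPassed;
    }

    /** DCH = NAU / TNA (Equation 3). */
    static double degreeOfCohesion(int attributesUsed, int totalAttributes) {
        return (double) attributesUsed / totalAttributes;
    }

    public static void main(String[] args) {
        // Hypothetical counts for a single class
        System.out.printf("DOC = %.2f%n", degreeOfCoupling(24, 30));
        System.out.printf("DCH = %.2f%n", degreeOfCohesion(7, 10));
    }
}
```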
5.2.2. Rate of Change:
Since the experimental collection is organized into versions, each version is considered a change: the rate of change of a class is computed from the number of versions in which it appears. Although time is a factor when analyzing rate of change, we ignore it in this research because the code analysis is a static activity.
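For illustration, one simple way to compute such a rate, assuming it is taken as the ratio of class counts between consecutive versions (which is approximately consistent with Tables 1 and 5), is sketched below. This is an illustrative assumption, not the exact procedure used in the experiments.

```java
import java.util.List;

/** Illustrative rate-of-change computation: ratio of class counts between consecutive versions.
 *  This is one plausible reading of the metric, assumed here for illustration only. */
public class RateOfChangeExample {

    static double[] rateOfChange(List<Integer> classCountsPerVersion) {
        double[] rates = new double[classCountsPerVersion.size()];
        rates[0] = 0.0; // the first version has no predecessor to compare with
        for (int i = 1; i < classCountsPerVersion.size(); i++) {
            rates[i] = (double) classCountsPerVersion.get(i) / classCountsPerVersion.get(i - 1);
        }
        return rates;
    }

    public static void main(String[] args) {
        // Class counts per version, taken from Table 1
        double[] rates = rateOfChange(List.of(6800, 7255, 8010, 8642, 9231));
        for (double r : rates) System.out.printf("%.2f%n", r);
    }
}
```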
5.2.3. Model Checking:
For the purpose of checking whether the system conforms to its specifications in a concurrent environment, we used the UPPAAL model checker [18]. UPPAAL is a modeling and verification tool based on constraint solving and on-the-fly techniques, developed to verify non-deterministic processes with finite control structure. Its design goal is to check invariant and reachability properties by exploring the state space of a system; we chose this tool because our specifications mainly concern reachability. Figure 5 shows a sample UPPAAL model generated from our collection.
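UPPAAL verification can also be scripted. The sketch below illustrates one way to invoke UPPAAL's command-line verifier, verifyta, from Java; it assumes that verifyta is installed and on the PATH, and that the model and reachability queries have already been exported to files named model.xml and queries.q (both file names are placeholders of our choosing).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

/** Sketch: run UPPAAL's command-line verifier on an exported model and query file
 *  and print its output. Assumes 'verifyta' is installed and on the PATH. */
public class RunUppaalVerifier {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("verifyta", "model.xml", "queries.q");
        pb.redirectErrorStream(true);                       // merge stderr into stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);                   // e.g., which properties are satisfied
            }
        }
        System.out.println("verifyta exit code: " + p.waitFor());
    }
}
```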
5.3. Results
As described above, our collection of programs is divided into five sets, each of which represents a change relative to the previous one. Note that the newer versions genuinely differ from the previous ones, as they have been modified to reflect changes. To present our experimental results, we extracted metrics on coupling, cohesion, code-level inheritance measures, rate of change, and model-checking reachability properties.
Table 2 shows the values of message passed coupling and message received coupling. These values were calculated by examining the message calling graph generated automatically with the DMS Software Reengineering Toolkit [19]. The resulting degree of coupling was rounded for each version of the classes.
Table 3 displays the measurements needed to compute the degree of cohesion. Here, the metric takes into account the number of attributes used by the methods within the class while ignoring the others.
Table 4 shows the information extracted directly from the code using the Code Metrics tool: the number of lines of code (repeated from Table 1), the average depth of inheritance, and the average number of children (reported per class, but averaged here to keep the table compact).
Table 4. Code Metrics
<table>
<thead>
<tr>
<th>Version</th>
<th>LOC</th>
<th>Average Depth of Inheritance</th>
<th>Average NO. Children</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ver. 1</td>
<td>1000K</td>
<td>≈2.30</td>
<td>≈4.2</td>
</tr>
<tr>
<td>Ver. 2</td>
<td>1200K</td>
<td>≈2.35</td>
<td>≈5</td>
</tr>
<tr>
<td>Ver. 3</td>
<td>1250K</td>
<td>≈3.00</td>
<td>≈6.4</td>
</tr>
<tr>
<td>Ver. 4</td>
<td>1420K</td>
<td>≈3.55</td>
<td>≈6.9</td>
</tr>
<tr>
<td>Ver. 5</td>
<td>1601K</td>
<td>≈3.80</td>
<td>≈7.2</td>
</tr>
</tbody>
</table>
Table 5 reports the rate of change for newer versions of the same set of programs, expressed relative to the previous version.
Table 5. Rate of Change over Versions
<table>
<thead>
<tr>
<th>Versions</th>
<th>Rate of Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ver. 1</td>
<td>0</td>
</tr>
<tr>
<td>Ver. 2</td>
<td>≈1.06</td>
</tr>
<tr>
<td>Ver. 3</td>
<td>≈1.11</td>
</tr>
<tr>
<td>Ver. 4</td>
<td>≈1.08</td>
</tr>
<tr>
<td>Ver. 5</td>
<td>≈1.07</td>
</tr>
</tbody>
</table>
Finally, we performed model checking on every version independently and report, according to each program's specifications, the number of unreachable states, the number of faulty states, and the ratio of satisfied specifications.
Table 6. Reachability Analysis (Model Checking)
<table>
<thead>
<tr>
<th>Versions</th>
<th>No. Unreachable States</th>
<th>No. Faulty States</th>
<th>Ratio of Satisfied Specification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ver. 1</td>
<td>12</td>
<td>7</td>
<td>≈0.87</td>
</tr>
<tr>
<td>Ver. 2</td>
<td>15</td>
<td>11</td>
<td>≈0.82</td>
</tr>
<tr>
<td>Ver. 3</td>
<td>33</td>
<td>29</td>
<td>≈0.73</td>
</tr>
<tr>
<td>Ver. 4</td>
<td>56</td>
<td>34</td>
<td>≈0.73</td>
</tr>
<tr>
<td>Ver. 5</td>
<td>71</td>
<td>43</td>
<td>≈0.71</td>
</tr>
</tbody>
</table>
6. Discussion
In this section, we provide a detailed analysis of the empirical results. The following subsections describe the impact of rate of change on a set of program attributes. Note that we applied Student's t-test to verify the significance of our findings; the test was performed on the exact values, not the averages. However, because the resulting database of numbers is very large, we summarize the results using averages.
Before proceeding with the detailed discussion, we note that our collection of concurrent Java programs shows a statistically significant positive correlation between LOC and rate of change. In other words, program size increases over time as enhancements and maintenance operations are carried out. Figure 6 depicts this conclusion.

6.1. Inheritance vs. Rate of Change
Inheritance represents the generalization-specialization relationship in object-oriented programming languages. To measure the effect of changes on inheritance, we measured the depth of inheritance (DIT) and the change in the number of subclasses (NOC). To simplify the presentation, Figure 7 depicts this correlation using the average of both metrics.

We found a significant positive correlation between the depth of inheritance and rate of change; the correlation between the number of subclasses (children) and rate of change was also positive. According to the documentation, every version reflects a set of enhancements over the previous one, which explains this finding. We cannot, however, generalize this conclusion to all object-oriented programs, since in many cases changes may instead collapse the inheritance hierarchy. We also measured the relationship between DIT and NOC and found a positive correlation between them.
6.2. Coupling vs. Rate of Change
Coupling can be defined as the interrelationship among classes within a program; it shows how dependent classes are on each other. In this analysis, we found that the correlation between coupling (DOC) and rate of change is negative, and sometimes flat (see Figure 9). We explain this by examining the values of MPC and MRC in Figure 8: in this collection, MPC grew faster than MRC, especially in versions 4 and 5. The underlying reason is that the enhancements made in the newer versions decreased the coupling among classes.

**Figure 8. The Rate of Passed and Received Messages**

**Figure 9. DOC Value per Version**
6.3. Cohesion vs. Rate of Change
Cohesion metrics measure the homogeneity of class contents. Our empirical results indicate no clear evidence that the rate of change affects the cohesion metrics. Figure 10 shows that the total number of attributes and the number of used attributes both increase from one version to the next; however, small fluctuations in this relation cause the cohesion metric to fluctuate noticeably at different points (see Figure 11).
6.4. Impact of Rate of Change on Program Correctness
The previous subsections have shown the correlations between rate of change and several object-oriented relationships. The remaining, and most important, research question in this paper concerns the effect of changes on the correctness of programs. Note that our collection consists of concurrent Java programs, each of which has its own specifications.
First, Figure 12 shows two important aspects: unreachable states and faulty states. Our empirical results clearly show that the number of unreachable states increases as more changes are made, and the number of faulty states likewise increases with the rate of change. This relation shows that correctness is a critical aspect when measuring the effect of changes.
Figure 12. Unreachable and Faulty States during Changes
Figure 13 shows that the ratio of program specifications that are satisfied decreases as more changes are applied, meaning that temporal and logical errors can arise from changes as well.
Figure 13. The Ratio of Satisfied Specifications
7. Conclusion
In this paper, we presented a hybrid framework to measure the impact of rate of change on different object-oriented relationships. The contributions of this research are: 1) studying the impact of rate of change on the inheritance relationship, coupling, cohesion, and class size; and 2) measuring the impact of rate of change on program specifications.
The experiments performed in this research were based on three tools to extract static information and analyze the code: Code Metrics, DMS, and UPPAAL. We evaluated the proposed method and reported the correlations among different OO relationships and rate of change. The results indicate that rate of change positively affects the complexity of program structure in terms of inheritance, cohesion, and coupling. Furthermore, we conclude that rate of change negatively affects the correctness of programs' specifications.
References