
Commit 17176ee

Provide script to import git log as csv
1 parent 9e2ca7f commit 17176ee

17 files changed (+439 −11 lines)

COMMANDS.md

Lines changed: 33 additions & 0 deletions
@@ -24,6 +24,10 @@
  - [Setup jQAssistant Java Code Analyzer](#setup-jqassistant-java-code-analyzer)
  - [Download Maven Artifacts to analyze](#download-maven-artifacts-to-analyze)
  - [Reset the database and scan the java artifacts](#reset-the-database-and-scan-the-java-artifacts)
+ - [Import git log](#import-git-log)
+ - [Parameters](#parameters)
+ - [Resolving git files to code files](#resolving-git-files-to-code-files)
+ - [Import aggregated git log](#import-aggregated-git-log)
  - [Database Queries](#database-queries)
  - [Cypher Shell](#cypher-shell)
  - [HTTP API](#http-api)
@@ -214,6 +218,35 @@ enhance the data further with relationships between artifacts and packages.

Be aware that this script deletes all previous relationships and nodes in the local Neo4j Graph database.

### Import git log

Use [importGitLog.sh](./scripts/importGitLog.sh) to import git log data into the Graph.
It uses `git log` to extract commits, their authors, and the names of the files changed with them. These are stored in an intermediate CSV file and are then imported into Neo4j with the following schema:

```Cypher
(Git:Log:Author)-[:AUTHORED]->(Git:Log:Commit)-[:CONTAINS]->(Git:Log:File)
```

👉**Note:** Commit messages containing `[bot]` are filtered out to ignore changes made by bots.

#### Parameters

The optional parameter `--repository directory-path-to-a-git-repository` can be used to select a different directory for the repository. By default, the `source` directory within the analysis workspace directory is used. This command only needs the git history to be present, so a `git clone --bare` is sufficient. If the `source` directory is also used for the analysis, then a full git clone is of course needed (as for Typescript).

#### Resolving git files to code files

After the git log data has been imported successfully, [Add_RESOLVES_TO_relationships_to_git_files.cypher](./cypher/GitLog/Add_RESOLVES_TO_relationships_to_git_files.cypher) is used to try to resolve the imported git file names to code files. This first attempt covers most cases, but not all of them. With this approach it is, for example, not possible to distinguish identical file names in different Java jars from the git source files of a mono repo.

You can use [List_unresolved_git_files.cypher](./cypher/GitLog/List_unresolved_git_files.cypher) to find code files that couldn't be matched to git file names and [List_ambiguous_git_files.cypher](./cypher/GitLog/List_ambiguous_git_files.cypher) to find ambiguously resolved git files. If you have any ideas on how to improve this, feel free to [open an issue](https://github.com/JohT/code-graph-analysis-pipeline/issues/new).

### Import aggregated git log

Use [importAggregatedGitLog.sh](./scripts/importAggregatedGitLog.sh) to import git log data in an aggregated form into the Graph. It works similarly to the [full git log version above](#import-git-log). The only difference is that not every single commit is imported. Instead, changes are grouped per month, including their commit count. This is sufficient in many cases and reduces data size and processing time significantly. Here is the resulting schema:

```Cypher
(Git:Log:Author)-[:AUTHORED]->(Git:Log:ChangeSpan)-[:CONTAINS]->(Git:Log:File)
```

## Database Queries

### Cypher Shell

GETTING_STARTED.md

Lines changed: 10 additions & 8 deletions
@@ -24,7 +24,7 @@ Please read through the [Prerequisites](./README.md#hammer_and_wrench-prerequisi
cd MyFirstAnalysis
```

- 1. Choose an initial password for Neo4j
+ 1. Choose an initial password for Neo4j if not already done

```shell
export NEO4J_INITIAL_PASSWORD=theinitialpasswordthatihavechosenforneo4j
@@ -36,9 +36,11 @@ Please read through the [Prerequisites](./README.md#hammer_and_wrench-prerequisi
mkdir artifacts
```

- 1. Move the artifacts you want to analyze into the `artifacts` directory
+ 1. Move the artifacts (Java jar or Typescript analysis json files) you want to analyze into the `artifacts` directory

- 1. Optionally run a predefined script to download artifacts
+ 1. Optionally, create a `source` directory and clone the corresponding source code into it to also gather git log data.
+
+ 1. As an alternative to the steps above, run one of the predefined download scripts:

```shell
./../../scripts/downloader/downloadAxonFramework.sh <version>
@@ -48,31 +50,31 @@ Please read through the [Prerequisites](./README.md#hammer_and_wrench-prerequisi

1. Start the analysis

   - Without any additional dependencies:

     ```shell
     ./../../scripts/analysis/analyze.sh --report Csv
     ```

   - Jupyter notebook reports when Python and Conda are installed:

     ```shell
     ./../../scripts/analysis/analyze.sh --report Jupyter
     ```

   - Graph visualizations when Node.js and npm are installed:

     ```shell
     ./../../scripts/analysis/analyze.sh --report Jupyter
     ```

   - All reports with Python, Conda, Node.js and npm installed:

     ```shell
     ./../../scripts/analysis/analyze.sh
     ```

   - To explore the database yourself, without automatically generated reports and with no additional requirements:

     ```shell
     ./../../scripts/analysis/analyze.sh --explore
     ```

README.md

Lines changed: 5 additions & 3 deletions
@@ -91,7 +91,9 @@ This could be as simple as running the following command in your Typescript proj
npx --yes @jqassistant/ts-lce
```

- - Copy the resulting json file (e.g. `.reports/jqa/ts-output.json`) into the "artifacts" directory for your analysis work directory. Custom subdirectories within "artifacts" are also supported.
+ - It is recommended to put the cloned source code repository into a directory called `source` within the analysis workspace so that it will also be picked up to import git log data.
+
+ - Copy the resulting json file (e.g. `.reports/jqa/ts-output.json`) into the `artifacts` directory for your analysis work directory. Custom subdirectories within `artifacts` are also supported.

## :rocket: Getting Started

@@ -105,7 +107,7 @@ The [Code Structure Analysis Pipeline](./.github/workflows/java-code-analysis.ym
- [Checkout GIT Repository](https://github.com/actions/checkout)
- [Setup Java](https://github.com/actions/setup-java)
- [Setup Python with Conda](https://github.com/conda-incubator/setup-miniconda) package manager [Mambaforge](https://github.com/conda-forge/miniforge#mambaforge)
- - Download artifacts that contain the code to be analyzed [scripts/artifacts](./scripts/downloader/)
+ - Download artifacts and optionally source code that contain the code to be analyzed [scripts/downloader](./scripts/downloader)
- Setup [Neo4j](https://neo4j.com) Graph Database ([analysis.sh](./scripts/analysis/analyze.sh))
- Setup [jQAssistant](https://jqassistant.github.io/jqassistant/doc) for Java and [Typescript](https://github.com/jqassistant-plugin/jqassistant-typescript-plugin) analysis ([analysis.sh](./scripts/analysis/analyze.sh))
- Start [Neo4j](https://neo4j.com) Graph Database ([analysis.sh](./scripts/analysis/analyze.sh))
@@ -176,7 +178,7 @@ The [Code Structure Analysis Pipeline](./.github/workflows/java-code-analysis.ym
👉 The script will automatically be included because of the directory and its name ending with "Jupyter.sh".

- How can I add another code base to be analyzed automatically?
- 👉 Create a new artifacts download script in the [scripts/downloader](./scripts/downloader/) directory. Take for example [downloadAxonFramework.sh](./scripts/downloader/downloadAxonFramework.sh) as a reference.
+ 👉 Create a new download script in the [scripts/downloader](./scripts/downloader/) directory. Take [downloadAxonFramework.sh](./scripts/downloader/downloadAxonFramework.sh) as a reference, for example.
  👉 Run the script separately before executing [analyze.sh](./scripts/analysis/analyze.sh), as is also done in the [pipeline](./.github/workflows/java-code-analysis.yml).

- How can I trigger a full re-scan of all artifacts?
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@

```Cypher
// Connect git files to Java files with a RESOLVES_TO relationship if their names match

MATCH (file:File&!Git)
WITH file, replace(file.fileName, '.class', '.java') AS fileName
MATCH (git_file:File&Git)
WHERE git_file.fileName ENDS WITH fileName
MERGE (git_file)-[:RESOLVES_TO]->(file)
SET git_file.resolved = true
RETURN labels(file)[0..4] AS labels
      ,count(DISTINCT fileName) AS numberOfFileNames
      ,collect(DISTINCT fileName)[0..4] AS examples
```
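The matching rule above can be illustrated in plain shell: the `.class` suffix is swapped for `.java`, and the git file path only has to end with the resulting name (all paths below are invented for illustration):

```shell
code_file="org/example/Foo.class"
java_name="${code_file%.class}.java"   # -> org/example/Foo.java
git_file="backend/src/main/java/org/example/Foo.java"
case "$git_file" in                    # shell analogue of Cypher's ENDS WITH
  *"$java_name") echo "resolves" ;;
  *)             echo "unresolved" ;;
esac
```

This suffix comparison is also why identical file names in different jars or mono-repo modules can resolve ambiguously.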
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@

```Cypher
// Delete all Git log data in the Graph

MATCH (n:Git)
CALL { WITH n
  DETACH DELETE n
} IN TRANSACTIONS OF 1000 ROWS
RETURN count(n) AS numberOfDeletedRows
```
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@

```Cypher
// Import aggregated git log CSV data with the following schema: (Git:Log:Author)-[:AUTHORED]->(Git:Log:ChangeSpan)-[:CONTAINS]->(Git:Log:File)

LOAD CSV WITH HEADERS FROM "file:///aggregatedGitLog.csv" AS row
CALL { WITH row
  MERGE (git_author:Git:Log:Author {name: row.author, email: row.email})
  MERGE (git_change_span:Git:Log:ChangeSpan {
    year: toInteger(row.year),
    month: toInteger(row.month),
    commits: toInteger(row.commits)
  })
  MERGE (git_file:Git:Log:File {fileName: row.filename})
  MERGE (git_author)-[:AUTHORED]->(git_change_span)
  MERGE (git_change_span)-[:CONTAINS]->(git_file)
} IN TRANSACTIONS OF 1000 ROWS
RETURN count(DISTINCT row.author) AS numberOfAuthors
      ,count(DISTINCT row.filename) AS numberOfFiles
      ,sum(toInteger(row.commits)) AS numberOfCommits
```
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@

```Cypher
// Import git log CSV data with the following schema: (Git:Log:Author)-[:AUTHORED]->(Git:Log:Commit)-[:CONTAINS]->(Git:Log:File)

LOAD CSV WITH HEADERS FROM "file:///gitLog.csv" AS row
CALL { WITH row
  MERGE (git_author:Git:Log:Author {name: row.author, email: row.email})
  MERGE (git_commit:Git:Log:Commit {
    hash: row.hash,
    message: row.message,
    timestamp: datetime(row.timestamp),
    timestamp_unix: toInteger(row.timestamp_unix)
  })
  MERGE (git_file:Git:Log:File {fileName: row.filename})
  MERGE (git_author)-[:AUTHORED]->(git_commit)
  MERGE (git_commit)-[:CONTAINS]->(git_file)
} IN TRANSACTIONS OF 1000 ROWS
RETURN count(DISTINCT row.author) AS numberOfAuthors
      ,count(DISTINCT row.filename) AS numberOfFiles
      ,count(DISTINCT row.hash) AS numberOfCommits
```
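For reference, a `gitLog.csv` accepted by this import would have one row per commit and file. The values and column order below are invented for illustration; `LOAD CSV WITH HEADERS` matches columns by header name, and the file has to be readable by Neo4j (by default from its `import` directory) for `file:///gitLog.csv` to resolve:

```csv
hash,author,email,message,timestamp,timestamp_unix,filename
4a5b6c7,Jane Doe,jane@example.com,Fix null check,2023-07-01T10:15:30+02:00,1688199330,src/main/java/Example.java
4a5b6c7,Jane Doe,jane@example.com,Fix null check,2023-07-01T10:15:30+02:00,1688199330,src/main/java/Other.java
```

Note that commit messages containing commas would need CSV quoting.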
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

```Cypher
// Create index for author name (git data)

CREATE INDEX INDEX_AUTHOR_NAME IF NOT EXISTS FOR (n:Author) ON (n.name)
```
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

```Cypher
// Create index for change span year (aggregated git data)

CREATE INDEX INDEX_CHANGE_SPAN_YEAR IF NOT EXISTS FOR (n:ChangeSpan) ON (n.year)
```
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

```Cypher
// Create index for commit hash (git data)

CREATE INDEX INDEX_COMMIT_HASH IF NOT EXISTS FOR (n:Commit) ON (n.hash)
```
