Compare commits

..

157 Commits

Author SHA1 Message Date
622dd91ba6
node-26
All checks were successful
Unittests / unittests (push) Successful in 1m7s
Unittests / deploy-dev (push) Successful in 1m2s
Release / deploy-production (release) Successful in 2m17s
2024-11-19 14:50:14 +01:00
bc875b83a4
Add Bun
All checks were successful
Unittests / unittests (push) Successful in 14s
Unittests / deploy-dev (push) Successful in 47s
2024-07-13 12:05:40 +02:00
27152bad72
Debug line
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 46s
2024-05-28 00:38:33 +02:00
27948ee5b6
HTTP API for snapshots
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 41s
2024-05-27 18:51:56 +02:00
8c5a419efc
Fix default tech
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 43s
2024-05-26 01:10:14 +02:00
88262a27d8
Better debug
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 43s
2024-05-26 01:08:44 +02:00
088b6bdcf6
Get isPasswordSet when getting app
All checks were successful
Unittests / unittests (push) Successful in 8s
Unittests / deploy-dev (push) Successful in 41s
2024-05-25 16:23:10 +02:00
5f80da8cbd
Is password set field
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 43s
2024-05-25 14:19:03 +02:00
2077271306
Clear password feature
All checks were successful
Unittests / unittests (push) Successful in 50s
Unittests / deploy-dev (push) Successful in 1m27s
2024-05-25 13:35:47 +02:00
cx
9e82bfc2b5 Chagne node-x ip
All checks were successful
Unittests / unittests (push) Successful in 1m23s
Unittests / deploy-dev (push) Successful in 1m15s
2024-04-15 10:30:05 +00:00
e0b5832e75
Fix deps and crashing when deleting nonexistant app
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 59s
2024-03-03 02:55:05 +01:00
863d857283
Restart container when tech changes
All checks were successful
Unittests / unittests (push) Successful in 10s
Unittests / deploy-dev (push) Successful in 47s
2024-02-01 22:02:22 +01:00
1c5b8d8f50
Add better debug message to SetTechnology
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 46s
2024-02-01 21:57:34 +01:00
bc4b6c7bff
Possibility to set tech during update
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 53s
2024-02-01 20:57:13 +01:00
45899f3b0c
Set owner of metadata
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 45s
2024-01-31 00:16:40 +01:00
5513da35b3
Fix endpoint name for metadata
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 44s
2024-01-30 23:52:33 +01:00
31ba1ce5a3
Save metadata endpoint
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 46s
2024-01-30 23:50:03 +01:00
a3d0ee92ce
Stats errors in Sentry
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 52s
2024-01-26 21:43:45 +01:00
7a170e56d6
Gather more errors in Sentry
All checks were successful
Unittests / unittests (push) Successful in 9s
Unittests / deploy-dev (push) Successful in 54s
2024-01-26 21:15:14 +01:00
4e9398512e
Removing drone pipeline 2024-01-26 21:13:58 +01:00
a4e2bac0ff
Adding Sentry
All checks were successful
Unittests / unittests (push) Successful in 16s
Unittests / deploy-dev (push) Successful in 47s
2024-01-26 18:46:19 +01:00
02cdf5f815
Fix archive path
All checks were successful
Unittests / unittests (push) Successful in 14s
Unittests / deploy-dev (push) Successful in 1m0s
2023-12-13 17:45:17 +01:00
036587a77a
Volume preparation fix
All checks were successful
Unittests / unittests (push) Successful in 14s
Unittests / deploy-dev (push) Successful in 1m9s
2023-12-13 17:40:28 +01:00
6d62b200a4
Change suffix to .tar.zst
All checks were successful
Unittests / unittests (push) Successful in 17s
Unittests / deploy-dev (push) Successful in 1m7s
2023-12-13 17:32:20 +01:00
9564118f40
Add possibility to prepare /srv from an archive
All checks were successful
Unittests / unittests (push) Successful in 18s
Unittests / deploy-dev (push) Successful in 1m13s
2023-12-08 19:01:07 +01:00
0ad979b240
Pipeline migrated to gitea
All checks were successful
Unittests / unittests (push) Successful in 14s
Unittests / deploy-dev (push) Successful in 1m5s
2023-11-26 01:43:02 +01:00
9224675139
Fix sql data type
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
continuous-integration/drone/promote/production Build is failing
2023-10-15 01:20:28 +02:00
5fc6f39529
Our own JSON map
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-15 00:57:41 +02:00
fe8aa885e7
Add env
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-14 01:13:02 +02:00
00beda8137
Add env
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-14 01:10:14 +02:00
37ca4ece39
Disable OOM killer for apps with less than 1.5 GB of RAM
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-09-29 13:33:49 +02:00
072a643c1d
Disable oomkiller just for smaller containers
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-09-25 19:46:42 +02:00
ed5061fd58
Disable OOM killer
All checks were successful
continuous-integration/drone/push Build is passing
2023-09-25 18:47:11 +02:00
7f8ed1c018
Fix error when technology couldn't be determined
All checks were successful
continuous-integration/drone/push Build is passing
2023-09-23 23:41:00 +02:00
dab9b52953
Add support for Debian 12 builds
All checks were successful
continuous-integration/drone Build is passing
continuous-integration/drone/promote/production Build is passing
2023-09-16 11:40:34 +02:00
0b5ded2abf
Fix debian 10 deployment
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-07-01 00:02:01 +02:00
494e31c4b1
Add saturn's keys
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is failing
2023-06-30 23:49:06 +02:00
d25e0aba36
Fix production deploy
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is failing
2023-06-30 23:39:12 +02:00
6390fb19bb
Fix production deployment
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is failing
2023-06-30 23:27:11 +02:00
e9fb6a6b46
SSH keys related logging
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is failing
2023-06-30 00:46:15 +02:00
4a4da2c080
Fix owner issue
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-04-29 00:02:33 +02:00
617f818df4
Fix crash when new app is created
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-04-25 19:11:05 +02:00
c6f9620945
Fix link
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-24 22:42:31 +02:00
af1e69e594
Fix symlink
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-24 22:35:58 +02:00
6fd3278ee8
Debug lines
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-24 22:29:59 +02:00
1ccf4b8301
Missing get_active_tech handler
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-24 22:24:37 +02:00
37a5297c88
Get active tech feature
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-24 14:19:58 +02:00
5b0a46951b
Skip host and deploy keys in another level
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-04-21 12:01:34 +02:00
16bb4e71d5
Return empty strings for host and ssh keys if container is down
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-04-21 11:13:02 +02:00
8c8ecc6379
Fix obtaining host key
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/promote/production Build is passing
2023-04-19 23:40:55 +02:00
d465421fa0
Prefix to deploy keys
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-14 19:22:10 +02:00
4c2bcdd3e7
Fix SSH deploy keyType
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-04-14 19:19:45 +02:00
45996c4d1b
Fix host keys split
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-14 19:12:21 +02:00
3ae383d1cd
Error when host key is not found
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-14 19:05:11 +02:00
7f07a50bfd
Host key refactoring
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 22:54:14 +02:00
036aa4fc28
SSH keys fixes
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 22:49:06 +02:00
bcc641307a
Fix key generator
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 22:37:39 +02:00
da5ad13ae3
Fix commands
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 22:30:11 +02:00
d7b878bbbc
SSH host keyscan typo fix
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 20:17:29 +02:00
9a37518431
Get host ssh key handler
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 20:07:31 +02:00
8051310677
Get SSH host keys message handler
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2023-04-11 22:56:15 +02:00
df5a390680
Add SSH deploy keys generator
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-11 22:49:50 +02:00
f3f50b0ace
Fix race condition in create endpoint
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2023-02-05 00:42:49 +01:00
779d9ba95a
Set password, ssh keys and tech in single step during create
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-04 10:33:47 +01:00
d469c813a1
AppPort support
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-28 11:46:59 +01:00
b7dfa22f94
Fix tests
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-10-04 20:00:29 +02:00
938f6f89b6
Add oomkilled status to the app in the database
Some checks failed
continuous-integration/drone/push Build is failing
2022-10-04 19:54:26 +02:00
b15e85474e
tarBin
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-07-23 00:36:57 +02:00
8adbf84362
Switch to /bin/tar
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-23 00:33:34 +02:00
5b63a0d9aa
CI: zstd fix
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-23 00:28:29 +02:00
797bc6d654
Update unittest image
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-23 00:26:30 +02:00
0725b352c6
Sleep before minio boots
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-23 00:22:17 +02:00
96071cdfcb
Use tar to restore instead of archiver
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-22 19:49:37 +02:00
37d884e665
Use tar instead of archive
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-22 19:47:59 +02:00
0ea302d688
Update of archive lib
Some checks failed
continuous-integration/drone/push Build is failing
continuous-integration/drone Build is passing
2022-07-22 18:44:17 +02:00
5caa570055
Potential fix of set tech in the container + logging
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-06-15 20:16:20 +02:00
61adbd7a28
Stop destroys the container
All checks were successful
continuous-integration/drone/push Build is passing
2022-05-27 19:12:19 +02:00
e8d952cac0
Fast get for get without status
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-27 18:24:39 +02:00
69e147ccf9
Different pipeline for Debian 11
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-21 22:42:17 +02:00
1017288a4c
New production nodes
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-21 21:29:47 +02:00
0cc986d22f
New test node
All checks were successful
continuous-integration/drone/push Build is passing
2022-05-20 23:53:54 +02:00
24fffe736e
Fix schema creation for performance index
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-07 02:21:21 +02:00
473d561b84
Make it float
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-06 19:08:23 +02:00
52d8f7b250
stats elapsed time in seconds
All checks were successful
continuous-integration/drone/push Build is passing
2022-05-06 19:05:04 +02:00
02ebfb8eea
Metric for stats elapsed time
All checks were successful
continuous-integration/drone/push Build is passing
2022-05-06 18:59:58 +02:00
68121fda15
Making metrics faster
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-05-06 18:40:22 +02:00
35a0c5acd1
Don't care about non existing containers when rebuilding
All checks were successful
continuous-integration/drone/push Build is passing
2022-04-21 22:31:58 +02:00
3f1a0d9a8a
Getting arch back
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-04-21 22:21:56 +02:00
29866d0e75
Fix cases where /opt/techs doesn't exist
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-04-20 14:06:16 +02:00
fea1a46a11
Disable platform specification
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-24 15:13:06 +01:00
7334337c93
New bot pattern shiziyama
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-10 12:25:25 +01:00
7add860d83
Quick fix of update handler
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2022-02-09 16:54:30 +01:00
00f2d418d6
Youtube DL pattern
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
continuous-integration/drone Build is passing
2022-02-08 00:43:58 +01:00
2405383ff1
More container tests
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-07 23:59:35 +01:00
157afdcdbe
A few more container's tests
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-07 18:30:42 +01:00
c11de8a3e9
Removing volume directory fix
Directory mounted in the container wasn't removed along the container
so we ended up with a lot of empty directories on the server.
2022-02-07 18:20:22 +01:00
81b3d45a4b
Fix GetProcesses in docker
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-07 17:34:10 +01:00
1d1a225be6
Fix stats inputs
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-07 17:18:44 +01:00
69e46f577a
Docker socket fix
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-07 17:12:23 +01:00
676ddf2136
Fixes
All checks were successful
continuous-integration/drone/push Build is passing
*Fix get action
*Taking common out of modules
* initial glue tests
* dynamic docker sock for testing
* Stats rewroked to be more encapsulated
* Fix of stats gathering in background
2022-02-07 17:00:16 +01:00
722d2a1c86
Fix of missing docker sock config in the container instance
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-06 02:02:30 +01:00
c8a93d7993
Disabling the test again and let's get it into X instance
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-06 01:43:10 +01:00
1fb659b1e6
Wait for the container
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-06 01:40:04 +01:00
e50e139f60
Fix containers test in Makefile
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-06 01:15:55 +01:00
8924db5712
DIND for container testing
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-06 01:13:45 +01:00
d43529fbb8
Moving config from containers module
All checks were successful
continuous-integration/drone/push Build is passing
For better encapsulation
2022-02-06 00:49:32 +01:00
e58d6462a9
Massive handlers refactoring
All checks were successful
continuous-integration/drone/push Build is passing
Taking logic from handler into glue module
Add tests for apps
Updated docker library
2022-02-06 00:01:05 +01:00
fdcdca237f
Upgade of docker client, ...
All checks were successful
continuous-integration/drone/push Build is passing
Docker client upgrade,
Miner detection
Flags for apps
2022-02-05 02:22:53 +01:00
49f1a03b42
Fix GORM type for flags
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-04 18:09:26 +01:00
caf791548d
Fix handlers
All checks were successful
continuous-integration/drone/push Build is passing
2022-02-03 01:41:00 +01:00
27754caaf0
Drop one of the tests
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-03 01:36:08 +01:00
a213fac34a
Flags support
Some checks failed
continuous-integration/drone/push Build is failing
2022-02-03 01:31:47 +01:00
2a34177ab8
Possibility to set technology version
Some checks failed
continuous-integration/drone/push Build is failing
2021-12-23 14:01:27 +01:00
6f532b052e
Set proper JSON technology JSON output
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-12-22 01:29:56 +01:00
a4beb9727e
Fix primary tech in case of single technology image
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-12-22 01:12:42 +01:00
c2e9b1cc18
Fix single technology image
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
It caused premature ending of the status processing in the admin.
2021-12-22 01:02:23 +01:00
7fe8dfcd9b
Removing old node server
All checks were successful
continuous-integration/drone/push Build is passing
2021-12-21 19:03:10 +01:00
fd3c85bed4
Hide gathering tech info errors
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is failing
2021-12-20 19:13:45 +01:00
2a545daedb
FIx techs listing
All checks were successful
continuous-integration/drone/push Build is passing
2021-12-20 18:51:34 +01:00
7c46fdd886
Primary tech and list of available techs
All checks were successful
continuous-integration/drone/push Build is passing
This is initial support limited to running apps only and only for get handler.
2021-12-20 04:53:22 +01:00
3fdabb2229
Fixing returned value of getSnapshotDownloadLinkEventHandler
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-04 00:20:30 +01:00
c5667f99b8
Stack overflow fix
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-04 00:09:42 +01:00
3120a8c6ee
Returning functional snapshot download link
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-03 23:50:58 +01:00
872826c0b1
Download link generator for snapshots
All checks were successful
continuous-integration/drone/push Build is passing
Three days expiration
2021-11-03 22:32:38 +01:00
66476df2bc
Fix removing of .chowned
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-11-01 21:38:24 +01:00
564e9ec4f1
Remove .chowned file to restore ownership of files when restore from snapshot
Some checks failed
continuous-integration/drone/push Build is failing
2021-11-01 21:30:08 +01:00
4e56ab7657
Size of the snapshot in metadata fix
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-01 21:19:42 +01:00
d2b935b85c
Extension fix
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-01 19:55:38 +01:00
12c2b14b6f
Switch to TarZstd 2021-11-01 19:06:23 +01:00
f65702b416
Changing default of SnapshotsIndexLabel to owner_id
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-01 01:26:14 +01:00
66476275e4
Fix of snapshotProcessor initiation
All checks were successful
continuous-integration/drone/push Build is passing
Second try
2021-11-01 01:17:35 +01:00
801b082785
Fix snapshotProcessor initiation
All checks were successful
continuous-integration/drone/push Build is passing
2021-11-01 01:12:28 +01:00
d4dddc0df3
Snapshot metadata storage reworked
All checks were successful
continuous-integration/drone/push Build is passing
Because there is a limit of filesize 255 bytes in Linux which we hit during testing.
2021-11-01 01:05:59 +01:00
95ae31fdfb
Reporting error problem fix
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-31 20:39:50 +01:00
fc78a7f375
Restore from snapshot logline
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-31 19:00:06 +01:00
dae67e6be6
Get snapshot handler
All checks were successful
continuous-integration/drone/push Build is passing
It translates snapshot keys into metadata structure
2021-10-31 12:01:24 +01:00
1a107e2a28
Creating new apps from snapshots support
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-31 02:29:58 +02:00
7616eef12b
Add size of the snapshot into the structure
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-30 13:29:35 +02:00
81bcdcf00c
Encapsulate snapshot list replies with key
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-30 13:17:33 +02:00
615029f6c0
Fix tests
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-29 01:09:49 +02:00
03eb87d9dd
Proper name for JSON format of snapshot record
Some checks failed
continuous-integration/drone/push Build is failing
Sandbox renamed to dev in the deploy pipeline
2021-10-29 01:03:17 +02:00
dc478714bf
Handler for list_snapshots_by_label message
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-29 00:57:06 +02:00
4e56466b07
Snapshot: continue on error
All checks were successful
continuous-integration/drone/push Build is passing
This should help with sock files error
2021-10-28 01:22:50 +02:00
fa277e9f08
Typo fix
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-10-27 01:26:04 +02:00
fa857b1913
Setting up the corrent production servers
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-27 01:23:39 +02:00
65bdabd78c
More deploy messages, fix mkdir -p issue
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-10-27 01:20:26 +02:00
2640690cc6
Production deploy
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone Build is passing
2021-10-27 01:11:57 +02:00
3241db2ef1
Temporarly changed node-x domain to address
All checks were successful
continuous-integration/drone/push Build is passing
2021-10-27 00:08:29 +02:00
03b9dd1c58
Set bulleye for building sandbox binary
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-27 00:01:01 +02:00
d15aee9fa0
Fix SSH key
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:50:26 +02:00
b534ee7c05
Print key to check
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:46:21 +02:00
5ba061895b
Fix
Some checks failed
continuous-integration/drone/push Build is failing
continuous-integration/drone Build is failing
2021-10-26 23:38:19 +02:00
119baa48c5
Sandbox deploy fix
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:24:46 +02:00
4b823afc51
Key scan
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:19:54 +02:00
a9cf273650
Pipeline dep fix
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:14:34 +02:00
0a6789c56c
Sandbox update fix 2021-10-26 23:13:20 +02:00
3fb5c217e4
Sandbox deploy
Some checks failed
continuous-integration/drone/push Build is failing
2021-10-26 23:08:07 +02:00
44 changed files with 6696 additions and 1507 deletions


@ -1,25 +0,0 @@
kind: pipeline
type: docker
name: testing
steps:
- name: test
image: golang
environment:
SNAPSHOTS_S3_ENDPOINT: minio:9000
TEST_S3_ENDPOINT: minio:9000
commands:
- go mod tidy
- make test
services:
- name: minio
image: minio/minio:latest
environment:
MINIO_ROOT_USER: test
MINIO_ROOT_PASSWORD: testtest
command:
- server
- /data
- --console-address
- :9001


@@ -0,0 +1,33 @@
+name: Release
+
+on:
+  release:
+    types: [published]
+  # push:
+  #   branches: [main]
+  workflow_dispatch: {}
+
+jobs:
+  deploy-production:
+    runs-on: [amd64, prod]
+    env:
+      NODES: node-22.rosti.cz node-23.rosti.cz node-24.rosti.cz node-25.rosti.cz node-26.rosti.cz
+    steps:
+      - uses: actions/checkout@v4
+      - name: deploy
+        run: |
+          echo "Building for Debian 12 .."
+          docker run --rm --privileged -ti -v `pwd`:/srv golang:1.21-bookworm /bin/sh -c "cd /srv && go build"
+
+          for NODE in $NODES; do
+            echo "\033[0;32mDeploying $NODE\033[0m"
+            echo "\033[1;33m.. scanning SSH keys\033[0m"
+            ssh -o "StrictHostKeyChecking=no" root@$NODE echo "Setting up key"
+            echo "\033[1;33m.. copying the binary\033[0m"
+            scp node-api root@$NODE:/usr/local/bin/node-api_
+            echo "\033[1;33m.. replacing the binary\033[0m"
+            ssh root@$NODE mv /usr/local/bin/node-api_ /usr/local/bin/node-api
+            echo "\033[1;33m.. restarting service\033[0m"
+            ssh root@$NODE systemctl restart node-api
+          done


@@ -0,0 +1,62 @@
+name: Unittests
+
+on:
+  push:
+    branches: [main]
+  workflow_dispatch: {}
+
+jobs:
+  unittests:
+    runs-on: [amd64, moon]
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v3
+        with:
+          go-version: 1.21
+      - name: start minio
+        run: |
+          docker run -d --rm --name nodeapi_minio -p 9000:9000 -p 9001:9001 -e MINIO_ROOT_USER=test -e MINIO_ROOT_PASSWORD=testtest minio/minio:latest server /data --console-address :9001
+      - name: deps
+        run: apt update && apt install -y tar zstd
+      - name: Test
+        run: |
+          make test
+      - name: stop minio
+        if: always()
+        run: |
+          docker stop nodeapi_minio
+    # TODO: probably not supported by Gitea workflows yet
+    # services:
+    #   minio:
+    #     image: minio/minio:latest
+    #     env:
+    #       MINIO_ROOT_USER: test
+    #       MINIO_ROOT_PASSWORD: testtest
+    #     ports:
+    #       - 9001:9001
+    #     options: server /data --console-address :9001
+  deploy-dev:
+    runs-on: [amd64, moon]
+    env:
+      NODES: "192.168.1.33"
+    steps:
+      - uses: actions/checkout@v4
+      - name: deploy
+        run: |
+          # echo LS1
+          # ls
+          docker run --rm --privileged -ti -v `pwd`:/srv golang:1.20-buster /bin/sh -c "cd /srv && go build"
+          # docker run --rm --privileged -ti -v `pwd`:/srv golang:1.20-buster /bin/sh -c "cd /srv && echo LS2 && ls"
+
+          for NODE in $NODES; do
+            echo "\033[0;32mDeploying $NODE\033[0m"
+            echo "\033[1;33m.. scanning SSH keys\033[0m"
+            ssh -o "StrictHostKeyChecking=no" root@$NODE echo "Setting up key"
+            echo "\033[1;33m.. copying the binary\033[0m"
+            scp node-api root@$NODE:/usr/local/bin/node-api_
+            echo "\033[1;33m.. replacing the binary\033[0m"
+            ssh root@$NODE mv /usr/local/bin/node-api_ /usr/local/bin/node-api
+            echo "\033[1;33m.. restarting service\033[0m"
+            ssh root@$NODE systemctl restart node-api
+          done


@@ -1,7 +1,9 @@
 .PHONY: test
 test:
-	go test -v apps/*.go
-	go test -v apps/drivers/*.go
+	go test -race -v apps/*.go
+	go test -race -v apps/drivers/*.go
+	go test -race -v detector/*.go
+	# env DOCKER_SOCKET="unix:///var/run/docker.sock" go test -v containers/*.go # Doesn't work in Drone right now
 
 build:
 	#podman run --rm --privileged -ti -v ${shell pwd}:/srv docker.io/library/golang:1.14-stretch /bin/sh -c "cd /srv && go build"
@@ -18,4 +20,9 @@ minio:
 		-p 9001:9001 \
 		-e MINIO_ROOT_USER=test \
 		-e MINIO_ROOT_PASSWORD=testtest \
-		minio/minio server /data --console-address ":9001"
+		docker.io/minio/minio:latest server /data --console-address ":9001"
+
+.PHONY: clean
+clean:
+	-podman stop rosti-snapshots
+	-podman rm rosti-snapshots


@@ -4,3 +4,16 @@
 Node API is an microservice that runs on node servers. It provides interface between
 Docker and the admin site.
+
+## Test
+
+On Fedora run podman API:
+
+Root: sudo systemctl enable --now podman.socket
+Rootless: podman system service -t 0 --log-level=debug
+
+    set -x DOCKER_SOCKET unix:/run/user/1000/podman/podman.sock
+    # or
+    export DOCKER_SOCKET=unix:/run/user/1000/podman/podman.sock


@@ -1,6 +1,7 @@
 package drivers
 
 import (
+	"errors"
 	"io"
 	"io/ioutil"
 	"os"
@@ -83,3 +84,8 @@ func (f FSDriver) List(prefix string) ([]string, error) {
 func (f FSDriver) Delete(key string) error {
 	return os.Remove(path.Join(f.Path, key))
 }
+
+// GetDownloadLink is not implemented in this driver
+func (f FSDriver) GetDownloadLink(key string) (string, error) {
+	return "", errors.New("not implemented")
+}


@ -5,11 +5,14 @@ import (
"context" "context"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"time"
"github.com/minio/minio-go/v7" "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials" "github.com/minio/minio-go/v7/pkg/credentials"
) )
const s3LinkExpiration = 72 // in hours
// S3Driver provides basic interface for S3 storage compatible with Driver interface // S3Driver provides basic interface for S3 storage compatible with Driver interface
type S3Driver struct { type S3Driver struct {
// S3 storage related things // S3 storage related things
@ -155,3 +158,22 @@ func (s S3Driver) Delete(key string) error {
return nil return nil
} }
// GetDownloadLink returns URL of object with given key. That URL can be used to download the object.
func (s S3Driver) GetDownloadLink(key string) (string, error) {
client, err := s.getMinioClient()
if err != nil {
return "", fmt.Errorf("getting minio client error: %v", err)
}
expiry := time.Second * s3LinkExpiration * 60 * 60 // 72 hours
presignedURL, err := client.PresignedGetObject(context.Background(), s.Bucket, key, expiry, nil)
if err != nil {
return "", fmt.Errorf("generating presign URL error: %v", err)
}
if s.S3SSL {
return fmt.Sprintf("https://%s%s?%s", s.S3Endpoint, presignedURL.Path, presignedURL.RawQuery), nil
}
return fmt.Sprintf("http://%s%s?%s", s.S3Endpoint, presignedURL.Path, presignedURL.RawQuery), nil
}

View File

@ -77,3 +77,12 @@ func TestS3Delete(t *testing.T) {
assert.Nil(t, err)
assert.NotContains(t, keys, "testkey")
}
func TestGetDownloadLink(t *testing.T) {
err := testS3Driver.Write("testkey", []byte(testContent))
assert.Nil(t, err)
link, err := testS3Driver.GetDownloadLink("testkey")
assert.Nil(t, err)
assert.Contains(t, link, "/testsnapshots/testkey?X-Amz-Algorithm=AWS4-HMAC-SHA256")
}

View File

@ -19,4 +19,7 @@ type DriverInterface interface {
// Delete deletes one object
Delete(key string) error
// GetDownloadLink returns URL of object with given key. That URL can be used to download the object.
GetDownloadLink(key string) (string, error)
}
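For tests it can help to have a driver that satisfies this interface without S3 or a filesystem. A minimal in-memory sketch; `MemDriver` is hypothetical and the interface is reproduced here in reduced form (the real DriverInterface also has file-based methods for whole archives):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Reduced copy of DriverInterface for this sketch.
type DriverInterface interface {
	Write(key string, body []byte) error
	Read(key string) ([]byte, error)
	List(prefix string) ([]string, error)
	Delete(key string) error
	GetDownloadLink(key string) (string, error)
}

// MemDriver is a hypothetical in-memory driver, handy for unit tests.
type MemDriver struct {
	objects map[string][]byte
}

func NewMemDriver() *MemDriver {
	return &MemDriver{objects: map[string][]byte{}}
}

func (m *MemDriver) Write(key string, body []byte) error {
	m.objects[key] = body
	return nil
}

func (m *MemDriver) Read(key string) ([]byte, error) {
	body, ok := m.objects[key]
	if !ok {
		return nil, errors.New("key not found")
	}
	return body, nil
}

func (m *MemDriver) List(prefix string) ([]string, error) {
	keys := []string{}
	for key := range m.objects {
		if strings.HasPrefix(key, prefix) {
			keys = append(keys, key)
		}
	}
	return keys, nil
}

func (m *MemDriver) Delete(key string) error {
	delete(m.objects, key)
	return nil
}

// Like FSDriver, an in-memory driver has nothing to link to.
func (m *MemDriver) GetDownloadLink(key string) (string, error) {
	return "", errors.New("not implemented")
}

func main() {
	var driver DriverInterface = NewMemDriver()
	_ = driver.Write("app:1", []byte("data"))
	body, _ := driver.Read("app:1")
	fmt.Println(string(body))
}
```

Because every method is defined on `*MemDriver`, the pointer type satisfies the interface and can be dropped into SnapshotProcessor in place of the S3 or FS drivers.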

View File

@ -3,9 +3,11 @@ package apps
import (
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/sqlite"
"github.com/rosti-cz/node-api/detector"
)
// AppsProcessor encapsulates functions for apps manipulation
// This handles only the database part but not containers
type AppsProcessor struct {
DB *gorm.DB
}
@ -16,19 +18,19 @@ func (a *AppsProcessor) Init() {
}
// Get returns one app
func (a *AppsProcessor) Get(name string) (App, error) {
var app App
err := a.DB.Preload("Labels").Where("name = ?", name).First(&app).Error
if err != nil {
return app, err
}
return app, nil
}
// List returns all apps located on this node
func (a *AppsProcessor) List() (Apps, error) {
var apps Apps
err := a.DB.Preload("Labels").Find(&apps).Error
@ -36,7 +38,7 @@ func (a *AppsProcessor) List() (Apps, error) {
return nil, err
}
return apps, nil
}
// New creates new record about application in the database
@ -65,7 +67,7 @@ func (a *AppsProcessor) New(name string, SSHPort int, HTTPPort int, image string
}
// Update changes value about app in the database
func (a *AppsProcessor) Update(name string, SSHPort int, HTTPPort int, image string, CPU int, memory int, env map[string]string) (*App, error) {
var app App
err := a.DB.Where("name = ?", name).First(&app).Error
@ -96,6 +98,10 @@ func (a *AppsProcessor) Update(name string, SSHPort int, HTTPPort int, image str
app.HTTPPort = HTTPPort
}
if len(env) != 0 {
app.SetEnv(env)
}
validationErrors := app.Validate()
if len(validationErrors) != 0 {
return &app, ValidationError{
@ -109,21 +115,25 @@ func (a *AppsProcessor) Update(name string, SSHPort int, HTTPPort int, image str
}
// UpdateResources updates various metrics saved in the database
func (a *AppsProcessor) UpdateResources(name string, state string, OOMKilled bool, CPUUsage float64, memory int, diskUsageBytes int, diskUsageInodes int, flags detector.Flags, isPasswordSet bool) error {
err := a.DB.Model(&App{}).Where("name = ?", name).Updates(App{
State: state,
OOMKilled: OOMKilled,
CPUUsage: CPUUsage,
MemoryUsage: memory,
DiskUsageBytes: diskUsageBytes,
DiskUsageInodes: diskUsageInodes,
Flags: flags.String(),
IsPasswordSet: isPasswordSet,
}).Error
return err
}
// UpdateState sets container's state
func (a *AppsProcessor) UpdateState(name string, state string, OOMKilled bool) error {
err := a.DB.Model(&App{}).Where("name = ?", name).Updates(App{
State: state,
OOMKilled: OOMKilled,
}).Error
return err
}
@ -178,5 +188,5 @@ func (a *AppsProcessor) RemoveLabel(appName string, label string) error {
return err
}
return a.DB.Where("value = ? AND app_id = ?", label, app.ID).Delete(&Label{}).Error
}

194
apps/main_test.go Normal file
View File

@ -0,0 +1,194 @@
package apps
import (
"log"
"testing"
"github.com/jinzhu/gorm"
"github.com/rosti-cz/node-api/detector"
"github.com/stretchr/testify/assert"
// This is line from GORM documentation that imports database dialect
_ "github.com/jinzhu/gorm/dialects/sqlite"
)
const testDBPath = "file::memory:?cache=shared"
func testDB() *gorm.DB {
db, err := gorm.Open("sqlite3", testDBPath)
if err != nil {
log.Fatalln(err)
}
return db
}
func TestAppsProcessorGet(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("testapp_1234", 1000, 1001, "testimage", 2, 256)
assert.Nil(t, err)
app, err := processor.Get("testapp_1234")
assert.Nil(t, err)
assert.Greater(t, int(app.ID), 0)
}
func TestAppsProcessorList(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("testapp_2234", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
apps, err := processor.List()
assert.Nil(t, err)
assert.Equal(t, "testapp_1234", apps[0].Name)
}
func TestAppsProcessorNew(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("testapp_1234", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
}
func TestAppsProcessorUpdate(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("updateapp_1224", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
env := make(map[string]string)
env["TEST"] = "test"
_, err = processor.Update("updateapp_1224", 1052, 1053, "testimage2", 4, 512, env)
assert.Nil(t, err)
app, err := processor.Get("updateapp_1224")
assert.Nil(t, err)
assert.Equal(t, 1052, app.SSHPort)
assert.Equal(t, 1053, app.HTTPPort)
assert.Equal(t, "testimage2", app.Image)
assert.Equal(t, 4, app.CPU)
assert.Equal(t, 512, app.Memory)
}
func TestAppsProcessorUpdateResources(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("updateresources_1224", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
err = processor.UpdateResources("updateresources_1224", "running", true, 1000, 256, 100, 200, detector.Flags{"test"}, true)
assert.Nil(t, err)
app, err := processor.Get("updateresources_1224")
assert.Nil(t, err)
assert.Equal(t, "running", app.State)
assert.Equal(t, true, app.OOMKilled)
assert.Equal(t, float64(1000), app.CPUUsage)
assert.Equal(t, 256, app.MemoryUsage)
assert.Equal(t, 100, app.DiskUsageBytes)
assert.Equal(t, 200, app.DiskUsageInodes)
assert.Equal(t, true, app.IsPasswordSet)
assert.Contains(t, app.Flags, "test")
}
func TestAppsProcessorUpdateState(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("update_1224", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
err = processor.UpdateState("update_1224", "no-container", false)
assert.Nil(t, err)
app, err := processor.Get("update_1224")
assert.Nil(t, err)
assert.Equal(t, "no-container", app.State)
}
func TestAppsProcessorAddLabel(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("label_1224", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
err = processor.AddLabel("label_1224", "testlabel")
assert.Nil(t, err)
app, err := processor.Get("label_1224")
assert.Nil(t, err)
assert.Equal(t, "testlabel", app.Labels[0].Value)
}
func TestAppsProcessorRemoveLabel(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("label_1223", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
err = processor.AddLabel("label_1223", "testlabel")
assert.Nil(t, err)
app, err := processor.Get("label_1223")
assert.Nil(t, err)
assert.Equal(t, "testlabel", app.Labels[0].Value)
err = processor.RemoveLabel("label_1223", "testlabel")
assert.Nil(t, err)
app, err = processor.Get("label_1223")
assert.Nil(t, err)
assert.Equal(t, 0, len(app.Labels))
}
func TestAppsProcessorDelete(t *testing.T) {
processor := AppsProcessor{
DB: testDB(),
}
processor.Init()
err := processor.New("testapp_5234", 1002, 1003, "testimage", 2, 256)
assert.Nil(t, err)
app, err := processor.Get("testapp_5234")
assert.Nil(t, err)
assert.Equal(t, 256, app.Memory)
err = processor.Delete("testapp_5234")
assert.Nil(t, err)
_, err = processor.Get("testapp_5234")
assert.Error(t, err, "record not found")
}

View File

@ -1,29 +1,33 @@
package apps
import (
"encoding/json"
"errors"
"fmt"
"log"
"os"
"os/exec"
"path"
"strings"
"time"
"github.com/rosti-cz/node-api/apps/drivers"
uuid "github.com/satori/go.uuid"
)
const bucketName = "snapshots"
const dateFormat = "20060102_150405"
const keySplitCharacter = ":"
const metadataPrefix = "_metadata"
const metadataKeyTemplate = metadataPrefix + "/%s"
const tarBin = "/bin/tar"
// Snapshot contains metadata about a single snapshot
type Snapshot struct {
UUID string `json:"uuid"`
AppName string `json:"app_name"`
TimeStamp int64 `json:"ts"`
Labels []string `json:"labels"`
}
// SnapshotIndexLine is struct holding information about a single snapshot
@ -34,31 +38,15 @@ func (s *Snapshot) ToString() string {
}
// KeyName returns keyname used to store the snapshot in the storage
func (s *Snapshot) KeyName(indexLabel string) string {
labelValue := "-"
for _, label := range s.Labels {
if strings.HasPrefix(label, indexLabel+":") {
labelValue = label[len(indexLabel)+1:]
}
}
return fmt.Sprintf("%s%s%s%s%s%s%d", s.AppName, keySplitCharacter, s.UUID, keySplitCharacter, labelValue, keySplitCharacter, s.TimeStamp)
}
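The new key layout is `appName:UUID:indexLabelValue:timestamp`. A standalone sketch of the same logic; `keyName` is a hypothetical free-function version of the method above:

```go
package main

import (
	"fmt"
	"strings"
)

// keyName mirrors Snapshot.KeyName: the key is assembled as
// appName:UUID:labelValue:timestamp, where labelValue comes from the
// label matching the configured index label ("-" when none matches).
func keyName(appName, uuid string, labels []string, indexLabel string, timeStamp int64) string {
	labelValue := "-"
	for _, label := range labels {
		if strings.HasPrefix(label, indexLabel+":") {
			labelValue = label[len(indexLabel)+1:]
		}
	}
	return fmt.Sprintf("%s:%s:%s:%d", appName, uuid, labelValue, timeStamp)
}

func main() {
	fmt.Println(keyName("app_0102", "ABCDEF", []string{"userid:1"}, "userid", 1634510035))
}
```

This prints `app_0102:ABCDEF:1:1634510035`, the same value the TestSnapshot case further down expects from `KeyName("userid")`.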
type Snapshots []Snapshot
@ -66,47 +54,104 @@ type Snapshots []Snapshot
// SnapshotProcessor encapsulates everything related to snapshots. A snapshot is an archive of the app's
// directory content. It's stored in S3.
// The biggest problem in the implementation is the speed of looking up snapshots by labels.
// So we can set up one label that is used as an index and it will speed up filtering a little bit in
// certain situations - all snapshots of one owner, for example.
// This is a distributed interface for the snapshot storage and any node can handle the request message,
// so we don't have any locking mechanism here and we cannot create a single index of snapshots without
// significant time spent on it. Let's deal with it later. I think we are fine for the first 10k snapshots.
type SnapshotProcessor struct {
AppsPath string // Where apps are stored
TmpSnapshotPath string // Temporary location for snapshots
IndexLabel string // Label that will be used as an index to make listing faster
Driver drivers.DriverInterface
}
// saveMetadata saves metadata of a single snapshot into the metadata storage
func (s *SnapshotProcessor) saveMetadata(snapshot Snapshot) error {
body, err := json.Marshal(snapshot)
if err != nil {
return fmt.Errorf("marshal snapshot into JSON error: %v", err)
}
err = s.Driver.Write(fmt.Sprintf(metadataKeyTemplate, snapshot.UUID), body)
if err != nil {
return fmt.Errorf("copying metadata into S3 error: %v", err)
}
return nil
}
// loadMetadata returns metadata for given snapshot's UUID
func (s *SnapshotProcessor) loadMetadata(snapshotUUID string) (Snapshot, error) {
snapshot := Snapshot{}
body, err := s.Driver.Read(fmt.Sprintf(metadataKeyTemplate, snapshotUUID))
if err != nil {
return snapshot, fmt.Errorf("reading metadata from S3 error: %v", err)
}
err = json.Unmarshal(body, &snapshot)
if err != nil {
return snapshot, fmt.Errorf("decoding metadata from JSON error: %v", err)
}
return snapshot, nil
}
// deleteMetadata deletes metadata object
func (s *SnapshotProcessor) deleteMetadata(snapshotUUID string) error {
err := s.Driver.Delete(fmt.Sprintf(metadataKeyTemplate, snapshotUUID))
if err != nil {
return fmt.Errorf("delete metadata from S3 error: %v", err)
}
return nil
}
// metadataForSnapshotKey returns metadata for snapshot key
func (s *SnapshotProcessor) metadataForSnapshotKey(snapshotKey string) (Snapshot, error) {
parts := strings.Split(snapshotKey, keySplitCharacter)
if len(parts) != 4 {
return Snapshot{}, errors.New("wrong snapshot key format")
}
snapshot, err := s.loadMetadata(parts[1])
return snapshot, err
}
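metadataForSnapshotKey splits the key on `:` and uses the second field (the UUID) to locate the metadata object under the `_metadata/` prefix. A standalone sketch of that lookup; `metadataKeyForSnapshot` is a hypothetical helper:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// metadataKeyForSnapshot mirrors the parsing above: the snapshot key is
// appName:UUID:labelValue:timestamp and the UUID (second field) names
// the metadata object stored under the _metadata/ prefix.
func metadataKeyForSnapshot(snapshotKey string) (string, error) {
	parts := strings.Split(snapshotKey, ":")
	if len(parts) != 4 {
		return "", errors.New("wrong snapshot key format")
	}
	return fmt.Sprintf("_metadata/%s", parts[1]), nil
}

func main() {
	key, err := metadataKeyForSnapshot("app_0102:ABCDEF:1:1634510035")
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
}
```

This prints `_metadata/ABCDEF`. Note the scheme assumes app names never contain the split character `:` themselves.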
// CreateSnapshot creates an archive of an existing application and stores it in S3 storage.
// Returns the key under which the snapshot is stored and/or an error if there is any.
// Metadata about the snapshot is stored in an extra object under the _metadata/ prefix.
func (s *SnapshotProcessor) CreateSnapshot(appName string, labels []string) (string, error) {
snapshot := Snapshot{
UUID: uuid.NewV4().String(),
AppName: appName,
TimeStamp: time.Now().Unix(),
Labels: labels,
}
tmpSnapshotArchivePath := path.Join(s.TmpSnapshotPath, snapshot.KeyName(s.IndexLabel)+".tar.zst")
err := os.Chdir(path.Join(s.AppsPath, appName))
if err != nil {
return snapshot.KeyName(s.IndexLabel), fmt.Errorf("change working directory error: %v", err)
}
err = exec.Command(tarBin, "-acf", tmpSnapshotArchivePath, "./").Run()
if err != nil {
return snapshot.KeyName(s.IndexLabel), fmt.Errorf("compression error: %v", err)
}
info, err := os.Stat(tmpSnapshotArchivePath)
if err != nil {
return snapshot.KeyName(s.IndexLabel), fmt.Errorf("temporary file stat error: %v", err)
}
snapshot.Labels = append(snapshot.Labels, fmt.Sprintf("size:%d", info.Size()))
err = s.saveMetadata(snapshot)
if err != nil {
return snapshot.KeyName(s.IndexLabel), fmt.Errorf("saving metadata error: %v", err)
} }
// Clean after myself
@ -118,19 +163,19 @@ func (s *SnapshotProcessor) CreateSnapshot(appName string, labels []string) (str
}()
// Pipe it into the storage
err = s.Driver.Create(snapshot.KeyName(s.IndexLabel), tmpSnapshotArchivePath)
if err != nil {
return snapshot.KeyName(s.IndexLabel), fmt.Errorf("copying snapshot into S3 error: %v", err)
}
return snapshot.KeyName(s.IndexLabel), nil
}
// RestoreSnapshot restores a snapshot into an existing application.
// If you need a new app from an existing snapshot just create it.
// This restores only content on the disk, doesn't create the container.
func (s *SnapshotProcessor) RestoreSnapshot(key string, newAppName string) error {
tmpSnapshotArchivePath := path.Join(s.TmpSnapshotPath, key+".tar.zst")
err := os.MkdirAll(path.Join(s.AppsPath, newAppName), 0755)
if err != nil {
@ -142,21 +187,12 @@ func (s *SnapshotProcessor) RestoreSnapshot(key string, newAppName string) error
return fmt.Errorf("creating destination path error: %v", err)
}
err = s.Driver.Get(key, tmpSnapshotArchivePath)
if err != nil {
return fmt.Errorf("getting the archive from S3 error: %v", err)
}
err = exec.Command(tarBin, "-axf", tmpSnapshotArchivePath).Run()
if err != nil {
return fmt.Errorf("unarchiving error: %v", err)
}
@ -166,6 +202,12 @@ func (s *SnapshotProcessor) RestoreSnapshot(key string, newAppName string) error
return fmt.Errorf("removing the archive error: %v", err)
}
// remove .chowned file to tell the container to setup ownership of the files again
err = os.Remove("./.chowned")
if err != nil && !strings.Contains(err.Error(), "no such file or directory") {
return fmt.Errorf("removing error: %v", err)
}
return nil
}
@ -179,8 +221,12 @@ func (s *SnapshotProcessor) ListAppSnapshots(appName string) ([]Snapshot, error)
}
for _, key := range keys {
if key == metadataPrefix+"/" {
continue
}
if strings.HasPrefix(key, appName+keySplitCharacter) {
snapshot, err := s.metadataForSnapshotKey(key)
if err != nil {
return snapshots, err
}
@ -202,9 +248,13 @@ func (s *SnapshotProcessor) ListAppsSnapshots(appNames []string) ([]Snapshot, er
}
for _, key := range keys {
if key == metadataPrefix+"/" {
continue
}
for _, appName := range appNames {
if strings.HasPrefix(key, appName+keySplitCharacter) {
snapshot, err := s.metadataForSnapshotKey(key)
if err != nil {
return snapshots, err
}
@ -217,9 +267,9 @@ func (s *SnapshotProcessor) ListAppsSnapshots(appNames []string) ([]Snapshot, er
return snapshots, nil
}
// ListAppsSnapshotsByLabel returns list of snapshots with given label
// TODO: this will be ok for now but probably a little slow when users start using it more
func (s *SnapshotProcessor) ListAppsSnapshotsByLabel(labelValue string) ([]Snapshot, error) {
snapshots := []Snapshot{}
keys, err := s.Driver.List("")
@ -228,13 +278,20 @@ func (s *SnapshotProcessor) ListAppsSnapshotsByLabels(desiredLabel string) ([]Sn
}
for _, key := range keys {
if key == metadataPrefix+"/" {
continue
}
snapshot, err := s.metadataForSnapshotKey(key)
if err != nil && strings.Contains(err.Error(), "wrong snapshot key format") {
log.Printf("WARNING: Snapshot storage: invalid key found (%s)", key)
continue
} else if err != nil {
return snapshots, err
}
for _, label := range snapshot.Labels {
if label == fmt.Sprintf("%s:%s", s.IndexLabel, labelValue) {
snapshots = append(snapshots, snapshot)
}
}
@ -243,6 +300,28 @@ func (s *SnapshotProcessor) ListAppsSnapshotsByLabels(desiredLabel string) ([]Sn
return snapshots, nil
}
// GetSnapshot returns a single snapshot's metadata for given key.
func (s *SnapshotProcessor) GetSnapshot(key string) (Snapshot, error) {
snapshot, err := s.metadataForSnapshotKey(key)
if err != nil {
return snapshot, err
}
return snapshot, nil
}
// GetDownloadLink returns a URL for the given snapshot
func (s *SnapshotProcessor) GetDownloadLink(key string) (string, error) {
link, err := s.Driver.GetDownloadLink(key)
if err != nil {
return link, err
}
return link, nil
}
// DeleteSnapshot deletes one snapshot
func (s *SnapshotProcessor) DeleteSnapshot(key string) error {
err := s.Driver.Delete(key)
@ -261,7 +340,7 @@ func (s *SnapshotProcessor) DeleteAppSnapshots(appName string) error {
}
for _, snapshot := range snapshots {
err = s.DeleteSnapshot(snapshot.KeyName(s.IndexLabel))
if err != nil {
return fmt.Errorf("removing snapshots error: %v", err)
}

View File

@ -47,6 +47,7 @@ func TestMain(m *testing.M) {
snapshotProcessor = &SnapshotProcessor{
AppsPath: path.Join(initialPwd, "tmp/apps"),
TmpSnapshotPath: path.Join(initialPwd, "tmp/snapshots"),
IndexLabel: "testlabel",
Driver: drivers.S3Driver{
S3AccessKey: "test",
@ -76,14 +77,18 @@ func TestMain(m *testing.M) {
func TestSnapshot(t *testing.T) {
snapshot := Snapshot{
UUID: "ABCDEF",
AppName: "app_0102",
TimeStamp: 1634510035,
Labels: []string{"userid:1"},
}
assert.Equal(t, "app_0102:ABCDEF:1:1634510035", snapshot.KeyName("userid"))
err := snapshotProcessor.saveMetadata(snapshot)
assert.Nil(t, err)
snapshot2, err := snapshotProcessor.metadataForSnapshotKey("app_0102:ABCDEF:1:1634510035")
assert.Nil(t, err)
assert.Equal(t, "app_0102", snapshot2.AppName)
assert.Equal(t, int64(1634510035), snapshot2.TimeStamp)
@ -153,3 +158,37 @@ func TestCreateRestoreListSnapshot(t *testing.T) {
assert.Equal(t, os.IsNotExist(err), false)
}
func TestGetSnapshot(t *testing.T) {
snapshot, err := snapshotProcessor.GetSnapshot("app_0102:ABCDEF:1:1634510035")
assert.Nil(t, err)
assert.Equal(t, "app_0102", snapshot.AppName)
}
func TestListAppsSnapshotsByLabel(t *testing.T) {
appName := "app_0102"
// Create an app structure
err := os.MkdirAll(path.Join(snapshotProcessor.AppsPath, appName), 0755)
if err != nil {
fmt.Println(err.Error())
os.Exit(1)
}
_, err = exec.Command("bash", "-c", "echo content > "+path.Join(snapshotProcessor.AppsPath, appName)+"/a_file.txt").CombinedOutput()
if err != nil {
fmt.Println(err.Error())
os.Exit(1)
}
// Create a snapshot
snapshotName, err := snapshotProcessor.CreateSnapshot(appName, []string{"app:test2", "almost_no_content", "testlabel:abcde"})
assert.Nil(t, err)
assert.Equal(t, strings.HasPrefix(snapshotName, appName+":"), true)
snapshots, err := snapshotProcessor.ListAppsSnapshotsByLabel("abcde")
assert.Nil(t, err)
assert.True(t, len(snapshots) > 0)
assert.Equal(t, appName, snapshots[0].AppName)
}

View File

@ -5,8 +5,8 @@ import (
"strings"
"time"
"github.com/rosti-cz/node-api/detector"
"github.com/rosti-cz/node-api/jsonmap"
)
// ValidationError is error that holds multiple validation error messages
@ -31,11 +31,14 @@ type Label struct {
// AppState contains info about running application, it's not saved in the database
type AppState struct {
State string `json:"state"`
OOMKilled bool `json:"oom_killed"`
CPUUsage float64 `json:"cpu_usage"` // in percents
MemoryUsage int `json:"memory_usage"` // in MB
DiskUsageBytes int `json:"disk_usage_bytes"`
DiskUsageInodes int `json:"disk_usage_inodes"`
Flags detector.Flags `json:"flags"`
IsPasswordSet bool `json:"is_password_set"`
}
// Apps is list of applications
@ -53,6 +56,8 @@ type App struct {
// Datetime of deletion
DeletedAt *time.Time `sql:"index" json:"deleted_at"`
// ####################################################
// This part is used in app template
// Name of the application
// Example: test_1234
Name string `json:"name" gorm:"unique,index,not_null"`
@ -64,6 +69,9 @@ type App struct {
// Unique: true
// Example: 10002
HTTPPort int `json:"http_port"`
// Port of the application inside the container. Default is 0, which means the image default.
// But it has to be between 1024 and 65536 with the exception of 8000.
AppPort int `json:"app_port"`
// Runtime image
Image string `json:"image"`
// Number of CPU ticks assigned, 100 means one CPU, 200 are two
@ -72,9 +80,12 @@ type App struct {
Memory int `json:"memory"` // Limit in MB
// Custom labels
Labels []Label `json:"labels" gorm:"foreignkey:AppID"` // username:cx or user_id:1
// ####################################################
// Current status of the application (underlying container)
State string `json:"state"`
// True if the current container has been killed by the OOM killer
OOMKilled bool `json:"oom_killed"`
// CPU usage in percents
CPUUsage float64 `json:"cpu_usage"` // in percents
@ -83,6 +94,44 @@ type App struct {
DiskUsageBytes int `json:"disk_usage_bytes"`
// Disk usage in inodes
DiskUsageInodes int `json:"disk_usage_inodes"`
// Flags from detector of problems in the container
Flags string `json:"flags"` // flags are separated by comma
IsPasswordSet bool `json:"is_password_set"` // True if the password is set in the container (file with the password exists)
// this is gathered in docker package and has to be assembled externally
Techs AppTechs `json:"techs,omitempty" gorm:"-"` // list of available technologies in the image
PrimaryTech AppTech `json:"primary_tech,omitempty" gorm:"-"` // Technology that was selected as primary in the environment
// This is not stored in the database but used in the create request when the app is supposed to be created from an existing snapshot
Snapshot string `json:"snapshot" gorm:"-"`
EnvRaw jsonmap.JSONMap `json:"env" sql:"type:json"`
// Fields to setup during creating of the app, this is not stored in the database
Setup struct {
SSHKeys string `json:"ssh_keys"`
Tech string `json:"tech"`
TechVersion string `json:"tech_version"`
Password string `json:"password"`
ArchiveURL string `json:"archive_url"` // Archive with content of /srv
} `json:"setup,omitempty" gorm:"-"`
}
// Return env as map[string]string
func (a *App) GetEnv() map[string]string {
env := make(map[string]string)
for key, value := range a.EnvRaw {
env[key] = value.(string)
}
return env
}
// SetEnv sets env from map[string]string
func (a *App) SetEnv(env map[string]string) {
a.EnvRaw = make(jsonmap.JSONMap)
for key, value := range env {
a.EnvRaw[key] = value
}
} }
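The GetEnv/SetEnv pair above round-trips the environment between the JSON-backed `EnvRaw` map and a plain `map[string]string`. A minimal standalone sketch of that round trip, with a local `JSONMap` type standing in for `jsonmap.JSONMap` (and a type-assertion guard added for safety):

```go
package main

import "fmt"

// JSONMap stands in for jsonmap.JSONMap used by the real App struct.
type JSONMap map[string]interface{}

type App struct {
	EnvRaw JSONMap
}

// GetEnv converts the raw JSON map into map[string]string.
func (a *App) GetEnv() map[string]string {
	env := make(map[string]string)
	for key, value := range a.EnvRaw {
		if s, ok := value.(string); ok { // guard the type assertion
			env[key] = s
		}
	}
	return env
}

// SetEnv stores a plain string map into the raw JSON map.
func (a *App) SetEnv(env map[string]string) {
	a.EnvRaw = make(JSONMap)
	for key, value := range env {
		a.EnvRaw[key] = value
	}
}

func main() {
	app := App{}
	app.SetEnv(map[string]string{"TOKEN": "abc"})
	fmt.Println(app.GetEnv()["TOKEN"])
}
```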
// Validate does basic checks of the struct values
@@ -102,6 +151,10 @@ func (a *App) Validate() []string {
		errors = append(errors, "HTTP port has to be between 1 and 65536")
	}
+	if a.AppPort != 0 && ((a.AppPort < 1024 || a.AppPort > 65536) || a.AppPort == 8000) {
+		errors = append(errors, "App port has to be between 1024 and 65536 with the exception of 8000")
+	}
	if a.Image == "" {
		errors = append(errors, "image cannot be empty")
	}
@@ -116,3 +169,12 @@ func (a *App) Validate() []string {
	return errors
}
+
+// AppTechs is a list of technologies available in the app
+type AppTechs []AppTech
+
+// AppTech holds info about one technology in the app
+type AppTech struct {
+	Name string `json:"name"`
+	Version string `json:"version"`
+}
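The AppPort rule added in the Validate hunk can be read as a small predicate. A standalone sketch (bounds and the reserved port 8000 are taken from the validation message; zero means "unset"):

```go
package main

import "fmt"

// validAppPort reports whether port is usable as the app port:
// 0 means "unset", otherwise it must fall into 1024-65536 and
// must not be 8000, which is reserved.
func validAppPort(port int) bool {
	if port == 0 {
		return true
	}
	if port < 1024 || port > 65536 {
		return false
	}
	return port != 8000
}

func main() {
	fmt.Println(validAppPort(3000), validAppPort(8000), validAppPort(80))
	// prints: true false false
}
```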

View File

@@ -3,7 +3,7 @@ package main
import (
	"strings"

-	"github.com/labstack/echo"
+	"github.com/labstack/echo/v4"
)

var skipPaths []string = []string{"/metrics"}

View File

@@ -8,6 +8,7 @@ import (
// Config keeps info about configuration of this daemon
type Config struct {
+	DockerSocket string `envconfig:"DOCKER_SOCKET" default:"unix:///var/run/docker.sock"`
	Token string `envconfig:"TOKEN" required:"true"`
	AppsPath string `envconfig:"APPS_PATH" default:"/srv"` // Where applications are located
	AppsBindIPHTTP string `envconfig:"APPS_BIND_IP_HTTP" default:"0.0.0.0"` // On what IP apps' HTTP port gonna be bound
@@ -21,6 +22,9 @@ type Config struct {
	SnapshotsS3Endpoint string `envconfig:"SNAPSHOTS_S3_ENDPOINT" required:"false"`
	SnapshotsS3SSL bool `envconfig:"SNAPSHOTS_S3_SSL" required:"false" default:"true"`
	SnapshotsS3Bucket string `envconfig:"SNAPSHOTS_S3_BUCKET" required:"false" default:"snapshots"`
+	SnapshotsIndexLabel string `envconfig:"SNAPSHOTS_INDEX_LABEL" required:"false" default:"owner_id"` // Label that becomes part of the object name and is used as an index for quick listing
+	SentryDSN string `envconfig:"SENTRY_DSN" required:"false"`
+	SentryENV string `envconfig:"SENTRY_ENV" default:"development"`
}
// GetConfig returns configuration created based on environment variables

View File

@@ -1,9 +1,10 @@
-package docker
+package containers

import (
	"context"
	"encoding/json"
	"errors"
+	"fmt"
	"io"
	"io/ioutil"
	"log"
@@ -17,6 +18,8 @@ import (
	"github.com/docker/docker/api/types/network"
	dockerClient "github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/rosti-cz/node-api/detector"
)

// Stats delay in seconds
@@ -25,29 +28,22 @@ const statsDelay = 1
// Docker timeout
const dockerTimeout = 10

-// DOCKER_SOCK tells where to connect to docker, it will be always local sock
-const dockerSock = "/var/run/docker.sock"
-const podmanSock = "/run/podman/podman.sock"
-
-// DOCKER_API_VERSION set API version of Docker, 1.40 belongs to Docker 19.03.11
-const dockerAPIVersion = "1.38"
+// DOCKER_API_VERSION set API version of Docker, 1.41 is needed for the platform struct
+const dockerAPIVersion = "1.41"

// Driver keeps everything for connection to Docker
type Driver struct {
+	DockerSock string // Docker socket
	BindIPHTTP string // IP to which containers are bound
	BindIPSSH string // IP to which containers are bound
}

func (d *Driver) getClient() (*dockerClient.Client, error) {
-	var connectTo string
-	if _, err := os.Stat(podmanSock); !os.IsNotExist(err) {
-		connectTo = podmanSock
-	} else {
-		connectTo = dockerSock
-	}
-
-	cli, err := dockerClient.NewClient("unix://"+connectTo, dockerAPIVersion, nil, nil)
-	return cli, err
+	cli, err := dockerClient.NewClient(d.DockerSock, dockerAPIVersion, nil, nil)
+	if err != nil {
+		return cli, fmt.Errorf("get docker client error: %v", err)
+	}
+	return cli, nil
}
// ConnectionStatus checks connection to the Docker daemon
@@ -83,8 +79,10 @@ func (d *Driver) nameToID(name string) (string, error) {
}

// Status return current status of container with given name
-func (d *Driver) Status(name string) (string, error) {
-	status := "unknown"
+func (d *Driver) Status(name string) (ContainerStatus, error) {
+	status := ContainerStatus{
+		Status: "unknown",
+	}

	cli, err := d.getClient()
	if err != nil {
@@ -94,7 +92,8 @@ func (d *Driver) Status(name string) (string, error) {
	containerID, err := d.nameToID(name)
	if err != nil && err.Error() == "no container found" {
-		return "no-container", err
+		status.Status = "no-container"
+		return status, err
	}
	if err != nil {
		return status, err
@@ -105,7 +104,10 @@ func (d *Driver) Status(name string) (string, error) {
		return status, err
	}

-	return info.State.Status, nil
+	status.Status = info.State.Status
+	status.OOMKilled = info.State.OOMKilled
+
+	return status, nil
}
@@ -200,13 +202,13 @@ func (d *Driver) Remove(name string) error {
		return err
	}

-	timeout := time.Duration(dockerTimeout * time.Second)
-	err = cli.ContainerStop(context.TODO(), containerID, &timeout)
+	timeout := dockerTimeout
+	err = cli.ContainerStop(context.TODO(), containerID, container.StopOptions{Timeout: &timeout})
	if err != nil {
		return err
	}

-	err = cli.ContainerRemove(context.TODO(), containerID, types.ContainerRemoveOptions{})
+	err = cli.ContainerRemove(context.TODO(), containerID, container.RemoveOptions{})
	return err
}

@@ -244,8 +246,8 @@ func (d *Driver) Stop(name string) error {
		return err
	}

-	timeout := time.Duration(dockerTimeout * time.Second)
-	err = cli.ContainerStop(context.TODO(), containerID, &timeout)
+	timeout := dockerTimeout
+	err = cli.ContainerStop(context.TODO(), containerID, container.StopOptions{Timeout: &timeout})
	return err
}
@@ -305,7 +307,7 @@ func (d *Driver) pullImage(image string) error {
// cmd - string slice of command and its arguments
// volumePath - host's directory to mount into the container
// returns container ID
-func (d *Driver) Create(name string, image string, volumePath string, HTTPPort int, SSHPort int, CPU int, memory int, cmd []string) (string, error) {
+func (d *Driver) Create(name string, image string, volumePath string, HTTPPort int, SSHPort int, CPU int, memory int, cmd []string, env map[string]string) (string, error) {
	log.Println("Creating container " + name)
	cli, err := d.getClient()
	if err != nil {
@@ -336,11 +338,21 @@ func (d *Driver) Create(name string, image string, volumePath string, HTTPPort i
		}
	}

+	OOMKillDisable := false
+	if memory < 1500 {
+		OOMKillDisable = true
+	}
+
+	envList := []string{}
+	for key, value := range env {
+		envList = append(envList, key+"="+value)
+	}
+
	createdContainer, err := cli.ContainerCreate(
		context.Background(),
		&container.Config{
			Hostname: name,
-			Env: []string{},
+			Env: envList,
			Image: image,
			Cmd: cmd,
			ExposedPorts: nat.PortSet{
@@ -354,6 +366,7 @@ func (d *Driver) Create(name string, image string, volumePath string, HTTPPort i
			CPUQuota: int64(CPU) * 1000,
			Memory: int64(memory*110/100) * 1024 * 1024, // Allow 10 % more memory so we have space for MemoryReservation
			MemoryReservation: int64(memory) * 1024 * 1024, // This should provide softer way how to limit the memory of our containers
+			OomKillDisable: &OOMKillDisable,
		},
		PortBindings: portBindings,
		AutoRemove: false,
@@ -366,6 +379,10 @@ func (d *Driver) Create(name string, image string, volumePath string, HTTPPort i
			},
		},
		&network.NetworkingConfig{},
+		&specs.Platform{
+			Architecture: "amd64",
+			OS: "linux",
+		},
		name,
	)
	if err != nil {
	if err != nil {
@@ -418,11 +435,9 @@ func (d *Driver) Exec(name string, cmd []string, stdin string, env []string, att
		return &[]byte{}, err
	}

-	respAttach, err := cli.ContainerExecAttach(ctx, resp.ID, types.ExecConfig{
-		AttachStdin: stdinEnabled,
-		AttachStdout: attachStdout,
-		AttachStderr: false,
-		Tty: attachStdout,
+	respAttach, err := cli.ContainerExecAttach(ctx, resp.ID, types.ExecStartCheck{
+		Detach: false,
+		Tty: true,
	})
	if err != nil {
		return &[]byte{}, err
@@ -448,3 +463,44 @@ func (d *Driver) Exec(name string, cmd []string, stdin string, env []string, att
	return &stdouterr, err
}
// GetProcesses return list of processes running under this container
func (d *Driver) GetProcesses(name string) ([]string, error) {
processes := []string{}
ctx := context.Background()
cli, err := d.getClient()
if err != nil {
return processes, err
}
defer cli.Close()
processList, err := cli.ContainerTop(ctx, name, []string{"-eo", "pid,args"})
if err != nil {
return processes, fmt.Errorf("docker container top call error: %v", err)
}
for _, process := range processList.Processes {
if len(process) > 0 {
// This removes PID from the list. PID has to be printed otherwise docker daemon can't handle it.
processes = append(processes, strings.Join(strings.Fields(process[0])[1:], " "))
}
}
return processes, nil
}
// GetFlags returns list of flags with problems found in the container, mainly used to detect miners or viruses
func (d *Driver) GetFlags(name string) (detector.Flags, error) {
processes, err := d.GetProcesses(name)
if err != nil {
return detector.Flags{}, err
}
flags, err := detector.Check(processes)
if err != nil {
return flags, err
}
return flags, nil
}
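GetProcesses above drops the PID column that the docker daemon requires ContainerTop to print. The same field-splitting step in isolation, as a small sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// stripPID removes the leading PID column from one "pid,args" row,
// mirroring what GetProcesses does with ContainerTop output.
func stripPID(row string) string {
	fields := strings.Fields(row)
	if len(fields) < 2 {
		return ""
	}
	return strings.Join(fields[1:], " ")
}

func main() {
	fmt.Println(stripPID("  1234 sleep 3600"))
	// prints: sleep 3600
}
```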

containers/docker_test.go Normal file

@@ -0,0 +1,44 @@
package containers
import (
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func getTestDockerSock() string {
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
return "unix:///run/user/1000/podman/podman.sock"
}
return dockerSocket
}
func TestGetProcesses(t *testing.T) {
driver := Driver{
DockerSock: getTestDockerSock(),
BindIPHTTP: "127.0.0.1",
BindIPSSH: "127.0.0.1",
}
driver.Remove("test")
env := make(map[string]string)
env["TEST"] = "test"
_, err := driver.Create("test", "docker.io/library/busybox", "/tmp", 8990, 8922, 1, 128, []string{"sleep", "3600"}, env)
assert.Nil(t, err)
err = driver.Start("test")
assert.Nil(t, err)
time.Sleep(5 * time.Second)
processes, err := driver.GetProcesses("test")
assert.Nil(t, err)
assert.Contains(t, processes, "sleep 3600")
driver.Remove("test")
}

containers/stats.go Normal file

@@ -0,0 +1,19 @@
package containers
// ContainerStats contains fields returned in docker stats function stream
type ContainerStats struct {
Pids struct {
Current int `json:"current"`
} `json:"pids_stats"`
CPU struct {
Usage struct {
Total int64 `json:"total_usage"`
} `json:"cpu_usage"`
} `json:"cpu_stats"`
Memory struct {
Usage int `json:"usage"`
MaxUsage int `json:"max_usage"`
Limit int `json:"limit"`
} `json:"memory_stats"`
ID string `json:"id"`
}

View File

@@ -1,11 +1,13 @@
-package docker
+package containers

import (
	"bytes"
+	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
+	"regexp"
	"strconv"
	"strings"
	"time"
@@ -62,7 +64,7 @@ func du(path string) (int, int, error) {
	return space, inodes, nil
}

-// Removes content of given directory
+// Removes content of given directory and then the directory itself
func removeDirectory(dir string) error {
	d, err := os.Open(dir)
	if err != nil {
@@ -79,6 +81,12 @@ func removeDirectory(dir string) error {
			return err
		}
	}

+	err = os.Remove(filepath.Join(dir))
+	if err != nil {
+		return err
+	}
+
	return nil
}
@@ -129,3 +137,53 @@ func CPUMemoryStats(applist *[]apps.App, sample int) (*[]apps.App, error) {
	return &updatedApps, nil
}
func getTechAndVersion(symlink string) (*TechInfo, error) {
link, err := os.Readlink(symlink)
if os.IsNotExist(err) {
return &TechInfo{
Tech: "default",
Version: "",
}, nil
}
if err != nil {
return nil, fmt.Errorf("error reading symlink: %w", err)
}
absLink, err := filepath.Abs(link)
if err != nil {
return nil, fmt.Errorf("error getting absolute path: %w", err)
}
absLink = strings.TrimSuffix(absLink, "/bin")
dirName := filepath.Base(absLink)
parts := strings.Split(dirName, "-")
fmt.Println("DEBUG", symlink)
fmt.Println("DEBUG", absLink)
fmt.Println("DEBUG", dirName)
fmt.Println("DEBUG", parts)
if len(parts) < 2 {
return &TechInfo{
Tech: "default",
Version: "",
}, nil
}
re := regexp.MustCompile(`\d+\.\d+\.\d+`)
version := re.FindString(parts[1])
if version == "" {
// In case the version couldn't be determined we return "unknown"; returning an
// error here used to crash admin when the user broke the tech symlink
// manually.
log.Println("failed to extract version from symlink")
return &TechInfo{
Tech: "unknown",
Version: "",
}, nil
}
return &TechInfo{
Tech: parts[0],
Version: version,
}, nil
}
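getTechAndVersion above derives the tech and version from the symlink target's directory name (e.g. `python-3.10.4`). A simplified standalone sketch of just the name-parsing step (the real function additionally resolves the symlink and handles filesystem errors):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// parseTechDir splits a directory name like "python-3.10.4" into
// tech and version, the same shape getTechAndVersion produces.
func parseTechDir(dirName string) (tech, version string) {
	parts := strings.SplitN(dirName, "-", 2)
	if len(parts) < 2 {
		return "default", ""
	}
	re := regexp.MustCompile(`\d+\.\d+\.\d+`)
	version = re.FindString(parts[1])
	if version == "" {
		return "unknown", ""
	}
	return parts[0], version
}

func main() {
	tech, version := parseTechDir("python-3.10.4")
	fmt.Println(tech, version)
	// prints: python 3.10.4
}
```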

containers/types.go Normal file

@@ -0,0 +1,640 @@
package containers
import (
"bytes"
"errors"
"fmt"
"log"
"os"
"path"
"strconv"
"strings"
"github.com/rosti-cz/node-api/apps"
"github.com/rosti-cz/node-api/detector"
)
// username under which all apps in the containers run
const appUsername = "app"
const owner = "app:app"
const passwordFile = "/srv/.rosti"
const passwordFileNoPath = ".rosti"
const deployKeyType = "ed25519"
const deployKeyPrefix = "rosti_deploy"
// Structure containing info about technology and its version
type TechInfo struct {
Tech string `json:"tech"`
Version string `json:"version"`
}
// Process contains info about background application usually running in supervisor
type Process struct {
Name string `json:"name"`
State string `json:"state"`
}
// Current status of the container
type ContainerStatus struct {
Status string `json:"status"`
OOMKilled bool `json:"oom_killed"`
}
// Container wraps the App struct from the apps package
type Container struct {
App *apps.App `json:"app"`
DockerSock string `json:"-"`
BindIPHTTP string `json:"-"`
BindIPSSH string `json:"-"`
AppsPath string `json:"-"`
}
func (c *Container) getDriver() *Driver {
driver := &Driver{
DockerSock: c.DockerSock,
BindIPHTTP: c.BindIPHTTP,
BindIPSSH: c.BindIPSSH,
}
return driver
}
// VolumeHostPath returns the host path of the one volume mounted into the container
func (c *Container) VolumeHostPath() string {
return path.Join(c.AppsPath, c.App.Name)
}
// GetRawResourceStats returns RAW CPU and memory usage directly from Docker API
func (c *Container) GetRawResourceStats() (int64, int, error) {
driver := c.getDriver()
cpu, memory, err := driver.RawStats(c.App.Name)
return cpu, memory, err
}
// GetState returns app state object with populated state fields
func (c *Container) GetState() (*apps.AppState, error) {
status, err := c.Status()
if err != nil {
return nil, err
}
// TODO: this implementation takes more than one hour for 470 containers. It needs to be implemented differently.
// cpu, memory, err := c.ResourceUsage()
// if err != nil {
// return nil, err
// }
bytes, inodes, err := c.DiskUsage()
if err != nil {
return nil, err
}
processes, err := c.GetSystemProcesses()
if err != nil {
return nil, err
}
flags, err := detector.Check(processes)
if err != nil {
return nil, err
}
isPasswordSet, err := c.IsPasswordSet()
if err != nil {
return nil, err
}
state := apps.AppState{
State: status.Status,
OOMKilled: status.OOMKilled,
// CPUUsage: cpu,
// MemoryUsage: memory,
CPUUsage: -1.0,
MemoryUsage: -1.0,
DiskUsageBytes: bytes,
DiskUsageInodes: inodes,
Flags: flags,
IsPasswordSet: isPasswordSet,
}
return &state, nil
}
// Status returns state of the container
// Possible values: running, exited (stopped), no-container, unknown
func (c *Container) Status() (ContainerStatus, error) {
status := ContainerStatus{
Status: "unknown",
}
driver := c.getDriver()
containerStatus, err := driver.Status(c.App.Name)
if err != nil && err.Error() == "no container found" {
return ContainerStatus{Status: "no-container"}, nil
}
if err != nil {
return status, err
}
return containerStatus, nil
}
// DiskUsage returns number of bytes and inodes used by the container in its mounted volume
func (c *Container) DiskUsage() (int, int, error) {
return du(c.VolumeHostPath())
}
// IsPasswordSet returns true if the password is set for the container (file with the password exists)
func (c *Container) IsPasswordSet() (bool, error) {
_, err := os.Stat(path.Join(c.VolumeHostPath(), passwordFileNoPath))
if err != nil && os.IsNotExist(err) {
return false, nil
}
if err != nil {
return false, err
}
return true, nil
}
// ResourceUsage returns amount of memory in B and CPU in % that the app occupies
func (c *Container) ResourceUsage() (float64, int, error) {
driver := c.getDriver()
cpu, memory, err := driver.Stats(c.App.Name)
if err != nil {
return 0.0, 0, err
}
return cpu, memory, nil
}
// Create creates the container
func (c *Container) Create() error {
driver := c.getDriver()
_, err := driver.Create(
c.App.Name,
c.App.Image,
c.VolumeHostPath(),
c.App.HTTPPort,
c.App.SSHPort,
c.App.CPU,
c.App.Memory,
[]string{},
c.App.GetEnv(),
)
return err
}
// Start starts the container
func (c *Container) Start() error {
driver := c.getDriver()
return driver.Start(c.App.Name)
}
// Stop stops the container
func (c *Container) Stop() error {
driver := c.getDriver()
return driver.Stop(c.App.Name)
}
// Restart restarts the container
func (c *Container) Restart() error {
driver := c.getDriver()
err := driver.Stop(c.App.Name)
if err != nil {
return err
}
return driver.Start(c.App.Name)
}
// Destroy removes the container but keeps the data so it can be created again
func (c *Container) Destroy() error {
driver := c.getDriver()
return driver.Remove(c.App.Name)
}
// Delete removes both data and the container
func (c *Container) Delete() error {
status, err := c.Status()
if err != nil {
return err
}
// It's questionable to have this here. The problem is this method
// does two things: deleting the container and deleting the data.
// When the container doesn't exist we actually don't care
// and we can continue to remove the data.
if status.Status != "no-container" {
err = c.Destroy()
if err != nil {
return err
}
}
volumePath := path.Join(c.AppsPath, c.App.Name)
err = removeDirectory(volumePath)
if err != nil {
log.Println(err)
}
return nil
}
// SetPassword configures password for system user app in the container
func (c *Container) SetPassword(password string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"chpasswd"}, appUsername+":"+password, []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"tee", passwordFile}, password, []string{}, false)
if err != nil {
return err
}
return err
}
// ClearPassword removes password for system user app in the container
func (c *Container) ClearPassword() error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"passwd", "-d", "app"}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"rm", "-f", passwordFile}, "", []string{}, false)
if err != nil {
return err
}
return err
}
// GenerateDeploySSHKeys generates SSH keys and copies them into authorized_keys.
// Returns true if the key was generated in this call and error if there is any.
// The container has to run for this to work.
func (c *Container) GenerateDeploySSHKeys() (bool, error) {
driver := c.getDriver()
privateKey, pubKey, _ := c.GetDeploySSHKeys()
if privateKey != "" || pubKey != "" {
return false, nil
}
_, err := driver.Exec(c.App.Name, []string{"mkdir", "-p", "/srv/.ssh"}, "", []string{}, true)
if err != nil {
return false, err
}
_, err = driver.Exec(c.App.Name, []string{"ssh-keygen", "-t", deployKeyType, "-f", "/srv/.ssh/" + deployKeyPrefix + "_id_" + deployKeyType, "-P", ""}, "", []string{}, true)
if err != nil {
return false, err
}
_, err = driver.Exec(c.App.Name, []string{"chown", "app:app", "-R", "/srv/.ssh"}, "", []string{}, true)
if err != nil {
return false, err
}
return true, nil
}
// GetDeploySSHKeys reads the previously generated deploy SSH keys.
// Returns private key, public key and error.
// The container has to run for this to work.
func (c *Container) GetDeploySSHKeys() (string, string, error) {
driver := c.getDriver()
privateKey, err := driver.Exec(c.App.Name, []string{"cat", "/srv/.ssh/" + deployKeyPrefix + "_id_" + deployKeyType}, "", []string{}, true)
if err != nil {
return "", "", err
}
pubKey, err := driver.Exec(c.App.Name, []string{"cat", "/srv/.ssh/" + deployKeyPrefix + "_id_" + deployKeyType + ".pub"}, "", []string{}, true)
if err != nil {
return "", "", err
}
if privateKey != nil && pubKey != nil && !bytes.Contains(*privateKey, []byte("No such file")) && !bytes.Contains(*pubKey, []byte("No such file")) {
return string(*privateKey), string(*pubKey), nil
}
return "", "", nil
}
// GetHostKey returns the host key without the hostname.
// The container has to run for this to work.
func (c *Container) GetHostKey() (string, error) {
driver := c.getDriver()
hostKeyRaw, err := driver.Exec(c.App.Name, []string{"ssh-keyscan", "localhost"}, "", []string{}, true)
if err != nil {
return "", err
}
// Loop over lines and search for localhost ssh
line := ""
if hostKeyRaw != nil {
for _, line = range strings.Split(string(*hostKeyRaw), "\n") {
if strings.HasPrefix(line, "localhost ssh") {
line = strings.TrimSpace(line)
break
}
}
}
if line == "" {
return "", errors.New("key not found")
}
parts := strings.SplitN(line, " ", 2)
if len(parts) > 1 {
return parts[1], nil
}
return "", errors.New("key not found")
}
// Append text to a file in the container
func (c *Container) AppendFile(filename string, text string, mode string) error {
driver := c.getDriver()
directory := path.Dir(filename)
_, err := driver.Exec(c.App.Name, []string{"mkdir", "-p", directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"tee", "-a", filename}, text, []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chmod", mode, filename}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", owner, directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", owner, filename}, "", []string{}, false)
if err != nil {
return err
}
return nil
}
// SetAppPort changes application port in the container
func (c *Container) SetAppPort(port int) error {
driver := c.getDriver()
_, err := driver.Exec(
c.App.Name,
[]string{
"sed",
"-i",
"s+proxy_pass[ ]*http://127.0.0.1:8080/;+proxy_pass http://127.0.0.1:" + strconv.Itoa(port) + "/;+g",
"/srv/conf/nginx.d/app.conf",
},
"",
[]string{},
false,
)
if err != nil {
return err
}
return err
}
// SetFileContent uploads text into a file inside the container. It's great for uploading SSH keys.
// The method creates the directory where the file is located and sets mode of the final file
func (c *Container) SetFileContent(filename string, text string, mode string) error {
driver := c.getDriver()
directory := path.Dir(filename)
_, err := driver.Exec(c.App.Name, []string{"mkdir", "-p", directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"tee", filename}, text, []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", owner, directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", owner, filename}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chmod", mode, filename}, "", []string{}, false)
return err
}
// SetTechnology prepares container for given technology (Python, PHP, Node.js, ...)
// Where tech can be php, python or node and latest available version is used.
// If version is empty string default version will be used.
func (c *Container) SetTechnology(tech string, version string) error {
driver := c.getDriver()
var err error
// TODO: script injection here?
var output *[]byte
if version == "" {
output, err = driver.Exec(c.App.Name, []string{"su", "app", "-c", "rosti " + tech}, "", []string{}, false)
} else {
output, err = driver.Exec(c.App.Name, []string{"su", "app", "-c", "rosti " + tech + " " + version}, "", []string{}, false)
}
log.Printf("DEBUG: enable tech %s/%s for %s output: %s", tech, version, c.App.Name, string(*output))
return err
}
// GetProcessList returns list of processes managed by supervisor.
func (c *Container) GetProcessList() ([]Process, error) {
driver := c.getDriver()
processes := []Process{}
stdouterr, err := driver.Exec(c.App.Name, []string{"supervisorctl", "status"}, "", []string{}, true)
if err != nil {
return processes, nil
}
trimmed := strings.TrimSpace(string(*stdouterr))
for _, row := range strings.Split(trimmed, "\n") {
fields := strings.Fields(row)
if len(fields) > 2 {
processes = append(processes, Process{
Name: fields[0],
State: fields[1],
})
}
}
return processes, nil
}
// Restarts supervisord process
func (c *Container) RestartProcess(name string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"supervisorctl", "restart", name}, "", []string{}, false)
return err
}
// Starts supervisord process
func (c *Container) StartProcess(name string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"supervisorctl", "start", name}, "", []string{}, false)
return err
}
// Stops supervisord process
func (c *Container) StopProcess(name string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"supervisorctl", "stop", name}, "", []string{}, false)
return err
}
// Reread supervisord config
func (c *Container) ReloadSupervisor() error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"supervisorctl", "reread"}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"supervisorctl", "update"}, "", []string{}, false)
if err != nil {
return err
}
return err
}
// GetSystemProcesses return list of running system processes
func (c *Container) GetSystemProcesses() ([]string, error) {
driver := c.getDriver()
processes, err := driver.GetProcesses(c.App.Name)
return processes, err
}
// GetPrimaryTech returns primary tech configured in the container.
func (c *Container) GetPrimaryTech() (apps.AppTech, error) {
tech := apps.AppTech{}
driver := c.getDriver()
stdouterr, err := driver.Exec(c.App.Name, []string{"readlink", "/srv/bin/primary_tech"}, "", []string{}, true)
if err != nil {
// in case there is an error just return empty response
return tech, nil
}
if len(string(*stdouterr)) > 0 {
parts := strings.Split(string(*stdouterr), "/")
if len(parts) == 5 {
rawTech := parts[3]
if rawTech == "default" {
return apps.AppTech{
Name: "default",
Version: "",
}, nil
}
techParts := strings.Split(rawTech, "-")
if len(techParts) != 2 {
return tech, errors.New("wrong number of tech parts (" + rawTech + ")")
}
return apps.AppTech{
Name: techParts[0],
Version: techParts[1],
}, nil
}
}
// Probably single technology image in case the output is empty
return tech, nil
}
// GetTechs returns all techs available in the container
func (c *Container) GetTechs() (apps.AppTechs, error) {
techs := apps.AppTechs{}
driver := c.getDriver()
stdouterr, err := driver.Exec(c.App.Name, []string{"ls", "/opt"}, "", []string{}, true)
if err != nil {
// in case there is an error just return empty response
return techs, nil
}
// Check if /opt/techs exists
if !strings.Contains(string(*stdouterr), "techs") {
return techs, nil
}
stdouterr, err = driver.Exec(c.App.Name, []string{"ls", "/opt/techs"}, "", []string{}, true)
if err != nil {
// in case there is an error just return empty response
return techs, nil
}
// If the directory doesn't exist it's single technology image
if strings.Contains(string(*stdouterr), "No such file or directory") {
return techs, nil
}
techsRaw := strings.Fields(string(*stdouterr))
for _, techRaw := range techsRaw {
techParts := strings.Split(techRaw, "-")
if len(techParts) == 2 {
techs = append(techs, apps.AppTech{
Name: techParts[0],
Version: techParts[1],
})
} else {
return techs, fmt.Errorf("one of the tech has wrong number of parts (%s)", techRaw)
}
}
return techs, nil
}
// Returns info about active technology
func (c *Container) GetActiveTech() (*TechInfo, error) {
info, err := getTechAndVersion(path.Join(c.VolumeHostPath(), "bin", "primary_tech"))
if err != nil {
return info, err
}
return info, nil
}
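Several helpers in types.go parse command output line by line; GetProcessList, for instance, splits `supervisorctl status` rows into a process name and state. A standalone sketch of that parsing (the sample output line is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// Process mirrors the struct used by GetProcessList.
type Process struct {
	Name  string
	State string
}

// parseSupervisorStatus parses `supervisorctl status` output the same
// way GetProcessList does: first field is the name, second the state.
func parseSupervisorStatus(out string) []Process {
	processes := []Process{}
	for _, row := range strings.Split(strings.TrimSpace(out), "\n") {
		fields := strings.Fields(row)
		if len(fields) > 2 {
			processes = append(processes, Process{Name: fields[0], State: fields[1]})
		}
	}
	return processes
}

func main() {
	out := "app   RUNNING   pid 42, uptime 1:02:03\n"
	ps := parseSupervisorStatus(out)
	fmt.Println(ps[0].Name, ps[0].State)
	// prints: app RUNNING
}
```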

containers/types_test.go Normal file

@@ -0,0 +1,39 @@
package containers
import (
"os"
"path/filepath"
"testing"
)
func TestGetTechAndVersion(t *testing.T) {
// Create a temporary directory for testing
tempDir := t.TempDir()
// Create a fake language directory with version in the temporary directory
fakeLangDir := filepath.Join(tempDir, "techs", "python-3.10.4", "bin")
err := os.MkdirAll(fakeLangDir, 0755)
if err != nil {
t.Fatalf("Failed to create fake language directory: %v", err)
}
// Create a symlink for testing
symlink := filepath.Join(tempDir, "primary_tech")
err = os.Symlink(fakeLangDir, symlink)
if err != nil {
t.Fatalf("Failed to create symlink: %v", err)
}
// Test getTechAndVersion function
info, err := getTechAndVersion(symlink)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
expectedLanguage := "python"
expectedVersion := "3.10.4"
if info.Tech != expectedLanguage || info.Version != expectedVersion {
t.Errorf("Expected language: %s, version: %s, but got language: %s, version: %s",
expectedLanguage, expectedVersion, info.Tech, info.Version)
}
}


@ -3,4 +3,5 @@
nats pub --count 1 admin.apps.node-x.requests -w '{"type": "list"}'
nats pub --count 1 admin.apps.node-x.requests -w '{"type": "get", "app_name": "test_1234"}'
nats pub --count 1 admin.apps.node-x.requests -w '{"type": "create", "app_name": "natstest_0004", "payload": {"name": "natstest_0004", "image": "docker.io/rosti/runtime:2020.09-1", "cpu": 50, "memory": 256, "ssh_port": 20004, "http_port": 30004}}'
nats pub --count 1 admin.apps.node-local.requests -w '{"type": "create", "name": "testrunnats_5372", "payload": "{\"name\": \"testrunnats_5372\", \"image\": \"docker.io/rosti/runtime:2022.01-1\", \"cpu\": 50, \"memory\": 512, \"ssh_port\": 25372, \"http_port\": 35372}"}'
nats pub --count 1 admin.apps.node-local.requests -w '{"type": "create", "name": "testrunnats_5372", "payload": "{\"name\": \"testrunnats_5372\", \"image\": \"harbor.hq.rosti.cz/public/runtime:2022.01-1\", \"cpu\": 50, \"memory\": 512, \"ssh_port\": 25372, \"http_port\": 35372}"}'

40
detector/main.go Normal file

@ -0,0 +1,40 @@
package detector
import (
"regexp"
"strings"
)
// Flags is a list of strings describing problems found among the processes
type Flags []string
func (f *Flags) String() string {
return strings.Join(*f, ",")
}
// Check goes over the patterns and tries to flag the given list of processes.
func Check(processes []string) (Flags, error) {
flags := Flags{}
tmpFlags := make(map[string]bool)
for _, process := range processes {
for flag, patternSet := range patterns {
for _, pattern := range patternSet {
matched, err := regexp.MatchString(".*"+pattern+".*", process)
if err != nil {
return flags, err
}
if matched {
tmpFlags[flag] = true
break
}
}
}
}
for flag := range tmpFlags {
flags = append(flags, flag)
}
return flags, nil
}

35
detector/main_test.go Normal file

@ -0,0 +1,35 @@
package detector
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestCheck(t *testing.T) {
flags, err := Check([]string{
"sleep",
"apache2",
"miner",
"verus-solve",
})
assert.Nil(t, err)
assert.Contains(t, flags, "miner")
flags, err = Check([]string{
"sleep",
"apache2",
"hellminer",
})
assert.Nil(t, err)
assert.Contains(t, flags, "miner")
flags, err = Check([]string{
"sleep",
"apache2",
"miner", // This is not among patterns map
})
assert.Nil(t, err)
assert.NotContains(t, flags, "miner")
}

12
detector/patterns.go Normal file

@ -0,0 +1,12 @@
package detector
var patterns map[string][]string = map[string][]string{
"miner": {
`verus\-solve`,
`hellminer`,
},
"bot": {
`youtube\-dl`,
`shiziyama`,
},
}
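Putting `Check` and the patterns map together, the flow can be sketched as a self-contained program. The map below is a trimmed copy and the lowercase `check` is a hypothetical standalone variant, for illustration only:

```go
package main

import (
	"fmt"
	"regexp"
)

// Trimmed copy of the patterns map above, for illustration.
var patterns = map[string][]string{
	"miner": {`verus\-solve`, `hellminer`},
	"bot":   {`youtube\-dl`},
}

// check flags a process list; MatchString is unanchored, so the
// ".*" wrapping used in the original is not strictly needed.
func check(processes []string) ([]string, error) {
	seen := map[string]bool{}
	for _, process := range processes {
		for flag, patternSet := range patterns {
			for _, pattern := range patternSet {
				matched, err := regexp.MatchString(pattern, process)
				if err != nil {
					return nil, err
				}
				if matched {
					seen[flag] = true
					break
				}
			}
		}
	}
	flags := []string{}
	for flag := range seen {
		flags = append(flags, flag)
	}
	return flags, nil
}

func main() {
	flags, _ := check([]string{"sleep", "hellminer"})
	fmt.Println(flags)
}
```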


@ -1,19 +0,0 @@
package docker
// ContainerStats contains fields returned in docker stats function stream
type ContainerStats struct {
Pids struct {
Current int `json:"current"`
} `json:"pids_stats"`
CPU struct {
Usage struct {
Total int64 `json:"total_usage"`
} `json:"cpu_usage"`
} `json:"cpu_stats"`
Memory struct {
Usage int `json:"usage"`
MaxUsage int `json:"max_usage"`
Limit int `json:"limit"`
} `json:"memory_stats"`
ID string `json:"id"`
}


@ -1,279 +0,0 @@
package docker
import (
"log"
"path"
"strings"
"github.com/rosti-cz/node-api/apps"
"github.com/rosti-cz/node-api/common"
)
// username under which all apps run inside the containers
const appUsername = "app"
const passwordFile = "/srv/.rosti"
// Process contains info about background application usually running in supervisor
type Process struct {
Name string `json:"name"`
State string `json:"state"`
}
// Container extends App struct from App
type Container struct {
App *apps.App `json:"app"`
}
func (c *Container) getDriver() *Driver {
config := common.GetConfig()
driver := &Driver{
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
}
return driver
}
// volumeHostPath returns the host path of the single volume mounted into the container
func (c *Container) volumeHostPath() string {
config := common.GetConfig()
return path.Join(config.AppsPath, c.App.Name)
}
// GetRawResourceStats returns RAW CPU and memory usage directly from Docker API
func (c *Container) GetRawResourceStats() (int64, int, error) {
driver := c.getDriver()
cpu, memory, err := driver.RawStats(c.App.Name)
return cpu, memory, err
}
// GetState returns app state object with populated state fields
func (c *Container) GetState() (*apps.AppState, error) {
status, err := c.Status()
if err != nil {
return nil, err
}
// TODO: this implementation takes more than one hour for 470 containers. It needs to be implemented differently.
// cpu, memory, err := c.ResourceUsage()
// if err != nil {
// return nil, err
// }
bytes, inodes, err := c.DiskUsage()
if err != nil {
return nil, err
}
state := apps.AppState{
State: status,
// CPUUsage: cpu,
// MemoryUsage: memory,
CPUUsage: -1.0,
MemoryUsage: -1.0,
DiskUsageBytes: bytes,
DiskUsageInodes: inodes,
}
return &state, nil
}
// Status returns state of the container
// Possible values: running, stopped, no-container, unknown
func (c *Container) Status() (string, error) {
status := "unknown"
// config := common.GetConfig()
// if _, err := os.Stat(path.Join(config.AppsPath, c.App.Name)); !os.IsNotExist(err) {
// status = "data-only"
// }
driver := c.getDriver()
containerStatus, err := driver.Status(c.App.Name)
if err != nil && err.Error() == "no container found" {
return "no-container", nil
}
if err != nil {
return status, err
}
status = containerStatus
return status, nil
}
// DiskUsage returns the number of bytes and inodes used by the container in its mounted volume
func (c *Container) DiskUsage() (int, int, error) {
return du(c.volumeHostPath())
}
// ResourceUsage returns amount of memory in B and CPU in % that the app occupies
func (c *Container) ResourceUsage() (float64, int, error) {
driver := c.getDriver()
cpu, memory, err := driver.Stats(c.App.Name)
if err != nil {
return 0.0, 0, err
}
return cpu, memory, nil
}
// Create creates the container
func (c *Container) Create() error {
driver := c.getDriver()
_, err := driver.Create(
c.App.Name,
c.App.Image,
c.volumeHostPath(),
c.App.HTTPPort,
c.App.SSHPort,
c.App.CPU,
c.App.Memory,
[]string{},
)
return err
}
// Start starts the container
func (c *Container) Start() error {
driver := c.getDriver()
return driver.Start(c.App.Name)
}
// Stop stops the container
func (c *Container) Stop() error {
driver := c.getDriver()
return driver.Stop(c.App.Name)
}
// Restart restarts the container
func (c *Container) Restart() error {
driver := c.getDriver()
err := driver.Stop(c.App.Name)
if err != nil {
return err
}
return driver.Start(c.App.Name)
}
// Destroy removes the container but keeps the data so it can be created again
func (c *Container) Destroy() error {
driver := c.getDriver()
return driver.Remove(c.App.Name)
}
// Delete removes both data and the container
func (c *Container) Delete() error {
status, err := c.Status()
if err != nil {
return err
}
// It's questionable to have this here. The problem is that this method
// does two things: deleting the container and deleting the data. When
// the container doesn't exist we don't care and can continue
// to remove the data.
if status != "no-container" {
err = c.Destroy()
if err != nil {
return err
}
}
config := common.GetConfig()
volumePath := path.Join(config.AppsPath, c.App.Name)
err = removeDirectory(volumePath)
if err != nil {
log.Println(err)
}
return nil
}
// SetPassword configures password for system user app in the container
func (c *Container) SetPassword(password string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"chpasswd"}, appUsername+":"+password, []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"tee", passwordFile}, password, []string{}, false)
if err != nil {
return err
}
return err
}
// SetFileContent uploads text into a file inside the container. It's great for uploading SSH keys.
// The method creates the directory where the file is located and sets the mode of the final file.
func (c *Container) SetFileContent(filename string, text string, mode string) error {
driver := c.getDriver()
directory := path.Dir(filename)
_, err := driver.Exec(c.App.Name, []string{"mkdir", "-p", directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"tee", filename}, text, []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", "app:app", directory}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chown", "app:app", filename}, "", []string{}, false)
if err != nil {
return err
}
_, err = driver.Exec(c.App.Name, []string{"chmod", mode, filename}, "", []string{}, false)
return err
}
// SetTechnology prepares container for given technology (Python, PHP, Node.js, ...)
// Where tech can be php, python or node and latest available version is used.
func (c *Container) SetTechnology(tech string) error {
driver := c.getDriver()
_, err := driver.Exec(c.App.Name, []string{"su", "app", "-c", "rosti " + tech}, "", []string{}, false)
return err
}
// GetProcessList returns list of processes managed by supervisor.
func (c *Container) GetProcessList() (*[]Process, error) {
driver := c.getDriver()
processes := []Process{}
stdouterr, err := driver.Exec(c.App.Name, []string{"supervisorctl", "status"}, "", []string{}, true)
if err != nil {
return &processes, nil
}
trimmed := strings.TrimSpace(string(*stdouterr))
for _, row := range strings.Split(trimmed, "\n") {
fields := strings.Fields(row)
if len(fields) > 2 {
processes = append(processes, Process{
Name: fields[0],
State: fields[1],
})
}
}
return &processes, nil
}
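The parsing in GetProcessList can be sketched standalone. `parseSupervisorStatus` is a hypothetical helper and the sample text only mimics the `supervisorctl status` output format:

```go
package main

import (
	"fmt"
	"strings"
)

type Process struct {
	Name  string `json:"name"`
	State string `json:"state"`
}

// parseSupervisorStatus mirrors the loop in GetProcessList: each line of
// `supervisorctl status` output starts with the process name and its
// state, followed by pid and uptime details.
func parseSupervisorStatus(output string) []Process {
	processes := []Process{}
	for _, row := range strings.Split(strings.TrimSpace(output), "\n") {
		fields := strings.Fields(row)
		if len(fields) > 2 {
			processes = append(processes, Process{Name: fields[0], State: fields[1]})
		}
	}
	return processes
}

func main() {
	sample := "app    RUNNING   pid 12, uptime 0:05:01\nnginx  RUNNING   pid 13, uptime 0:05:01\n"
	for _, p := range parseSupervisorStatus(sample) {
		fmt.Println(p.Name, p.State)
	}
}
```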

975
glue/main.go Normal file

@ -0,0 +1,975 @@
package glue
import (
"errors"
"fmt"
"io"
"log"
"net/http"
"os"
"os/exec"
"path"
"strings"
"time"
"github.com/jinzhu/gorm"
"github.com/rosti-cz/node-api/apps"
"github.com/rosti-cz/node-api/containers"
docker "github.com/rosti-cz/node-api/containers"
"github.com/rosti-cz/node-api/detector"
"github.com/rosti-cz/node-api/node"
)
// ENABLE_TECH_WAIT is the number of extra seconds to wait for the container
const ENABLE_TECH_WAIT = 10
// Processor separates logic of apps, containers, detector and node from handlers.
// It defines common interface for both types of handlers, HTTP and the events.
type Processor struct {
AppName string
DB *gorm.DB
SnapshotProcessor *apps.SnapshotProcessor
NodeProcessor *node.Processor
WaitForAppLoops uint // each loop is five seconds
DockerSock string
BindIPHTTP string
BindIPSSH string
AppsPath string
}
// getContainer returns a prepared Container instance
func (p *Processor) getContainer() (containers.Container, error) {
container := containers.Container{}
processor := p.getAppProcessor()
app, err := processor.Get(p.AppName)
if err != nil {
return container, err
}
container = docker.Container{
App: &app,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
return container, nil
}
// getAppProcessor returns an instance of AppsProcessor
func (p *Processor) getAppProcessor() apps.AppsProcessor {
processor := apps.AppsProcessor{
DB: p.DB,
}
processor.Init()
return processor
}
// waitForApp waits until the app is ready
func (p *Processor) waitForApp() error {
sleepFor := 5 * time.Second
loops := 6
if p.WaitForAppLoops != 0 {
loops = int(p.WaitForAppLoops)
}
statsProcessor := StatsProcessor{
DB: p.DB,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
for i := 0; i < loops; i++ {
err := statsProcessor.UpdateState(p.AppName)
if err != nil {
time.Sleep(sleepFor)
continue
}
container, err := p.getContainer()
if err != nil {
return err
}
status, err := container.Status()
if err != nil {
return err
}
if status.Status == "running" {
return nil
}
time.Sleep(sleepFor)
}
return errors.New("timeout reached")
}
// List returns list of apps
// noUpdate skips stats gathering to speed things up
func (p *Processor) List(noUpdate bool) (apps.Apps, error) {
appList := apps.Apps{}
if !noUpdate {
statsProcessor := StatsProcessor{
DB: p.DB,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
err := statsProcessor.GatherStates()
if err != nil {
return appList, fmt.Errorf("backend error: %v", err)
}
}
processor := p.getAppProcessor()
appList, err := processor.List()
if err != nil {
return appList, fmt.Errorf("backend error: %v", err)
}
return appList, err
}
// Get returns one app
func (p *Processor) Get(noUpdate bool) (apps.App, error) {
app := apps.App{}
if !noUpdate {
statsProcessor := StatsProcessor{
DB: p.DB,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
err := statsProcessor.UpdateState(p.AppName)
if err != nil {
return app, err
}
}
processor := p.getAppProcessor()
app, err := processor.Get(p.AppName)
if err != nil {
return app, err
}
// Gather runtime info about the container
if !noUpdate {
container := docker.Container{
App: &app,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
isPasswordSet, err := container.IsPasswordSet()
if err != nil {
return app, err
}
app.IsPasswordSet = isPasswordSet
status, err := container.Status()
if err != nil {
return app, err
}
if status.Status == "running" {
var err error
app.Techs, err = container.GetTechs()
if err != nil {
return app, err
}
app.PrimaryTech, err = container.GetPrimaryTech()
if err != nil {
return app, err
}
processList, err := container.GetSystemProcesses()
if err != nil {
return app, err
}
flags, err := detector.Check(processList)
if err != nil {
return app, err
}
app.Flags = flags.String()
}
}
return app, nil
}
// volumeFromURL takes a URL of a tar archive and prepares the container's volume from it.
func (p *Processor) volumeFromURL(url string, container *docker.Container) error {
// Validation, check if url ends with tar.zst
if !strings.HasSuffix(url, ".tar.zst") {
return fmt.Errorf("archive has to end with .tar.zst")
}
volumePath := container.VolumeHostPath()
// Prepare volume path
err := os.MkdirAll(volumePath, 0755)
if err != nil {
return fmt.Errorf("failed to create volume path: %v", err)
}
// Download the archive
archivePath := path.Join(volumePath, "archive.tar.zst")
log.Printf("%s: downloading archive from %s\n", container.App.Name, url)
f, err := os.Create(archivePath)
if err != nil {
return fmt.Errorf("failed to create archive file: %v", err)
}
defer f.Close()
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("failed to download archive: %v", err)
}
defer resp.Body.Close()
n, err := io.Copy(f, resp.Body)
if err != nil {
return fmt.Errorf("failed to download archive: %v", err)
}
log.Printf("downloaded %d bytes\n", n)
// Extract the archive
log.Printf("%s: extracting archive\n", container.App.Name)
// Call tar xf archive.tar.zst -C /volume
cmd := exec.Command("tar", "-xf", archivePath, "-C", volumePath)
err = cmd.Run()
if err != nil {
log.Printf("%s: failed to extract archive: %v", container.App.Name, err)
return err
}
// Remove archive
log.Printf("%s: removing archive\n", container.App.Name)
err = os.Remove(volumePath + "/archive.tar.zst")
if err != nil {
return fmt.Errorf("failed to remove archive: %v", err)
}
log.Printf("%s: volume preparing done\n", container.App.Name)
return nil
}
// Create creates a single app in the system
func (p *Processor) Create(appTemplate apps.App) error {
if appTemplate.EnvRaw == nil {
appTemplate.EnvRaw = make(map[string]interface{})
}
err := p.Register(appTemplate)
if err != nil {
return err
}
container := docker.Container{
App: &appTemplate,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
if len(appTemplate.Snapshot) > 0 && len(appTemplate.Setup.ArchiveURL) > 0 {
return fmt.Errorf("snapshot and archive_url cannot be used together")
}
if len(appTemplate.Setup.ArchiveURL) > 0 {
err = p.volumeFromURL(appTemplate.Setup.ArchiveURL, &container)
if err != nil {
return fmt.Errorf("failed to prepare volume: %v", err)
}
}
err = container.Create()
if err != nil {
return err
}
// Restore from snapshot if it's noted in the request
if len(appTemplate.Snapshot) > 0 {
log.Printf("App %s is going to be created from %s snapshot\n", appTemplate.Name, appTemplate.Snapshot)
// Restore the data
err = p.SnapshotProcessor.RestoreSnapshot(appTemplate.Snapshot, appTemplate.Name)
if err != nil {
return fmt.Errorf("failed to restore snapshot: %v", err)
}
}
err = container.Start()
if err != nil {
return fmt.Errorf("failed to start container: %v", err)
}
// Wait for the app to be created
err = p.waitForApp()
if err != nil {
return fmt.Errorf("failed to wait for app: %v", err)
}
time.Sleep(5 * time.Second) // We wait for a little bit longer to make sure the container is fully started
// Setup SSH keys if it's noted in the request
log.Println("Checking if SSH key is required")
if len(appTemplate.Setup.SSHKeys) > 0 && len(appTemplate.Snapshot) == 0 {
log.Println("Setting up SSH keys")
err = p.UpdateKeys(appTemplate.Setup.SSHKeys)
if err != nil {
return fmt.Errorf("failed to update keys: %v", err)
}
}
// Setup technology if it's noted in the request
if len(appTemplate.Setup.Tech) > 0 && len(appTemplate.Snapshot) == 0 {
err = p.EnableTech(appTemplate.Setup.Tech, appTemplate.Setup.TechVersion)
if err != nil {
return fmt.Errorf("failed to enable tech: %v", err)
}
}
// Set password if it's noted in the request
if len(appTemplate.Setup.Password) > 0 && len(appTemplate.Snapshot) == 0 {
err = p.SetPassword(appTemplate.Setup.Password)
if err != nil {
return fmt.Errorf("failed to set password: %v", err)
}
}
// Changes port of the app hosted inside the container
if appTemplate.AppPort != 0 && len(appTemplate.Snapshot) == 0 {
err = container.SetAppPort(appTemplate.AppPort)
if err != nil {
return fmt.Errorf("failed to change app port to %d: %v", appTemplate.AppPort, err)
}
err = container.RestartProcess("nginx")
if err != nil {
return fmt.Errorf("failed to restart nginx: %v", err)
}
}
return nil
}
// Register registers an app without creating a container for it.
// Returns an error if registration or validation fails.
func (p *Processor) Register(appTemplate apps.App) error {
processor := p.getAppProcessor()
err := processor.New(appTemplate.Name, appTemplate.SSHPort, appTemplate.HTTPPort, appTemplate.Image, appTemplate.CPU, appTemplate.Memory)
if err != nil {
if validationError, ok := err.(apps.ValidationError); ok {
return fmt.Errorf("validation error: %v", validationError.Error())
}
return err
}
return nil
}
// Update updates application
func (p *Processor) Update(appTemplate apps.App) error {
if appTemplate.EnvRaw == nil {
appTemplate.EnvRaw = make(map[string]interface{})
}
processor := p.getAppProcessor()
app, err := processor.Update(appTemplate.Name, appTemplate.SSHPort, appTemplate.HTTPPort, appTemplate.Image, appTemplate.CPU, appTemplate.Memory, appTemplate.GetEnv())
if err != nil {
if validationError, ok := err.(apps.ValidationError); ok {
return fmt.Errorf("validation error: %v", validationError.Error())
}
return err
}
container := docker.Container{
App: app,
DockerSock: p.DockerSock,
BindIPHTTP: p.BindIPHTTP,
BindIPSSH: p.BindIPSSH,
AppsPath: p.AppsPath,
}
err = container.Destroy()
if err != nil && err.Error() == "no container found" {
// We don't care if the container didn't exist anyway
err = nil
}
if err != nil {
return err
}
err = container.Create()
if err != nil {
return err
}
err = container.Start()
if err != nil {
return err
}
// Setup technology if it's noted in the request
if len(appTemplate.Setup.Tech) > 0 {
err := p.waitForApp()
if err != nil {
return err
}
err = p.EnableTech(appTemplate.Setup.Tech, appTemplate.Setup.TechVersion)
if err != nil {
return fmt.Errorf("failed to enable tech: %v", err)
}
// We restart the container so everything can use the new tech
err = container.Restart()
if err != nil {
return err
}
}
return nil
}
// Delete removes app from the system
func (p *Processor) Delete() error {
processor := p.getAppProcessor()
container, err := p.getContainer()
if err != nil {
log.Println("ERROR: delete app:", err.Error())
return err
}
status, err := container.Status()
if err != nil {
return err
}
if status.Status != "no-container" {
// We stop the container first
err = container.Stop()
if err != nil {
return err
}
// Then delete it
err = container.Delete()
if err != nil {
return err
}
}
err = processor.Delete(p.AppName)
if err != nil {
return err
}
return nil
}
// Stop stops app
func (p *Processor) Stop() error {
container, err := p.getContainer()
if err != nil {
return err
}
status, err := container.Status()
if err != nil {
return err
}
// Stop the container only when it exists
if status.Status != "no-container" {
err = container.Stop()
if err != nil {
return err
}
err = container.Destroy()
if err != nil {
return err
}
}
return nil
}
// Start starts app
func (p *Processor) Start() error {
container, err := p.getContainer()
if err != nil {
return err
}
status, err := container.Status()
if err != nil {
return err
}
if status.Status == "no-container" {
err = container.Create()
if err != nil {
return err
}
}
err = container.Start()
if err != nil {
return err
}
return nil
}
// Restart restarts app
func (p *Processor) Restart() error {
container, err := p.getContainer()
if err != nil {
return err
}
err = container.Restart()
if err != nil {
return err
}
return nil
}
// UpdateKeys uploads SSH keys into the container
// keys parameters is just a string, one key per line
func (p *Processor) UpdateKeys(keys string) error {
err := p.waitForApp()
if err != nil {
return err
}
container, err := p.getContainer()
if err != nil {
return err
}
log.Println("Storing keys into " + sshPubKeysLocation)
err = container.AppendFile(sshPubKeysLocation, keys+"\n", "0600")
if err != nil {
return err
}
return nil
}
// SetPassword sets up a new password to access the SSH user
func (p *Processor) SetPassword(password string) error {
err := p.waitForApp()
if err != nil {
return err
}
container, err := p.getContainer()
if err != nil {
return err
}
err = container.SetPassword(password)
if err != nil {
return err
}
return nil
}
// ClearPassword removes password from the SSH user
func (p *Processor) ClearPassword() error {
err := p.waitForApp()
if err != nil {
return err
}
container, err := p.getContainer()
if err != nil {
return err
}
err = container.ClearPassword()
if err != nil {
return err
}
return nil
}
// GenerateDeploySSHKeys generates an SSH key pair and adds the public key into authorized_keys.
// This pair of keys is used for deployment.
// Returns the private key, the public key and an error.
// The keys are returned every time, even if they were generated before.
func (p *Processor) GenerateDeploySSHKeys() (string, string, error) {
container, err := p.getContainer()
if err != nil {
return "", "", err
}
// If the container is not running we skip this code
status, err := container.Status()
if err != nil {
return "", "", err
}
if status.Status != "running" {
return "", "", nil
}
created, err := container.GenerateDeploySSHKeys()
if err != nil {
return "", "", err
}
privateKey, pubKey, err := container.GetDeploySSHKeys()
if err != nil {
return "", "", err
}
if created {
err = container.AppendFile(sshPubKeysLocation, pubKey+"\n", "0600")
if err != nil {
return "", "", err
}
}
return privateKey, pubKey, nil
}
// GetHostKey returns the SSH host key without the hostname (the first part of the line)
func (p *Processor) GetHostKey() (string, error) {
container, err := p.getContainer()
if err != nil {
return "", err
}
// If the container is not running we skip this code
status, err := container.Status()
if err != nil {
return "", err
}
if status.Status != "running" {
return "", nil
}
hostKey, err := container.GetHostKey()
if err != nil {
return "", err
}
return hostKey, nil
}
// SaveMetadata saves metadata about the app into a file
func (p *Processor) SaveMetadata(metadata string) error {
container, err := p.getContainer()
if err != nil {
return err
}
volumePath := container.VolumeHostPath()
f, err := os.Create(path.Join(volumePath, ".metadata.json"))
if err != nil {
return err
}
defer f.Close()
_, err = f.Write([]byte(metadata))
if err != nil {
return err
}
// Set permissions
err = os.Chmod(path.Join(volumePath, ".metadata.json"), 0600)
if err != nil {
return err
}
// Set owner
err = os.Chown(path.Join(volumePath, ".metadata.json"), ownerUID, ownerGID)
if err != nil {
return err
}
return nil
}
// Processes returns list of supervisord processes
func (p *Processor) Processes() ([]docker.Process, error) {
container, err := p.getContainer()
if err != nil {
return nil, err
}
processes, err := container.GetProcessList()
if err != nil {
return nil, err
}
return processes, nil
}
// EnableTech sets up runtime for new tech or new version a tech
func (p *Processor) EnableTech(service, version string) error {
err := p.waitForApp()
if err != nil {
return err
}
time.Sleep(ENABLE_TECH_WAIT * time.Second)
container, err := p.getContainer()
if err != nil {
return err
}
err = container.SetTechnology(service, version)
if err != nil {
return err
}
return nil
}
// Rebuild recreates container for app
func (p *Processor) Rebuild() error {
container, err := p.getContainer()
if err != nil {
return err
}
err = container.Destroy()
if err != nil && !strings.Contains(err.Error(), "no container found") {
return err
}
err = container.Create()
if err != nil {
return err
}
err = container.Start()
if err != nil {
return err
}
return nil
}
// AddLabel adds a label to the app
func (p *Processor) AddLabel(label string) error {
appProcessor := p.getAppProcessor()
err := appProcessor.AddLabel(p.AppName, label)
if err != nil {
return err
}
return nil
}
// RemoveLabel removes a label from the app
func (p *Processor) RemoveLabel(label string) error {
appProcessor := p.getAppProcessor()
err := appProcessor.RemoveLabel(p.AppName, label)
if err != nil {
return err
}
return nil
}
// GetNode returns the node's info
func (p *Processor) GetNode() (*node.Node, error) {
node, err := p.NodeProcessor.GetNodeInfo()
if err != nil {
return nil, err
}
return node, nil
}
// CreateSnapshot creates a snapshot of given app
func (p *Processor) CreateSnapshot(labels []string) error {
_, err := p.SnapshotProcessor.CreateSnapshot(p.AppName, labels)
return err
}
// RestoreFromSnapshot restores app from given snapshot
func (p *Processor) RestoreFromSnapshot(snapshotName string) error {
container, err := p.getContainer()
if err != nil {
return err
}
// Stop the container
status, err := container.Status()
if err != nil {
return err
}
// Stop the container only when it exists
if status.Status != "no-container" {
err = container.Stop()
if err != nil {
return err
}
}
// Restore the data
err = p.SnapshotProcessor.RestoreSnapshot(snapshotName, p.AppName)
if err != nil {
return err
}
// Start the container
status, err = container.Status()
if err != nil {
return err
}
if status.Status == "no-container" {
err = container.Create()
if err != nil {
return err
}
}
err = container.Start()
if err != nil {
return err
}
return nil
}
// ListSnapshots returns list of snapshots
func (p *Processor) ListSnapshots() (SnapshotsMetadata, error) {
snapshots, err := p.SnapshotProcessor.ListAppSnapshots(p.AppName)
if err != nil {
return SnapshotsMetadata{}, err
}
var output SnapshotsMetadata
for _, snapshot := range snapshots {
output = append(output, SnapshotMetadata{
Key: snapshot.KeyName(p.SnapshotProcessor.IndexLabel),
Metadata: snapshot,
})
}
return output, nil
}
// ListAppsSnapshots returns list of snapshots for given app list
func (p *Processor) ListAppsSnapshots(appNames []string) (SnapshotsMetadata, error) {
snapshots, err := p.SnapshotProcessor.ListAppsSnapshots(appNames)
if err != nil {
return SnapshotsMetadata{}, err
}
var output SnapshotsMetadata
for _, snapshot := range snapshots {
output = append(output, SnapshotMetadata{
Key: snapshot.KeyName(p.SnapshotProcessor.IndexLabel),
Metadata: snapshot,
})
}
return output, nil
}
// ListSnapshotsByLabel returns list of snapshots by given label
func (p *Processor) ListSnapshotsByLabel(label string) (SnapshotsMetadata, error) {
snapshots, err := p.SnapshotProcessor.ListAppsSnapshotsByLabel(label)
if err != nil {
return SnapshotsMetadata{}, err
}
var output SnapshotsMetadata
for _, snapshot := range snapshots {
output = append(output, SnapshotMetadata{
Key: snapshot.KeyName(p.SnapshotProcessor.IndexLabel),
Metadata: snapshot,
})
}
return output, nil
}
// GetSnapshot returns info about a single snapshot
func (p *Processor) GetSnapshot(snapshotName string) (SnapshotMetadata, error) {
snapshot, err := p.SnapshotProcessor.GetSnapshot(snapshotName)
if err != nil {
return SnapshotMetadata{}, err
}
output := SnapshotMetadata{
Key: snapshot.KeyName(p.SnapshotProcessor.IndexLabel),
Metadata: snapshot,
}
return output, nil
}
// GetSnapshotDownloadLink returns a download link for a snapshot
func (p *Processor) GetSnapshotDownloadLink(snapshotName string) (string, error) {
link, err := p.SnapshotProcessor.GetDownloadLink(snapshotName)
if err != nil {
return "", err
}
return link, nil
}
// DeleteSnapshot deletes a snapshot
func (p *Processor) DeleteSnapshot(snapshotName string) error {
err := p.SnapshotProcessor.DeleteSnapshot(snapshotName)
if err != nil {
return err
}
return nil
}
// DeleteAppSnapshots deletes all snapshots of the given app
func (p *Processor) DeleteAppSnapshots() error {
err := p.SnapshotProcessor.DeleteAppSnapshots(p.AppName)
if err != nil {
return err
}
return nil
}
// GetActiveTech returns the active technology in the app
func (p *Processor) GetActiveTech() (*containers.TechInfo, error) {
container, err := p.getContainer()
if err != nil {
return nil, err
}
tech, err := container.GetActiveTech()
if err != nil {
return tech, err
}
return tech, nil
}

244
glue/main_test.go Normal file

@ -0,0 +1,244 @@
package glue
import (
"log"
"os"
"path"
"testing"
"github.com/jinzhu/gorm"
"github.com/rosti-cz/node-api/apps"
"github.com/stretchr/testify/assert"
// This is line from GORM documentation that imports database dialect
_ "github.com/jinzhu/gorm/dialects/sqlite"
)
const testDBPath = "file::memory:?cache=shared"
var testAppTemplate apps.App = apps.App{
Name: "test_1234",
SSHPort: 10000,
HTTPPort: 10001,
Image: "harbor.hq.rosti.cz/public/runtime:2022.01-1",
CPU: 50,
Memory: 256,
}
func getPWD() string {
dir, err := os.Getwd()
if err != nil {
log.Fatal(err)
}
return dir
}
func testDB() *gorm.DB {
db, err := gorm.Open("sqlite3", testDBPath)
if err != nil {
log.Fatalln(err)
}
return db
}
func getTestDockerSock() string {
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
return "unix:///run/user/1000/podman/podman.sock"
}
return dockerSocket
}
var processor Processor
func TestMain(m *testing.M) {
// This processor is used to test all methods
processor = Processor{
AppName: testAppTemplate.Name,
DB: testDB(),
// SnapshotProcessor *apps.SnapshotProcessor
DockerSock: getTestDockerSock(),
BindIPHTTP: "127.0.0.1",
BindIPSSH: "127.0.0.1",
AppsPath: path.Join(getPWD(), "tmp/apps"),
}
exitVal := m.Run()
os.Exit(exitVal)
}
func TestProcessorCreate(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
err = processor.Delete()
assert.Nil(t, err)
}
// func TestProcessorList(t *testing.T) {
// }
func TestProcessorGet(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
app, err := processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "running", app.State)
err = processor.Delete()
assert.Nil(t, err)
}
func TestProcessorRegister(t *testing.T) {
err := processor.Register(testAppTemplate)
assert.Nil(t, err)
app, err := processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "no-container", app.State)
err = processor.Delete()
assert.Nil(t, err)
}
func TestProcessorUpdate(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
app, err := processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "running", app.State)
updatedTemplate := testAppTemplate
updatedTemplate.Memory = 1024
err = processor.Update(updatedTemplate)
assert.Nil(t, err)
assert.Equal(t, "running", app.State)
app, err = processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, 1024, app.Memory)
	err = processor.Delete()
	assert.Nil(t, err)
}
func TestProcessorDelete(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
	err = processor.Delete()
	assert.Nil(t, err)
}
func TestProcessorStop(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
err = processor.Stop()
assert.Nil(t, err)
app, err := processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "exited", app.State)
	err = processor.Delete()
	assert.Nil(t, err)
}
func TestProcessorStart(t *testing.T) {
err := processor.Create(testAppTemplate)
assert.Nil(t, err)
err = processor.Stop()
assert.Nil(t, err)
app, err := processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "exited", app.State)
err = processor.Start()
assert.Nil(t, err)
app, err = processor.Get(false)
assert.Nil(t, err)
assert.Equal(t, "running", app.State)
	err = processor.Delete()
	assert.Nil(t, err)
}
// func TestProcessorRestart(t *testing.T) {
// }
// func TestProcessorUpdateKeys(t *testing.T) {
// }
// func TestProcessorSetPassword(t *testing.T) {
// }
// func TestProcessorProcesses(t *testing.T) {
// }
// func TestProcessorEnableTech(t *testing.T) {
// }
// func TestProcessorRebuild(t *testing.T) {
// }
// func TestProcessorAddLabel(t *testing.T) {
// }
// func TestProcessorRemoveLabel(t *testing.T) {
// }
// func TestProcessorGetNode(t *testing.T) {
// }
// func TestProcessorCreateSnapshot(t *testing.T) {
// }
// func TestProcessorRestoreFromSnapshot(t *testing.T) {
// }
// func TestProcessorListSnapshots(t *testing.T) {
// }
// func TestProcessorListAppsSnapshots(t *testing.T) {
// }
// func TestProcessorListSnapshotsByLabel(t *testing.T) {
// }
// func TestProcessorGetSnapshot(t *testing.T) {
// }
// func TestProcessorGetSnapshotDownloadLink(t *testing.T) {
// }
// func TestProcessorDeleteSnapshot(t *testing.T) {
// }
// func TestProcessorDeleteAppSnapshots(t *testing.T) {
// }

glue/stats.go Normal file

@ -0,0 +1,132 @@
package glue
import (
"log"
"github.com/getsentry/sentry-go"
"github.com/jinzhu/gorm"
"github.com/rosti-cz/node-api/apps"
docker "github.com/rosti-cz/node-api/containers"
)
// StatsProcessor covers all methods that are needed
// to gather information about application containers.
type StatsProcessor struct {
DB *gorm.DB
DockerSock string
BindIPHTTP string
BindIPSSH string
AppsPath string
}
// getAppProcessor returns an initialized apps.AppsProcessor instance.
func (s *StatsProcessor) getAppProcessor() apps.AppsProcessor {
processor := apps.AppsProcessor{
DB: s.DB,
}
processor.Init()
return processor
}
// UpdateUsage updates various resource usage metrics of the container/app in the database.
func (s *StatsProcessor) UpdateUsage(name string) error {
processor := s.getAppProcessor()
app, err := processor.Get(name)
if err != nil {
return err
}
container := docker.Container{
App: &app,
DockerSock: s.DockerSock,
BindIPHTTP: s.BindIPHTTP,
BindIPSSH: s.BindIPSSH,
AppsPath: s.AppsPath,
}
state, err := container.GetState()
if err != nil {
return err
}
err = processor.UpdateResources(
name,
state.State,
state.OOMKilled,
state.CPUUsage,
state.MemoryUsage,
state.DiskUsageBytes,
state.DiskUsageInodes,
state.Flags,
state.IsPasswordSet,
)
return err
}
// UpdateState checks the current status of the container and saves it into the database.
func (s *StatsProcessor) UpdateState(name string) error {
processor := s.getAppProcessor()
app, err := processor.Get(name)
if err != nil {
return err
}
container := docker.Container{
App: &app,
DockerSock: s.DockerSock,
BindIPHTTP: s.BindIPHTTP,
BindIPSSH: s.BindIPSSH,
AppsPath: s.AppsPath,
}
state, err := container.Status()
if err != nil {
return err
}
err = processor.UpdateState(
app.Name,
state.Status,
state.OOMKilled,
)
return err
}
// GatherStats loops over all applications and calls UpdateUsage to write various metrics into the database.
func (s *StatsProcessor) GatherStats() error {
processor := s.getAppProcessor()
appList, err := processor.List()
if err != nil {
return err
}
for _, app := range appList {
err := s.UpdateUsage(app.Name)
if err != nil {
sentry.CaptureException(err)
log.Println("STATS ERROR:", err.Error())
}
}
return nil
}
// GatherStates loops over all apps and updates their container state.
func (s *StatsProcessor) GatherStates() error {
processor := s.getAppProcessor()
appList, err := processor.List()
if err != nil {
return err
}
for _, app := range appList {
err := s.UpdateState(app.Name)
if err != nil {
sentry.CaptureException(err)
log.Println("STATE ERROR:", err.Error())
}
}
return nil
}
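GatherStats and GatherStates deliberately log a per-app failure (and report it to Sentry) and keep iterating instead of aborting the whole sweep. A minimal, self-contained sketch of that log-and-continue pattern; `updateOne` and `gatherAll` are hypothetical stand-ins for `UpdateUsage`/`UpdateState` and the loop around them, not part of this repo:

```go
package main

import (
	"errors"
	"fmt"
)

// updateOne stands in for UpdateUsage/UpdateState; it fails for one app
// to show that the loop keeps going.
func updateOne(name string) error {
	if name == "broken" {
		return errors.New("container not found")
	}
	return nil
}

// gatherAll mirrors the log-and-continue pattern in GatherStats/GatherStates:
// a failure for one app must not abort stats collection for the rest.
func gatherAll(names []string) (updated int, failed []string) {
	for _, name := range names {
		if err := updateOne(name); err != nil {
			// In the real code this is sentry.CaptureException + log.Println.
			failed = append(failed, name)
			continue
		}
		updated++
	}
	return updated, failed
}

func main() {
	updated, failed := gatherAll([]string{"app1", "broken", "app2"})
	fmt.Println(updated, failed)
}
```

The design choice matters on a node with many apps: one misbehaving container should degrade only its own metrics, not the whole collection cycle.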

glue/types.go Normal file

@ -0,0 +1,18 @@
package glue
import "github.com/rosti-cz/node-api/apps"
const ownerUID = 1000
const ownerGID = 1000
// Path to the authorized_keys file inside the app container
const sshPubKeysLocation = "/srv/.ssh/authorized_keys"
// SnapshotMetadata pairs a snapshot's storage key with its metadata
type SnapshotMetadata struct {
Key string `json:"key"`
Metadata apps.Snapshot `json:"metadata"`
}
// SnapshotsMetadata is returned by handlers
type SnapshotsMetadata []SnapshotMetadata

go.mod

@ -3,26 +3,31 @@ module github.com/rosti-cz/node-api
 go 1.14
 require (
-	github.com/Microsoft/go-winio v0.4.18 // indirect
+	github.com/Microsoft/go-winio v0.6.1 // indirect
-	github.com/StackExchange/wmi v0.0.0-20210224194228-fe8f1750fd46 // indirect
+	github.com/StackExchange/wmi v1.2.1 // indirect
-	github.com/docker/distribution v2.7.1+incompatible // indirect
-	github.com/docker/docker v1.13.1
+	github.com/containerd/log v0.1.0 // indirect
+	github.com/distribution/reference v0.5.0 // indirect
+	github.com/docker/docker v25.0.3+incompatible
 	github.com/docker/go-connections v0.4.0
-	github.com/docker/go-units v0.4.0 // indirect
+	github.com/docker/go-units v0.5.0 // indirect
-	github.com/go-ole/go-ole v1.2.5 // indirect
+	github.com/getsentry/sentry-go v0.26.0
 	github.com/gobuffalo/packr v1.30.1
-	github.com/golang/protobuf v1.5.2 // indirect
+	github.com/gogo/protobuf v1.3.2 // indirect
 	github.com/jinzhu/gorm v1.9.14
 	github.com/kelseyhightower/envconfig v1.4.0
-	github.com/labstack/echo v3.3.10+incompatible
+	github.com/labstack/echo/v4 v4.10.0
-	github.com/labstack/gommon v0.3.0 // indirect
-	github.com/mholt/archiver/v3 v3.5.0
 	github.com/minio/minio-go/v7 v7.0.14
-	github.com/nats-io/nats-server/v2 v2.6.1 // indirect
+	github.com/moby/term v0.5.0 // indirect
-	github.com/nats-io/nats.go v1.12.3
+	github.com/morikuni/aec v1.0.0 // indirect
-	github.com/opencontainers/go-digest v1.0.0 // indirect
+	github.com/nats-io/nats.go v1.23.0
-	github.com/opencontainers/image-spec v1.0.2
 	github.com/pkg/errors v0.9.1
-	github.com/satori/go.uuid v1.2.0
 	github.com/shirou/gopsutil v2.20.6+incompatible
-	github.com/stretchr/testify v1.4.0
+	github.com/stretchr/testify v1.8.4
-	google.golang.org/protobuf v1.27.1 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 // indirect
+	gorm.io/driver/mysql v1.4.7
+	gorm.io/gorm v1.25.2-0.20230530020048-26663ab9bf55
+	gotest.tools/v3 v3.5.1 // indirect
 )

go.sum

File diff suppressed because it is too large

@ -2,16 +2,16 @@ package main
import ( import (
"fmt" "fmt"
"io/ioutil" "io"
"log" "log"
"net/http" "net/http"
"os" "os"
"strings"
"github.com/labstack/echo" "github.com/labstack/echo/v4"
"github.com/rosti-cz/node-api/apps" "github.com/rosti-cz/node-api/apps"
"github.com/rosti-cz/node-api/common" "github.com/rosti-cz/node-api/common"
"github.com/rosti-cz/node-api/docker" "github.com/rosti-cz/node-api/glue"
"github.com/rosti-cz/node-api/node"
) )
func homeHandler(c echo.Context) error { func homeHandler(c echo.Context) error {
@ -21,16 +21,15 @@ func homeHandler(c echo.Context) error {
} }
func listAppsHandler(c echo.Context) error { func listAppsHandler(c echo.Context) error {
err := gatherStates() processor := glue.Processor{
if err != nil { DB: common.GetDBConnection(),
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
processor := apps.AppsProcessor{ applications, err := processor.List(false)
DB: common.GetDBConnection(),
}
applications, err := processor.List()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -41,16 +40,17 @@ func listAppsHandler(c echo.Context) error {
// Returns one app // Returns one app
func getAppHandler(c echo.Context) error { func getAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
fast := c.Param("fast")
err := updateState(name) processor := glue.Processor{
if err != nil { AppName: name,
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(fast == "1")
processor := apps.AppsProcessor{
DB: common.GetDBConnection(),
}
app, err := processor.Get(name)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -63,35 +63,33 @@ func getAppHandler(c echo.Context) error {
func createAppHandler(c echo.Context) error { func createAppHandler(c echo.Context) error {
registerOnly := c.QueryParam("register_only") == "1" registerOnly := c.QueryParam("register_only") == "1"
app := apps.App{} appTemplate := apps.App{}
err := c.Bind(&app) err := c.Bind(&appTemplate)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: appTemplate.Name,
} DB: common.GetDBConnection(),
err = processor.New(app.Name, app.SSHPort, app.HTTPPort, app.Image, app.CPU, app.Memory) DockerSock: config.DockerSocket,
if err != nil { BindIPHTTP: config.AppsBindIPHTTP,
if validationError, ok := err.(apps.ValidationError); ok { BindIPSSH: config.AppsBindIPSSH,
return c.JSONPretty(http.StatusBadRequest, Message{Errors: validationError.Errors}, JSONIndent) AppsPath: config.AppsPath,
}
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
if !registerOnly { if registerOnly {
container := docker.Container{ err = processor.Register(appTemplate)
App: &app, if err != nil && strings.Contains(err.Error(), "validation error") {
} return c.JSONPretty(http.StatusBadRequest, Message{Errors: []string{err.Error()}}, JSONIndent)
} else if err != nil {
err = container.Create()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
} else {
err = container.Start() err = processor.Create(appTemplate)
if err != nil { if err != nil && strings.Contains(err.Error(), "validation error") {
return c.JSONPretty(http.StatusBadRequest, Message{Errors: []string{err.Error()}}, JSONIndent)
} else if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
} }
@ -103,45 +101,24 @@ func createAppHandler(c echo.Context) error {
func updateAppHandler(c echo.Context) error { func updateAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
app := apps.App{} appTemplate := apps.App{}
err := c.Bind(&app) err := c.Bind(&appTemplate)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
appPointer, err := processor.Update(name, app.SSHPort, app.HTTPPort, app.Image, app.CPU, app.Memory) err = processor.Update(appTemplate)
if err != nil { if err != nil && strings.Contains(err.Error(), "validation error") {
if validationError, ok := err.(apps.ValidationError); ok { return c.JSONPretty(http.StatusBadRequest, Message{Errors: []string{err.Error()}}, JSONIndent)
return c.JSONPretty(http.StatusBadRequest, Message{Errors: validationError.Errors}, JSONIndent) } else if err != nil {
}
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
app = *appPointer
container := docker.Container{
App: &app,
}
err = container.Destroy()
if err != nil && err.Error() == "no container found" {
// We don't care if the container didn't exist anyway
err = nil
}
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
err = container.Create()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
err = container.Start()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -152,31 +129,19 @@ func updateAppHandler(c echo.Context) error {
func stopAppHandler(c echo.Context) error { func stopAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err := processor.Stop()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
container := docker.Container{
App: app,
}
status, err := container.Status()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
// Stop the container only when it exists
if status != "no-container" {
err = container.Stop()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
return c.JSON(http.StatusOK, Message{Message: "ok"}) return c.JSON(http.StatusOK, Message{Message: "ok"})
} }
@ -184,30 +149,15 @@ func stopAppHandler(c echo.Context) error {
func startAppHandler(c echo.Context) error { func startAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err := processor.Start()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
container := docker.Container{
App: app,
}
status, err := container.Status()
if err != nil {
return err
}
if status == "no-container" {
err = container.Create()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
err = container.Start()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -219,19 +169,15 @@ func startAppHandler(c echo.Context) error {
func restartAppHandler(c echo.Context) error { func restartAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err := processor.Restart()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
container := docker.Container{
App: app,
}
err = container.Restart()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -249,19 +195,35 @@ func setPasswordHandler(c echo.Context) error {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err = processor.SetPassword(password.Password)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
container := docker.Container{ return c.JSON(http.StatusOK, Message{Message: "ok"})
App: app, }
}
err = container.SetPassword(password.Password) // Clear password for the app user in the container
func clearPasswordHandler(c echo.Context) error {
name := c.Param("name")
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
err := processor.ClearPassword()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -273,24 +235,20 @@ func setPasswordHandler(c echo.Context) error {
func setKeysHandler(c echo.Context) error { func setKeysHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
body, err := ioutil.ReadAll(c.Request().Body) body, err := io.ReadAll(c.Request().Body)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err = processor.UpdateKeys(string(body) + "\n")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
container := docker.Container{
App: app,
}
err = container.SetFileContent(sshPubKeysLocation, string(body)+"\n", "0600")
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -301,73 +259,26 @@ func setKeysHandler(c echo.Context) error {
func setServicesHandler(c echo.Context) error { func setServicesHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
quickServices := &QuickServices{} tech := &Technology{}
err := c.Bind(quickServices) err := c.Bind(tech)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name)
err = processor.EnableTech(tech.Name, tech.Version)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
container := docker.Container{
App: app,
}
if quickServices.Python {
err = container.SetTechnology("python")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.PHP {
err = container.SetTechnology("php")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.Node {
err = container.SetTechnology("node")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.Ruby {
err = container.SetTechnology("ruby")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.Deno {
err = container.SetTechnology("deno")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.Memcached {
err = container.SetTechnology("memcached")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
if quickServices.Redis {
err = container.SetTechnology("redis")
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
}
return c.JSON(http.StatusOK, Message{Message: "ok"}) return c.JSON(http.StatusOK, Message{Message: "ok"})
} }
@ -375,33 +286,15 @@ func setServicesHandler(c echo.Context) error {
func rebuildAppHandler(c echo.Context) error { func rebuildAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) err := processor.Rebuild()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
container := docker.Container{
App: app,
}
err = container.Destroy()
if err != nil && err.Error() == "no container found" {
// We don't care if the container didn't exist anyway
err = nil
}
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
err = container.Create()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
err = container.Start()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -419,10 +312,15 @@ func addLabelHandler(c echo.Context) error {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
err = processor.AddLabel(name, label.Value) err = processor.AddLabel(label.Value)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -440,10 +338,15 @@ func deleteLabelHandler(c echo.Context) error {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
} }
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
err = processor.RemoveLabel(name, label.Value) err = processor.RemoveLabel(label.Value)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -456,36 +359,20 @@ func deleteLabelHandler(c echo.Context) error {
func deleteAppHandler(c echo.Context) error { func deleteAppHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ go func(name string) {
DB: common.GetDBConnection(), processor := glue.Processor{
} AppName: name,
app, err := processor.Get(name) DB: common.GetDBConnection(),
if err != nil { DockerSock: config.DockerSocket,
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) BindIPHTTP: config.AppsBindIPHTTP,
} BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
go func(app *apps.App) {
container := docker.Container{
App: app,
} }
err := processor.Delete()
status, err := container.Status()
if err != nil { if err != nil {
log.Println("ERROR delete application problem: " + err.Error()) log.Printf("Deletion of application failed: %v", err)
} }
if status != "no-container" { }(name)
err = container.Delete()
if err != nil {
log.Println("ERROR delete application problem: " + err.Error())
}
}
err = processor.Delete(app.Name)
if err != nil {
log.Println("ERROR delete application problem: " + err.Error())
}
}(app)
return c.JSON(http.StatusOK, Message{Message: "deleted"}) return c.JSON(http.StatusOK, Message{Message: "deleted"})
} }
@ -495,9 +382,44 @@ func getOrphansHander(c echo.Context) error {
return c.JSON(http.StatusOK, []string{}) return c.JSON(http.StatusOK, []string{})
} }
// Save metadata for the app
func saveMetadataHandler(c echo.Context) error {
name := c.Param("name")
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
body, err := io.ReadAll(c.Request().Body)
if err != nil {
return fmt.Errorf("error reading request body: %v", err)
}
err = processor.SaveMetadata(string(body))
if err != nil {
return fmt.Errorf("error while saving metadata: %v", err)
}
return nil
}
// Return info about the node including performance index // Return info about the node including performance index
func getNodeInfoHandler(c echo.Context) error { func getNodeInfoHandler(c echo.Context) error {
node, err := node.GetNodeInfo() processor := glue.Processor{
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
NodeProcessor: &nodeProcessor,
}
node, err := processor.GetNode()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -509,19 +431,15 @@ func getNodeInfoHandler(c echo.Context) error {
func getAppProcessesHandler(c echo.Context) error { func getAppProcessesHandler(c echo.Context) error {
name := c.Param("name") name := c.Param("name")
processor := apps.AppsProcessor{ processor := glue.Processor{
DB: common.GetDBConnection(), AppName: name,
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
} }
app, err := processor.Get(name) processes, err := processor.Processes()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
container := docker.Container{
App: app,
}
processes, err := container.GetProcessList()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -534,7 +452,15 @@ func metricsHandler(c echo.Context) error {
var metrics string var metrics string
// Node indexes // Node indexes
node, err := node.GetNodeInfo() processor := glue.Processor{
DB: common.GetDBConnection(),
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
NodeProcessor: &nodeProcessor,
}
node, err := processor.GetNode()
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
@ -552,18 +478,135 @@ func metricsHandler(c echo.Context) error {
metrics += fmt.Sprintf("rosti_node_memory_index{hostname=\"%s\"} %f\n", hostname, node.MemoryIndex) metrics += fmt.Sprintf("rosti_node_memory_index{hostname=\"%s\"} %f\n", hostname, node.MemoryIndex)
metrics += fmt.Sprintf("rosti_node_sold_memory{hostname=\"%s\"} %d\n", hostname, node.SoldMemory) metrics += fmt.Sprintf("rosti_node_sold_memory{hostname=\"%s\"} %d\n", hostname, node.SoldMemory)
processor := apps.AppsProcessor{ if elapsedMetric != -1 {
DB: common.GetDBConnection(), metrics += fmt.Sprintf("rosti_node_stats_time_elapsed{hostname=\"%s\"} %f\n", hostname, float64(elapsedMetric)/1000000000)
} }
apps, err := processor.List()
apps, err := processor.List(true)
if err != nil { if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent) return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
} }
for _, app := range *apps { for _, app := range apps {
metrics += fmt.Sprintf("rosti_node_app_disk_usage_bytes{hostname=\"%s\", app=\"%s\"} %d\n", hostname, app.Name, app.DiskUsageBytes) metrics += fmt.Sprintf("rosti_node_app_disk_usage_bytes{hostname=\"%s\", app=\"%s\"} %d\n", hostname, app.Name, app.DiskUsageBytes)
metrics += fmt.Sprintf("rosti_node_app_disk_usage_inodes{hostname=\"%s\", app=\"%s\"} %d\n", hostname, app.Name, app.DiskUsageInodes) metrics += fmt.Sprintf("rosti_node_app_disk_usage_inodes{hostname=\"%s\", app=\"%s\"} %d\n", hostname, app.Name, app.DiskUsageInodes)
} }
return c.String(http.StatusOK, metrics) return c.String(http.StatusOK, metrics)
} }
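metricsHandler builds plain Prometheus text-exposition lines with fmt.Sprintf. A self-contained sketch of the per-app gauge lines it emits; the metric names are copied from the handler, the sample values and the `appMetrics` helper are made up for illustration:

```go
package main

import "fmt"

// appMetrics renders per-app gauge lines in the Prometheus text exposition
// format, using the same fmt.Sprintf pattern as metricsHandler. %q quotes
// the label values exactly like the handler's \"%s\".
func appMetrics(hostname, app string, diskBytes, diskInodes int64) string {
	m := fmt.Sprintf("rosti_node_app_disk_usage_bytes{hostname=%q, app=%q} %d\n", hostname, app, diskBytes)
	m += fmt.Sprintf("rosti_node_app_disk_usage_inodes{hostname=%q, app=%q} %d\n", hostname, app, diskInodes)
	return m
}

func main() {
	// Prints two gauge lines for a single app.
	fmt.Print(appMetrics("node-26", "test_1234", 1048576, 420))
}
```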
func CreateSnapshotHandler(c echo.Context) error {
name := c.Param("name")
body := createSnapshotBody{}
	err := c.Bind(&body)
if err != nil {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
}
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
err = processor.CreateSnapshot(body.Labels)
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
return c.JSONPretty(http.StatusOK, Message{Message: "ok"}, JSONIndent)
}
func RestoreFromSnapshotHandler(c echo.Context) error {
name := c.Param("name")
snapshot := c.Param("snapshot")
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
err := processor.RestoreFromSnapshot(snapshot)
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
return c.JSONPretty(http.StatusOK, Message{Message: "ok"}, JSONIndent)
}
func ListSnapshotsHandler(c echo.Context) error {
name := c.Param("name")
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
snapshots, err := processor.ListSnapshots()
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
return c.JSON(http.StatusOK, snapshots)
}
func ListAppsSnapshotsHandler(c echo.Context) error {
name := c.Param("name")
apps := []string{}
err := c.Bind(&apps)
if err != nil {
return c.JSONPretty(http.StatusBadRequest, Message{Message: err.Error()}, JSONIndent)
}
processor := glue.Processor{
AppName: name,
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
snapshots, err := processor.ListAppsSnapshots(apps)
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
return c.JSON(http.StatusOK, snapshots)
}
func ListSnapshotsByLabelHandler(c echo.Context) error {
label := c.Param("label")
processor := glue.Processor{
AppName: "",
DB: common.GetDBConnection(),
SnapshotProcessor: &snapshotProcessor,
DockerSock: config.DockerSocket,
BindIPHTTP: config.AppsBindIPHTTP,
BindIPSSH: config.AppsBindIPSSH,
AppsPath: config.AppsPath,
}
snapshots, err := processor.ListSnapshotsByLabel(label)
if err != nil {
return c.JSONPretty(http.StatusInternalServerError, Message{Message: err.Error()}, JSONIndent)
}
return c.JSON(http.StatusOK, snapshots)
}
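The snapshot handlers above all answer under /v1/apps/:name/snapshots. A hedged client-side sketch against a stub server (the stub, the app name, and the snapshotsPath helper are illustrative; the real handlers sit behind echo and a glue.Processor):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// snapshotsPath builds the route registered for ListSnapshotsHandler.
func snapshotsPath(app string) string {
	return "/v1/apps/" + app + "/snapshots"
}

func main() {
	// Stand-in server: the real node-api answers this route with a JSON
	// list of snapshots; here it is stubbed to show the request shape only.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == snapshotsPath("myapp") {
			fmt.Fprint(w, `[]`)
			return
		}
		http.NotFound(w, r)
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + snapshotsPath("myapp"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
```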

File diff suppressed because it is too large

jsonmap/jsonmap.go Normal file

@@ -0,0 +1,100 @@
package jsonmap
import (
"bytes"
"context"
"database/sql/driver"
"encoding/json"
"errors"
"fmt"
"strings"
"gorm.io/driver/mysql"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
)
// JSONMap defines a JSON data type; it implements the driver.Valuer and sql.Scanner interfaces
type JSONMap map[string]interface{}
// Value returns the map as a JSON string, implementing the driver.Valuer interface
func (m JSONMap) Value() (driver.Value, error) {
if m == nil {
return nil, nil
}
ba, err := m.MarshalJSON()
return string(ba), err
}
// Scan parses a database value into the JSONMap, implementing the sql.Scanner interface
func (m *JSONMap) Scan(val interface{}) error {
if val == nil {
*m = make(JSONMap)
return nil
}
var ba []byte
switch v := val.(type) {
case []byte:
ba = v
case string:
ba = []byte(v)
default:
return errors.New(fmt.Sprint("Failed to unmarshal JSONB value:", val))
}
t := map[string]interface{}{}
rd := bytes.NewReader(ba)
decoder := json.NewDecoder(rd)
decoder.UseNumber()
err := decoder.Decode(&t)
*m = t
return err
}
// MarshalJSON outputs plain JSON instead of a base64-encoded []byte
func (m JSONMap) MarshalJSON() ([]byte, error) {
if m == nil {
return []byte("null"), nil
}
t := (map[string]interface{})(m)
return json.Marshal(t)
}
// UnmarshalJSON deserializes a []byte into the map
func (m *JSONMap) UnmarshalJSON(b []byte) error {
t := map[string]interface{}{}
err := json.Unmarshal(b, &t)
*m = JSONMap(t)
return err
}
// GormDataType gorm common data type
func (m JSONMap) GormDataType() string {
return "jsonmap"
}
// GormDBDataType gorm db data type
func (JSONMap) GormDBDataType(db *gorm.DB, field *schema.Field) string {
switch db.Dialector.Name() {
case "sqlite3":
return "JSON"
case "mysql":
return "JSON"
case "postgres":
return "JSONB"
case "sqlserver":
return "NVARCHAR(MAX)"
}
return ""
}
func (jm JSONMap) GormValue(ctx context.Context, db *gorm.DB) clause.Expr {
data, _ := jm.MarshalJSON()
switch db.Dialector.Name() {
case "mysql":
if v, ok := db.Dialector.(*mysql.Dialector); ok && !strings.Contains(v.ServerVersion, "MariaDB") {
return gorm.Expr("CAST(? AS JSON)", string(data))
}
}
return gorm.Expr("?", string(data))
}
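As a standalone illustration of the Value/Scan round-trip above, trimmed to the two methods and without the gorm integration or the decoder.UseNumber detail:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"fmt"
)

// JSONMap mirrors the jsonmap package above, reduced to the Value/Scan pair
// so the database round-trip can be shown without gorm.
type JSONMap map[string]interface{}

// Value serializes the map to a JSON string for the driver.
func (m JSONMap) Value() (driver.Value, error) {
	if m == nil {
		return nil, nil
	}
	ba, err := json.Marshal(m)
	return string(ba), err
}

// Scan restores the map from the []byte or string the driver returns.
func (m *JSONMap) Scan(val interface{}) error {
	if val == nil {
		*m = make(JSONMap)
		return nil
	}
	var ba []byte
	switch v := val.(type) {
	case []byte:
		ba = v
	case string:
		ba = []byte(v)
	default:
		return errors.New(fmt.Sprint("failed to unmarshal JSONB value:", val))
	}
	t := map[string]interface{}{}
	err := json.Unmarshal(ba, &t)
	*m = t
	return err
}

func main() {
	in := JSONMap{"env": "prod"}
	v, _ := in.Value() // what gorm writes to the JSON column
	var out JSONMap
	_ = out.Scan(v) // what gorm reads back
	fmt.Println(out["env"])
}
```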

main.go

@@ -5,10 +5,14 @@ import (
 	"log"
 	"time"
-	"github.com/labstack/echo"
+	"github.com/getsentry/sentry-go"
+	sentryecho "github.com/getsentry/sentry-go/echo"
+	"github.com/labstack/echo/v4"
 	"github.com/nats-io/nats.go"
 	"github.com/rosti-cz/node-api/apps"
+	"github.com/rosti-cz/node-api/apps/drivers"
 	"github.com/rosti-cz/node-api/common"
+	"github.com/rosti-cz/node-api/glue"
 	"github.com/rosti-cz/node-api/node"
 )
@@ -17,6 +21,10 @@ const JSONIndent = " "
 var config common.Config
 var nc *nats.Conn
+var snapshotProcessor apps.SnapshotProcessor
+var nodeProcessor node.Processor
+var elapsedMetric int // time elapsed while loading stats about apps
 func _init() {
 	var err error
@@ -24,9 +32,18 @@ func _init() {
 	// Load config from environment variables
 	config = *common.GetConfig()
+	// Sentry
+	sentry.Init(sentry.ClientOptions{
+		Dsn:              config.SentryDSN,
+		AttachStacktrace: true,
+		Environment:      config.SentryENV,
+		TracesSampleRate: 0.1,
+	})
 	// Connect to the NATS service
 	nc, err = nats.Connect(config.NATSURL)
 	if err != nil {
+		sentry.CaptureException(err)
 		log.Fatalln(err)
 	}
@@ -35,11 +52,35 @@ func _init() {
 		DB: common.GetDBConnection(),
 	}
 	processor.Init()
+	// Prepare snapshot processor
+	snapshotProcessor = apps.SnapshotProcessor{
+		AppsPath:        config.AppsPath,
+		TmpSnapshotPath: config.SnapshotsPath,
+		IndexLabel:      config.SnapshotsIndexLabel,
+		Driver: drivers.S3Driver{
+			S3AccessKey: config.SnapshotsS3AccessKey,
+			S3SecretKey: config.SnapshotsS3SecretKey,
+			S3Endpoint:  config.SnapshotsS3Endpoint,
+			S3SSL:       config.SnapshotsS3SSL,
+			Bucket:      config.SnapshotsS3Bucket,
+		},
+	}
+	nodeProcessor = node.Processor{
+		DB: common.GetDBConnection(),
+	}
+	nodeProcessor.Init()
+	// Reset elapsed stats
+	elapsedMetric = -1
 }
 func main() {
 	_init()
 	defer nc.Drain()
+	defer sentry.Flush(time.Second * 10)
 	// Close database at the end
 	db := common.GetDBConnection()
@@ -50,14 +91,24 @@ func main() {
 	// Stats loop
 	go func() {
+		statsProcessor := glue.StatsProcessor{
+			DB:         common.GetDBConnection(),
+			DockerSock: config.DockerSocket,
+			BindIPHTTP: config.AppsBindIPHTTP,
+			BindIPSSH:  config.AppsBindIPSSH,
+			AppsPath:   config.AppsPath,
+		}
 		for {
 			log.Println("Stats gathering started")
 			start := time.Now()
-			err := gatherStats()
+			err := statsProcessor.GatherStats()
 			if err != nil {
+				sentry.CaptureException(err)
 				log.Println("LOOP ERROR:", err.Error())
 			}
 			elapsed := time.Since(start)
+			elapsedMetric = int(elapsed)
 			log.Printf("Stats gathering elapsed time: %.2fs\n", elapsed.Seconds())
 			time.Sleep(300 * time.Second)
 		}
@@ -66,8 +117,9 @@ func main() {
 	// Node stats
 	go func() {
 		for {
-			err := node.Log()
+			err := nodeProcessor.Log()
 			if err != nil {
+				sentry.CaptureException(err)
 				log.Println("NODE PERFORMANCE LOG ERROR:", err.Error())
 			}
 			time.Sleep(5 * time.Minute)
@@ -79,6 +131,7 @@ func main() {
 	e.Renderer = t
 	e.Use(TokenMiddleware)
+	e.Use(sentryecho.New(sentryecho.Options{}))
 	// NATS handling
 	// admin.apps.ALIAS.events
@@ -121,6 +174,9 @@ func main() {
 	// Set password for the app user in the container
 	e.PUT("/v1/apps/:name/password", setPasswordHandler)
+	// Clear password for the app user in the container
+	e.DELETE("/v1/apps/:name/password", clearPasswordHandler)
 	// Copies body of the request into /srv/.ssh/authorized_keys
 	e.PUT("/v1/apps/:name/keys", setKeysHandler)
@@ -130,6 +186,9 @@ func main() {
 	// Rebuilds existing app, it keeps the data but creates the container again
 	e.PUT("/v1/apps/:name/rebuild", rebuildAppHandler)
+	// Save metadata about app
+	e.POST("/v1/apps/:name/metadata", saveMetadataHandler)
 	// Adds new label
 	e.POST("/v1/apps/:name/labels", addLabelHandler)
@@ -139,6 +198,13 @@ func main() {
 	// Delete one app
 	e.DELETE("/v1/apps/:name", deleteAppHandler)
+	// Snapshots
+	e.POST("/v1/apps/:name/snapshots", CreateSnapshotHandler)
+	e.POST("/v1/apps/:name/snapshots/restore/:snapshot", RestoreFromSnapshotHandler)
+	e.GET("/v1/apps/:name/snapshots", ListSnapshotsHandler)
+	e.GET("/v1/snapshots", ListAppsSnapshotsHandler)
+	e.GET("/v1/snapshots/by-label", ListSnapshotsByLabelHandler)
 	// Orphans returns directories in /srv that doesn't match any hosted application
 	e.GET("/v1/orphans", getOrphansHander)


@@ -1,23 +1,33 @@
 package node
 import (
-	"github.com/rosti-cz/node-api/common"
 	"github.com/shirou/gopsutil/cpu"
 	"github.com/shirou/gopsutil/disk"
 	"github.com/shirou/gopsutil/load"
 	"github.com/shirou/gopsutil/mem"
+	"github.com/jinzhu/gorm"
 )
 const history = 72 * 3600 / 300 // 3 days, one record every five minutes
-func init() {
-	db := common.GetDBConnection()
-	db.AutoMigrate(PerformanceLog{})
-}
+// Processor covers Node related methods for monitoring and calculating performance indexes.
+type Processor struct {
+	DB *gorm.DB
+}
+func (p *Processor) Init() {
+	p.DB.AutoMigrate(PerformanceLog{})
+}
+// GetNodeInfo returns information about this node
+func (p *Processor) GetNodeInfo() (*Node, error) {
+	node, err := p.Index()
+	return node, err
+}
 // Log creates a record for all important metrics used as
-func Log() error {
+func (p *Processor) Log() error {
 	performanceLog := PerformanceLog{}
 	// Load
@@ -45,8 +55,7 @@ func Log() error {
 	performanceLog.DiskSpaceUsage = diskUsage.UsedPercent / 100.0
 	// Save
-	db := common.GetDBConnection()
-	err = db.Create(&performanceLog).Error
+	err = p.DB.Create(&performanceLog).Error
 	if err != nil {
 		return err
 	}
@@ -54,13 +63,13 @@ func Log() error {
 	// and clean
 	// we have to use this stupid approach because DELETE doesn't support ORDER BY and LIMIT
 	toDeleteLogs := []PerformanceLog{}
-	err = db.Order("id DESC").Limit("99").Offset(history).Find(&toDeleteLogs).Error
+	err = p.DB.Order("id DESC").Limit("99").Offset(history).Find(&toDeleteLogs).Error
 	if err != nil {
 		return err
 	}
 	for _, toDeleteLog := range toDeleteLogs {
-		err = db.Delete(&toDeleteLog).Error
+		err = p.DB.Delete(&toDeleteLog).Error
 		if err != nil {
 			return err
 		}
@@ -71,7 +80,7 @@ func Log() error {
 // Index returns number from 0 to 1 where 0 means least loaded and 1 maximally loaded.
 // It uses history of last 72 hours
-func index() (*Node, error) {
+func (p *Processor) Index() (*Node, error) {
 	node := Node{
 		Index: 1.0,
 	}
@@ -82,11 +91,9 @@
 		return &node, err
 	}
-	db := common.GetDBConnection()
 	logs := []PerformanceLog{}
-	err = db.Find(&logs).Error
+	err = p.DB.Find(&logs).Error
 	if err != nil {
 		return &node, err
 	}

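The history constant and the delete loop above implement a fixed-size retention window: one PerformanceLog every five minutes, kept for three days. A small sketch of the arithmetic (trimLogs is an illustrative in-memory stand-in for the row-by-row DELETE, which is done one record at a time because DELETE lacks ORDER BY/LIMIT there):

```go
package main

import "fmt"

// Matches the constant above: 72 h of records, one every 300 s.
const history = 72 * 3600 / 300

// trimLogs mimics the cleanup: with records ordered newest-first, everything
// past the history offset is dropped.
func trimLogs(ids []int) []int {
	if len(ids) <= history {
		return ids
	}
	return ids[:history]
}

func main() {
	fmt.Println(history) // number of records retained
	ids := make([]int, 1000)
	fmt.Println(len(trimLogs(ids)))
}
```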

@@ -1,7 +1 @@
 package node
-// GetNodeInfo returns information about this node
-func GetNodeInfo() (*Node, error) {
-	node, err := index()
-	return node, err
-}

stats.go

@@ -1,106 +0,0 @@
package main
import (
"log"
"github.com/rosti-cz/node-api/apps"
"github.com/rosti-cz/node-api/common"
"github.com/rosti-cz/node-api/docker"
)
// updateUsage updates various resource usage of the container/app in the database
func updateUsage(name string) error {
processor := apps.AppsProcessor{
DB: common.GetDBConnection(),
}
app, err := processor.Get(name)
if err != nil {
return err
}
container := docker.Container{
App: app,
}
state, err := container.GetState()
if err != nil {
return err
}
err = processor.UpdateResources(
name,
state.State,
state.CPUUsage,
state.MemoryUsage,
state.DiskUsageBytes,
state.DiskUsageInodes,
)
return err
}
// Updates only container's state
func updateState(name string) error {
processor := apps.AppsProcessor{
DB: common.GetDBConnection(),
}
app, err := processor.Get(name)
if err != nil {
return err
}
container := docker.Container{
App: app,
}
state, err := container.Status()
if err != nil {
return err
}
err = processor.UpdateState(
app.Name,
state,
)
return err
}
// gatherStats loops over all applications and calls updateUsage to write various metric into the database.
func gatherStats() error {
processor := apps.AppsProcessor{
DB: common.GetDBConnection(),
}
appList, err := processor.List()
if err != nil {
return err
}
for _, app := range *appList {
err := updateUsage(app.Name)
if err != nil {
log.Println("STATS ERROR:", err.Error())
}
}
return nil
}
// gatherStates loops over all apps and updates their container state
func gatherStates() error {
processor := apps.AppsProcessor{
DB: common.GetDBConnection(),
}
appList, err := processor.List()
if err != nil {
return err
}
for _, app := range *appList {
err := updateState(app.Name)
if err != nil {
log.Println("STATE ERROR:", err.Error())
}
}
return nil
}


@@ -5,7 +5,7 @@ import (
 	"io"
 	"github.com/gobuffalo/packr"
-	"github.com/labstack/echo"
+	"github.com/labstack/echo/v4"
 )
// Template struct // Template struct


@@ -2,15 +2,10 @@ package main
 import (
 	"encoding/json"
-	"errors"
 	"fmt"
 	"log"
-	"time"
 	"github.com/nats-io/nats.go"
-	"github.com/rosti-cz/node-api/apps"
-	"github.com/rosti-cz/node-api/common"
-	"github.com/rosti-cz/node-api/docker"
 )
 func errorReplyFormater(m *nats.Msg, message string, err error) error {
@@ -54,43 +49,3 @@ func publish(appName string, state string, isErr bool) {
 		log.Println("ERROR: publish:", err.Error())
 	}
 }
-// waitForApp waits until app is ready or timeout is reached.
-// It's used in some async calls that need at least part of the
-// environment prepared.
-func waitForApp(appName string) error {
-	processor := apps.AppsProcessor{
-		DB: common.GetDBConnection(),
-	}
-	sleepFor := 5 * time.Second
-	loops := 6
-	for i := 0; i < loops; i++ {
-		err := updateState(appName)
-		if err != nil {
-			time.Sleep(sleepFor)
-			continue
-		}
-		app, err := processor.Get(appName)
-		if err != nil {
-			return err
-		}
-		container := docker.Container{
-			App: app,
-		}
-		status, err := container.Status()
-		if status == "running" {
-			return nil
-		}
-		time.Sleep(sleepFor)
-		continue
-	}
-	return errors.New("timeout reached")
-}


@@ -1,9 +1,8 @@
 package main
-import "encoding/json"
-// Path where authorized keys are
-const sshPubKeysLocation = "/srv/.ssh/authorized_keys"
+import (
+	"encoding/json"
+)
 // RequestMessage message
 type RequestMessage struct {
@@ -57,6 +56,17 @@ type QuickServices struct {
 	PHP       bool `json:"php"`
 	Ruby      bool `json:"ruby"`
 	Deno      bool `json:"deno"`
+	Bun       bool `json:"bun"`
 	Memcached bool `json:"memcached"`
 	Redis     bool `json:"redis"`
 }
+// Set technology
+type Technology struct {
+	Name    string
+	Version string
+}
+type createSnapshotBody struct {
+	Labels []string `json:"labels"`
+}
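createSnapshotBody defines the wire format that CreateSnapshotHandler binds. A small sketch of the JSON a client would POST to /v1/apps/:name/snapshots (encodeSnapshotBody and the label values are illustrative, not part of the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Same shape as createSnapshotBody above, shown standalone to document the
// request body.
type createSnapshotBody struct {
	Labels []string `json:"labels"`
}

// encodeSnapshotBody marshals the labels into the JSON payload the handler
// expects.
func encodeSnapshotBody(labels []string) (string, error) {
	b, err := json.Marshal(createSnapshotBody{Labels: labels})
	return string(b), err
}

func main() {
	s, _ := encodeSnapshotBody([]string{"daily", "pre-deploy"})
	fmt.Println(s)
}
```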