* Initial work to support curv
* Correct the initial code file location
* Preview and STL MVP working
* Prepare changes for review and preview build
* Run curv inside of /tmp
When exporting an STL, curv writes temporary files, which is not allowed
when deployed to AWS unless they are written under /tmp.
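A minimal sketch of that constraint, assuming curv is invoked from Node with child_process (paths and callback shape are hypothetical, not the actual runner):

```js
const { execFile } = require('child_process');

// Lambda's filesystem is read-only except for /tmp, so run curv with its
// working directory set there and keep inputs and outputs under /tmp too.
execFile(
  'curv',
  ['-o', '/tmp/out.stl', '/tmp/model.curv'],
  { cwd: '/tmp' },
  (err) => {
    if (err) throw err;
    // read /tmp/out.stl here and hand it back from the lambda
  }
);
```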
* Lock in specific curv commit for reproducible builds
see: https://discord.com/channels/412182089279209474/886321278821216277/912507472441401385
* Add curv to backend schema
* Frontend changes to accommodate curv deploy
* Use vcount instead of vsize, as it's independent of geometry size.
This is good for the CadHub use case, where we don't know anything about
the user's project.
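A small illustration of the swap, assuming curv is driven from Node as above (values are illustrative; `vsize` and `vcount` are the curv export options named in this commit):

```js
// 'vsize' sets the voxel edge length, so the total number of voxels (and
// with it the lambda's memory use) grows with the model's bounding box.
// 'vcount' fixes the total voxel count up front, which bounds memory for
// any user project. The numbers here are hypothetical, not tuned values.
const oldArgs = ['-o', '/tmp/out.stl', '-O', 'vsize=0.1', '/tmp/model.curv'];
const newArgs = ['-o', '/tmp/out.stl', '-O', 'vcount=100000', '/tmp/model.curv'];
```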
* Final tweaks for deploy
The virtual screen size does matter, and curv is a little more
memory-hungry than the other functions.
* Format project
Co-authored-by: lf94 <inbox@leefallat.ca>
Co-authored-by: Kurt Hutten <k.hutten@protonmail.ch>
* Switched to Miniconda image
* Update cad endpoint URL
and some minor tweaks
Co-authored-by: Jeremy Wright <wrightjmf@gmail.com>
* Rough changes to make the CadQuery integration work with the customizer
* Tweak runCQ
* Switched to Anaconda
* Cleaned up code
* Update CadHub after Anaconda
Related to #547
* Add final tweaks to CQ customizer
* Separated out customizer.json from params.json
* Changes after discussing CadHub integration
* linting runCQ
Co-authored-by: Kurt Hutten <k.hutten@protonmail.ch>
The STLs from CadQuery and OpenSCAD are not compressed, so we're
throwing away bandwidth and taking a performance hit by not gzipping.
Gzip for S3 basically means the asset needs to be gzipped before upload
and then have
'Content-Type': 'text/stl'
'Content-Encoding': 'gzip'
set.
https://stackoverflow.com/questions/8080824/how-to-serve-gzipped-assets-from-amazon-s3
The obvious part that needs to change is putObject in
app/api/src/docker/common/utils.js, but there might be a few more
nuances.
Resolves #391
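A sketch of that putObject change (the bucket name is hypothetical, and the real helper may differ):

```js
const zlib = require('zlib');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Gzip the STL before upload and set the encoding headers so browsers
// transparently decompress it when the asset is served from S3.
async function uploadStl(key, stlBuffer) {
  return s3
    .putObject({
      Bucket: 'cadhub-assets', // hypothetical
      Key: key,
      Body: zlib.gzipSync(stlBuffer),
      ContentType: 'text/stl',
      ContentEncoding: 'gzip',
    })
    .promise();
}
```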
I've been able to get a proof of concept of downloading an OpenSCAD
library when the Docker image builds:
https://twitter.com/IrevDev/status/1400785325509660678
Since it's experimental at the moment, I'll leave it with just the one library for now.
I've also got local dev working again for the cad lambdas.
Resolves #338
Not only does the header need to be added, but the signed URL needs to
have its expiry rounded so that the returned URL is the same for a given
window, say 10 minutes.
I followed this: https://advancedweb.hu/cacheable-s3-signed-urls/
Basically, because we're caching the assets themselves, if a user asks
for a part that already exists we'll return a URL for the existing part
instead of regenerating it. However, if they were the one who generated
the part less than 10 minutes ago, they'll still have to download the
asset again. This way it will save us costs and be quicker for them.
Resolves #334
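A sketch of the rounding trick from that article, using the v2 SDK's systemClockOffset so every request in the same 10-minute window signs with the same timestamp (bucket and key layout are hypothetical):

```js
const AWS = require('aws-sdk');

const WINDOW_MS = 10 * 60 * 1000; // the 10-minute window mentioned above

function signedUrlFor(key) {
  // Round "now" down to the window boundary; every request inside the same
  // window then signs with an identical X-Amz-Date, so the resulting URL is
  // identical too and browsers can cache the asset against it.
  const truncated = Math.floor(Date.now() / WINDOW_MS) * WINDOW_MS;
  const s3 = new AWS.S3({
    signatureVersion: 'v4',
    systemClockOffset: truncated - Date.now(), // shift the SDK's signing clock
  });
  return s3.getSignedUrl('getObject', {
    Bucket: 'cadhub-assets', // hypothetical
    Key: key,
    Expires: 2 * (WINDOW_MS / 1000), // must outlive the current window
  });
}
```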
Doing so has a number of benefits:
- Overcomes the 10MB limit of the API Gateway the lambdas have to go
through
- By storing the key as the hash of the code we can return previously
generated assets, i.e. caching (see the sketch after this list)
- Cost: transferring assets into the bucket within the AWS ecosystem
is faster than returning them, and therefore the lambdas execute for less time
- Sets us up for the future, as when generating artifacts for repos when
there is a change to master etc. we'll want to store those assets somewhere,
and S3 is an obvious choice
- Solved a weird CORS issue where I couldn't get CORS working with
binaryMediaTypes enabled; we don't need binary types when dumping into S3
Resolves #316
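A minimal sketch of the hash-keyed caching idea (the bucket name and key layout are hypothetical):

```js
const crypto = require('crypto');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const Bucket = 'cadhub-assets'; // hypothetical

// Identical source code hashes to an identical key, so a cheap lookup tells
// us whether the asset was already generated and can be served as-is.
const assetKeyFor = (code) =>
  `stl/${crypto.createHash('sha256').update(code).digest('hex')}.stl`;

async function alreadyGenerated(Key) {
  try {
    await s3.headObject({ Bucket, Key }).promise();
    return true; // object exists, skip regeneration
  } catch (e) {
    if (e.code === 'NotFound') return false; // not generated yet
    throw e;
  }
}
```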