tags : #area/watch #serverless #cloud
source : https://www.lastweekinaws.com/blog/aws-is-asleep-at-the-lambda-wheel/
date : 2023-03-07
It scales down to zero, you only pay for what you use, it’s massively event driven, and at least in theory it's fully managed (AWS manages the care and feeding of the service so the only thing you have to worry about is your own business logic.)
The same article points out that you depend on the provider to keep runtimes up to date: here AWS is not updating Python (still on 3.9 even though it is outdated). Likewise, Amazon Linux 2022 never shipped and became Amazon Linux 2023 (still not released).
source : Ars Technica
However, the “serverless” bit of “serverless” here does mean that we generally don’t need to care about those servers—we are approaching the problem through a few layers of abstraction. Our requirements here are to run some applications, and rather than addressing those requirements first in terms of infrastructure (a “bottom-up” approach), with a serverless approach, we address those requirements in terms of the applications themselves and the amount of compute resources they take up.
The code is in Python; note that Bicep (which seems to be Azure's equivalent of Terraform) is used for the deployment.
https://techcommunity.microsoft.com/t5/azure-developer-community-blog/serverless-url-shortener/ba-p/3754120
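For reference, a minimal sketch of what such an HTTP-triggered redirect function could look like with the azure-functions Python library; the in-memory lookup table and the route parameter name are illustrative assumptions, not taken from the linked post, which presumably backs the mapping with a real data store.

```python
import azure.functions as func

# Illustrative in-memory mapping; a real shortener would use a managed
# store (e.g. Azure Table Storage or Cosmos DB) instead.
SHORT_URLS = {"abc123": "https://example.com/some/long/path"}

def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function: resolve a short code and redirect."""
    code = req.route_params.get("code")   # assumes a {code} route segment
    target = SHORT_URLS.get(code)
    if target is None:
        return func.HttpResponse("Unknown short link", status_code=404)
    # 302 redirect to the stored long URL
    return func.HttpResponse(status_code=302, headers={"Location": target})
```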

PaaS shares a lot with the serverless paradigm, such as no provisioning of machines and autoscaling. However, the unit of computation is much smaller in the latter.
Serverless computing is also job-oriented rather than application-oriented.
AppOps or NoOps
This means that AppOps are on call for the services they have developed. In order for this to work, the infrastructure used needs to support service- or app-level monitoring of metrics as well as alerting if the service doesn’t perform as expected.
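A minimal sketch of what that per-service alerting could look like on AWS, assuming a CloudWatch alarm on a single function's Errors metric created with boto3; the function name and SNS topic below are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholders: substitute the team's real function and on-call topic.
FUNCTION_NAME = "my-service-handler"
ONCALL_TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:appops-oncall"

# Alarm on the per-function Errors metric so the owning AppOps team is
# notified as soon as their service starts failing.
cloudwatch.put_metric_alarm(
    AlarmName=f"{FUNCTION_NAME}-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=300,                # 5-minute evaluation windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ONCALL_TOPIC_ARN],
)
```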
Further, there’s another role necessary: a group of people called the infrastructure team. This team manages the overall infrastructure, owns global policies, and advises the AppOps.
A sometimes-used alternative label for the serverless paradigm is “NoOps”.
There are a number of responsibilities found in traditional admin roles that are not applicable in a serverless setup:
Serverless computing is potentially a great fit for use cases that are latency tolerant with a relatively low access frequency. The higher the access frequency and the higher the expectations around latency, the more it usually pays off to have a dedicated machine or container processing the requests.
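A back-of-the-envelope sketch of that trade-off; all prices and sizing figures below are illustrative assumptions (not from the source), so substitute your provider's current rates.

```python
# Illustrative assumptions only; replace with current provider pricing.
PRICE_PER_MILLION_REQUESTS = 0.20      # USD, function invocations
PRICE_PER_GB_SECOND = 0.0000166667     # USD, function compute time
INSTANCE_PRICE_PER_HOUR = 0.02         # USD, small always-on instance

MEMORY_GB = 0.128    # memory allocated to the function
DURATION_S = 0.1     # average execution time per request

def monthly_function_cost(requests_per_month: int) -> float:
    request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests_per_month * MEMORY_GB * DURATION_S * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

def monthly_instance_cost() -> float:
    return INSTANCE_PRICE_PER_HOUR * 24 * 30

for reqs in (100_000, 10_000_000, 100_000_000):
    print(f"{reqs:>11,} req/month: "
          f"function ${monthly_function_cost(reqs):7.2f} "
          f"vs dedicated instance ${monthly_instance_cost():.2f}")
```

With these assumed numbers the pay-per-use model is far cheaper at low traffic and loses once request volume gets high, which is the break-even logic behind the statement above.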
Typical application areas of serverless computing are:
• Infrastructure and glue tasks, such as reacting to an event triggered from cloud storage or a database (see the sketch after this list)
• Mobile and IoT apps to process events, such as user check-in or aggregation functions
• Image processing, for example to create preview versions of an image or extract key frames from a video
• Data processing, like simple extract, transform, load (ETL) pipelines to preprocess datasets
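As a concrete sketch of the glue-task bullet, a hypothetical AWS Lambda handler reacting to an S3 “object created” event; the staging bucket and the downstream copy step are made up for illustration.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Glue task: react to S3 'object created' events and stage each object
    for further processing (the copy target is an illustrative placeholder)."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket="processing-staging-bucket",   # placeholder bucket name
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"processed": len(records)}
```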
While the serverless paradigm without doubt has its use cases and can help simplify certain workloads, there are naturally limitations and challenges. From most pressing to mildly annoying, these include:
• Stateful services are best implemented outside of serverless functions. Integration points with other platform services such as databases, message queues, or storage are therefore extremely important (see the sketch after this list).
• Long-running jobs (in the high minutes to hours range) are usually not a good fit; typically you’ll find timeouts in the (high) seconds range.
• Logging and monitoring are a challenge.
• Local development can be challenging: usually developers need to develop and test within the online environment.
• Language support is limited: most serverless offerings support only a handful of programming languages and runtime versions.
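To make the first limitation concrete, a minimal sketch of keeping state out of the function by pushing it to a managed store, here DynamoDB via boto3; the table, key, and attribute names are illustrative.

```python
import boto3

# State lives in a managed store, not inside the function instance.
# Table and attribute names are illustrative placeholders.
table = boto3.resource("dynamodb").Table("short-link-hits")

def handler(event, context):
    """Stateless handler: every invocation reads/updates shared state in DynamoDB."""
    code = event["code"]
    result = table.update_item(
        Key={"code": code},
        UpdateExpression="ADD hits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"code": code, "hits": int(result["Attributes"]["hits"])}
```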