Security vulnerability detection for IaC configuration files, using an AST-matching technique to provide faster and more accurate results.
The research configuration file is packaged with the extension under the `example/` folder.
To open the example research script, run the "IaC Research File" command from the VS Code command palette.
(CTRL + SHIFT + P on Windows, CMD + SHIFT + P on macOS)
You can trigger a scan from the command palette by searching for "IaC Configuration File Scan".
(CTRL + SHIFT + P on Windows, CMD + SHIFT + P on macOS) => Search "IaC Configuration File Scan"
IaC configuration file vulnerability scan for:
✅ Dockerfiles
✅ Compose files (compose.yaml / compose.yml)
✅ Terraform files (.tf)
An internet connection is required, as the application logic sends queries to a remote cloud service.
| Vulnerability ID | Vulnerability Code | Explanation | Consequence |
| --- | --- | --- | --- |
| 1 | oracle_compute_no-public-ip | An Oracle Compute instance requests an IP reservation from a public pool. The compute instance can be reached from outside; you might want to consider using a non-public IP. | Requesting an IP reservation from a public pool for a compute instance in Terraform can expose the instance to potential external threats, as it becomes directly reachable from the internet. This configuration may increase the risk of attacks such as port scanning, brute force, and denial-of-service (DoS). To enhance security, it is advisable to consider using private IP addresses and controlling access through defined entry points like a VPN or a jump host, especially for instances that handle sensitive data or critical operations. |
| 2 | aws_api_gateway_use-secure-tls-policy | You should not use outdated/insecure TLS versions for encryption. You should be using TLS v1.2+. | Using outdated or insecure TLS versions for encryption exposes your system to vulnerabilities and potential attacks. Ensuring the use of TLS v1.2 or higher enhances security by providing stronger encryption and reducing the risk of data breaches. |
| 3 | aws_autoscaling_no-public-ip | You should limit the provision of public IP addresses for resources. Resources should not be exposed on the public internet, but should have access limited to consumers required for the function of your application. | Provisioning public IP addresses for resources unnecessarily exposes them to potential security threats and unauthorized access. Limiting access to only essential consumers enhances security by reducing the attack surface and protecting sensitive information. |
| 4 | aws_autoscaling_enable-at-rest-encryption | Block devices should be encrypted to ensure sensitive data is held securely at rest. | If block devices are not encrypted, sensitive data stored on them is vulnerable to unauthorized access and potential data breaches. This lack of encryption compromises the security and confidentiality of the data at rest. |
| 5 | aws_cloudfront_enforce-https | You should use HTTPS, which is HTTP over an encrypted (TLS) connection, meaning eavesdroppers cannot read your traffic. | CloudFront is available through an unencrypted connection. |
| 6 | aws_cloudfront_use-secure-tls-policy | You should not use outdated/insecure TLS versions for encryption. You should be using TLS v1.2+. | Using outdated or insecure TLS versions for encryption exposes your system to vulnerabilities and potential attacks. Ensuring the use of TLS v1.2 or higher enhances security by providing stronger encryption and reducing the risk of data breaches. |
| 7 | aws_documentdb_enable-log-export | DocumentDB does not have auditing by default. To ensure that you are able to accurately audit the usage of your DocumentDB cluster you should enable export logs. | Without enabling export logs for DocumentDB, it is difficult to accurately audit the usage and activity within the cluster. This lack of auditing capability increases the risk of undetected unauthorized access and complicates the investigation of security incidents. |
| 8 | aws_documentdb_enable-storage-encryption | Encryption of the underlying storage used by DocumentDB ensures that if there is a compromise of the disks, the data is still protected. | Without encryption of the underlying storage used by DocumentDB, data is vulnerable if the disks are compromised. This lack of encryption can lead to unauthorized access and data breaches, compromising the security and confidentiality of the stored information. |
| 9 | aws_ec2_enable-volume-encryption | By enabling encryption on EBS volumes you protect the volume, the disk I/O and any derived snapshots from compromise if intercepted. | If encryption is not enabled on EBS volumes, the volume data, disk I/O, and any derived snapshots are vulnerable to interception and unauthorized access. This lack of encryption increases the risk of data breaches and compromises the confidentiality and integrity of stored information. |
| 10 | aws_ec2_enable-launch-config-at-rest-encryption | Block devices should be encrypted to ensure sensitive data is held securely at rest. | If block devices are not encrypted, sensitive data stored on them is vulnerable to unauthorized access and potential data breaches. This lack of encryption compromises the security and confidentiality of the data at rest. |
| 11 | aws_ec2_no-default-vpc | The default VPC lacks many of the critical security features that a standard VPC provides. New resources should not be created in the default VPC, and it should not be present in your Terraform configuration. | Creating resources in the default VPC, which lacks critical security features of a standard VPC, exposes them to potential security vulnerabilities. This configuration increases the risk of unauthorized access and data breaches, compromising the overall security of the infrastructure. |
| 12 | aws_ec2_no-excessive-port-access | Ensure access to specific required ports is allowed, and nothing else. | If access is not restricted to specific required ports, it increases the risk of unauthorized access and potential security vulnerabilities. Ensuring only necessary ports are open minimizes the attack surface and enhances the security of the system. |
| 13 | aws_ec2_no-public-egress-sgr | Opening up ports to connect out to the public internet is generally to be avoided. You should restrict access to IP addresses or ranges that are explicitly required where possible. | If ports are opened to connect to the public internet without restrictions, it exposes your resources to potential security threats and unauthorized access. Restricting access to specific IP addresses or ranges reduces this risk and enhances overall security by limiting connectivity to trusted sources only. |
| 14 | aws_ec2_no-public-ingress-acl | Opening up ACLs to the public internet is potentially dangerous. You should restrict access to IP addresses or ranges that explicitly require it where possible. | If ACLs are opened to the public internet, it significantly increases the risk of unauthorized access and potential attacks. Restricting access to specific IP addresses or ranges minimizes this risk, ensuring that only trusted sources can interact with your resources. |
| 15 | aws_ec2_no-public-ip-subnet | You should limit the provision of public IP addresses for resources. Resources should not be exposed on the public internet, but should have access limited to consumers required for the function of your application. | If resources are provisioned with public IP addresses unnecessarily, they become exposed to the internet, increasing the risk of unauthorized access and attacks. Limiting access to only necessary consumers enhances security by reducing the attack surface and protecting sensitive information. |
| 16 | aws_ecr_enable-image-scans | Repository image scans should be enabled to ensure vulnerable software can be discovered and remediated as soon as possible. | The ability to scan images is not being used, and vulnerabilities will not be highlighted. |
| 17 | aws_ecr_enforce-immutable-repository | ECR images should be set to IMMUTABLE to prevent code injection through image mutation. | Image tags could be overwritten with compromised images. |
| 18 | aws_efs_enable-at-rest-encryption | If your organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest, we recommend creating a file system that is encrypted at rest, and mounting your file system using encryption of data in transit. | Data can be read from the EFS if compromised. |
| 19 | aws_eks_no-public-cluster-access | EKS clusters are available publicly by default; this should be explicitly disabled in the `vpc_config` of the EKS cluster resource. | If EKS clusters are publicly accessible by default, they are exposed to potential external threats and unauthorized access. Disabling public access in the `vpc_config` of the EKS cluster resource helps mitigate the risk of security breaches and protects the cluster's integrity. |
| 20 | aws_elasticsearch_enable-domain-encryption | You should ensure your Elasticsearch data is encrypted at rest to help prevent sensitive information from being read by unauthorised users. | If Elasticsearch data is not encrypted at rest, sensitive information stored in the database is vulnerable to unauthorized access. This increases the risk of data breaches and exposure of confidential information to unauthorized users. |
| 21 | aws_elasticsearch_enable-in-transit-encryption | Traffic flowing between Elasticsearch nodes should be encrypted to ensure sensitive data is kept private. | In-transit data between nodes could be read if intercepted. |
| 22 | aws_elasticsearch_enforce-https | Plain HTTP is unencrypted and human-readable. This means that if a malicious actor was to eavesdrop on your connection, they would be able to see all of your data flowing back and forth. | HTTP traffic can be intercepted and the contents read. |
| 23 | aws_elasticsearch_use-secure-tls-policy | You should not use outdated/insecure TLS versions for encryption. You should be using TLS v1.2+. | Outdated SSL policies increase exposure to known vulnerabilities. |
| 24 | aws_elasticache_enable-at-rest-encryption | Data stored within an ElastiCache replication node should be encrypted to ensure sensitive data is kept private. | If data stored within an ElastiCache replication node is not encrypted, it is vulnerable to unauthorized access and potential data breaches. This lack of encryption compromises the privacy and security of sensitive information, increasing the risk of data exposure. |
| 25 | aws_elasticache_enable-in-transit-encryption | Traffic flowing between ElastiCache replication nodes should be encrypted to ensure sensitive data is kept private. | If traffic between ElastiCache replication nodes is not encrypted, sensitive data may be exposed to interception and unauthorized access during transmission. This lack of encryption compromises data privacy and security, increasing the risk of data breaches and unauthorized data manipulation. |
| 26 | alb-not-public | There are many scenarios in which you would want to expose a load balancer to the wider internet, but this check exists as a warning to prevent accidental exposure of internal assets. You should confirm that this resource genuinely needs to be exposed publicly. | Exposing a load balancer to the internet without ensuring it is necessary can lead to unintended access to internal assets, increasing the risk of unauthorized access and potential data breaches. This exposure can compromise the security and integrity of the internal network and associated resources. |
| 27 | elb_drop-invalid-headers | By setting `drop_invalid_header_fields` to true, anything that does not conform to well-known, defined headers will be removed by the load balancer. | Invalid headers passed through to the target of the load balancer may exploit vulnerabilities. |
| 28 | elb_http-not-used | You should use HTTPS, which is HTTP over an encrypted (TLS) connection, meaning eavesdroppers cannot read your traffic. | Plain HTTP is unencrypted and human-readable. This means that if a malicious actor was to eavesdrop on your connection, they would be able to see all of your data flowing back and forth. |
| 29 | aws_iam_no-root-access-keys | CIS recommends that all access keys associated with the root user be removed. Removing access keys associated with the root user limits the vectors by which the account can be compromised. It also encourages the creation and use of role-based accounts that are least privileged. | Compromise of the root account compromises the entire AWS account and all resources within it. |
| 30 | kms_auto-rotate-keys | You should configure your KMS keys to auto-rotate to maintain security and defend against compromise. | Long-lived KMS keys increase the attack surface when compromised. |
| 31 | mq_enable-audit-logging | Logging should be enabled to allow tracing of issues and activity to be investigated more fully. Logs provide additional information and context which is often invaluable during an investigation. | Without audit logging it is difficult to trace activity in the MQ broker. |
| 32 | mq_enable-general-logging | Logging should be enabled to allow tracing of issues and activity to be investigated more fully. Logs provide additional information and context which is often invaluable during an investigation. | Without logging enabled, tracing issues and investigating activities within the system is challenging, leading to delayed or incomplete incident responses. The absence of detailed logs and historical data can hinder effective troubleshooting and security investigations, increasing the risk of undetected breaches and operational disruptions. |
| 33 | mq_no-public-access | Public access to the MQ broker should be disabled, allowing routes only to applications that require access. | If public access to the MQ broker is not disabled, it exposes the broker to potential unauthorized access and attacks. Limiting access only to necessary applications enhances security by reducing the attack surface and preventing potential data breaches and unauthorized data transmissions. |
| 34 | msk_enable-in-transit-encryption | Encryption should be forced for Kafka clusters, including for communication between nodes. This ensures sensitive data is kept private. | If encryption is not enforced for Kafka clusters, including communication between nodes, sensitive data can be exposed to interception and unauthorized access. This lack of encryption compromises data privacy and security, increasing the risk of data breaches and unauthorized data manipulation during transmission. |
| 35 | neptune_enable-storage-encryption | Encryption of Neptune storage ensures that if there is a compromise of the disks, the data is still protected. | Without encryption of Neptune storage, data is vulnerable if the underlying disks are compromised, leading to potential unauthorized access and data breaches. Encryption ensures that even if the disks are accessed by unauthorized parties, the data remains protected and unreadable. |
| 36 | neptune_encryption-customer-key | Encryption using AWS keys provides protection for your Neptune underlying storage. To increase control of the encryption and manage factors like rotation, use customer managed keys. | If Neptune databases use only AWS-managed keys for encryption, there is less control over key management practices, such as rotation and access policies. This reliance can lead to potential security risks and compliance issues, as customer managed keys offer enhanced control and customization to meet specific security requirements. |
| 37 | rds_enable-performance-insights | Enabling Performance Insights allows for greater depth in monitoring data. For example, information about active sessions could help diagnose a compromise or assist in an investigation. | Without adequate monitoring, performance-related issues may go unreported and potentially lead to compromise. |
| 38 | rds_no-classic-resources | AWS Classic resources run in a shared environment with infrastructure owned by other AWS customers. You should run resources in a VPC instead. | Running AWS resources in the Classic environment exposes them to potential security risks due to the shared nature of the infrastructure. By not utilizing a Virtual Private Cloud (VPC), resources are more vulnerable to attacks and unauthorized access from other AWS customers, compromising the isolation and security that a VPC offers. |
| 39 | rds_no-public-db-access | Database resources should not be publicly available. You should limit all access to the minimum that is required for your application to function. | If database resources are publicly accessible, they are vulnerable to a wider range of attacks, such as SQL injection, brute force, and denial of service (DoS). This configuration increases the likelihood of unauthorized access, data breaches, and potential data loss, undermining the security of sensitive information. |
| 40 | s3_ignore-public-acls | S3 buckets should ignore public ACLs on buckets and any objects they contain. By ignoring rather than blocking, PUT calls with public ACLs will still be applied but the ACL will be ignored. | If S3 buckets are configured to ignore rather than block public ACLs, PUT calls containing public ACLs will be processed but the ACL settings will be disregarded. This approach prevents the accidental public exposure of buckets and objects, reducing the risk of unauthorized access and data leaks while maintaining compatibility with existing application workflows that might unknowingly include public ACLs. |
| 41 | s3_no-public-access-with-acl | Buckets should not have ACLs that allow public access. | Public access to the bucket can lead to data leakage. |
| 42 | s3_no-public-buckets | S3 buckets should restrict public policies for the bucket. By enabling `restrict_public_buckets`, only the bucket owner and AWS services can access the bucket if it has a public policy. | If S3 buckets do not restrict public policies using the `restrict_public_buckets` option, they can be accessible to unauthorized external users. This configuration increases the risk of unintended data exposure and potential data leaks, making sensitive information accessible to the public. |
| 43 | ssm_avoid-leaks-via-http | The `data.http` block can be used to send secret data outside of the organisation. | Using the `data.http` block in Terraform to send secret data can lead to unintended data exposure if the destination is not secure. This practice increases the risk of sensitive information being intercepted or misused, potentially leading to data breaches and violations of privacy regulations. |
| 44 | ecs_enable-container-insight | CloudWatch Container Insights provides more metrics and logs for container-based applications and microservices. | Not all metrics and logs may be gathered for containers when Container Insights isn't enabled. |
| 45 | api_gateway_enable-cache-encryption | Method cache encryption ensures that any sensitive data in the cache is not vulnerable to compromise in the event of interception. | Data stored in the cache that is unencrypted may be vulnerable to compromise. |
| 46 | api_gateway_enable-tracing | X-Ray tracing enables end-to-end debugging and analysis of all API Gateway HTTP requests. | Without full tracing enabled it is difficult to trace the flow of logs. |
| 47 | api_gateway_no-public-access | API Gateway methods should generally be protected by authorization or an API key. OPTIONS verb calls can be used without authorization. | If API Gateway methods are not protected by authorization or an API key, sensitive data accessed through these APIs can be exposed to unauthorized users. This exposure significantly increases the risk of data breaches and unauthorized operations, undermining the security and integrity of the application's data flows. |
| 48 |  |  |  |
| 49 | athena_enable-at-rest-encryption | Athena databases and workspace result sets should be encrypted at rest. These databases and query sets are generally derived from data in S3 buckets and should have the same level of at-rest protection. | If Athena databases and workspace result sets are not encrypted at rest, sensitive data derived from S3 buckets is vulnerable to unauthorized access and potential exposure. This lack of encryption could lead to data breaches, compromising the confidentiality and integrity of the data used in analytical processes. |
| 50 | autoscaling_enforce-http-token-imds | IMDS v2 (Instance Metadata Service) introduced session authentication tokens which improve security when talking to IMDS. By default the `aws_instance` resource sets IMDS session auth tokens to be optional. To fully protect IMDS you need to enable session tokens by using the `metadata_options` block with its `http_tokens` variable set to `required`. | The instance metadata service can be interacted with freely. |
| 51 | cloudfront_enable-logging | You should configure CloudFront Access Logging to create log files that contain detailed information about every user request that CloudFront receives. | Logging provides vital information about access and usage. |
| 52 | cloudtrail_enable-all-regions | When creating CloudTrail in the AWS Management Console, the trail is configured by default to be multi-region; this isn't the case with the Terraform resource. CloudTrail should cover the full AWS account to ensure you can track changes in regions you are not actively operating in. | If a CloudTrail is not configured to cover all regions, changes or activities in non-monitored regions might go undetected. This partial coverage can lead to significant security risks, as unauthorized actions or breaches in these unmonitored regions can occur without being logged, potentially resulting in unnoticed security vulnerabilities or compliance issues. |
| 53 | cloudtrail_enable-at-rest-encryption | CloudTrail logs should be encrypted at rest to secure the sensitive data. CloudTrail logs record all activity that occurs in the account through API calls and would be one of the first places to look when reacting to a breach. | Data can be freely read if compromised. |
| 54 | cloudtrail_enable-log-validation | Log validation should be activated on CloudTrail logs to prevent tampering with the underlying data in the S3 bucket. It is feasible that a rogue actor compromising an AWS account might want to modify the log data to remove traces of their actions. | If log validation is not enabled on CloudTrail logs, the integrity of the log data stored in S3 buckets cannot be guaranteed. This lack of validation increases the risk that a malicious actor could tamper with the logs to obscure their activities, hindering forensic investigations and complicating efforts to detect and respond to security incidents. |
| 55 | cloudtrail_ensure-cloudwatch-integration | CloudTrail is a web service that records AWS API calls made in a given account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. | If CloudTrail is not enabled in an AWS account, tracking and logging API calls across the infrastructure is impossible. This omission can lead to significant gaps in security monitoring and auditing, making it difficult to detect and respond to unauthorized access or changes in the environment, potentially leading to unnoticed breaches or compliance issues. |
| 56 | cloudtrail_require-bucket-access-logging | Amazon S3 bucket access logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. | If S3 bucket access logging is not enabled on the CloudTrail S3 bucket, there may be no detailed records of access events, such as reads and writes. This lack of logging data hampers the ability to perform thorough security audits and incident analysis, potentially delaying the detection and response to unauthorized access or data breaches. |
| 57 | cloudwatch_log-group-customer-key | CloudWatch log groups are encrypted by default; however, to get the full benefit of controlling key rotation and other KMS aspects, a KMS CMK should be used. | Log data may be leaked if the logs are compromised, and there is no auditing of who has viewed the logs. |
| 58 | codebuild_enable-encryption | All artifacts produced by your CodeBuild project pipeline should always be encrypted. | If artifacts produced by a CodeBuild project pipeline are not encrypted, sensitive data contained within those artifacts could be exposed if unauthorized access occurs. This lack of encryption increases the risk of data breaches and intellectual property theft. |
| 59 | config_aggregate-all-regions | The configuration aggregator should be configured with `all_regions` for the source. This will help limit the risk of any unmonitored configuration in regions that are thought to be unused. | If the configuration aggregator in AWS is not set to include all regions, configurations in unmonitored regions may go unchecked, increasing the risk of unnoticed security misconfigurations or unauthorized activities. This can lead to vulnerabilities and potential breaches in supposedly inactive or less monitored regions. |
| 60 | documentdb_encryption-customer-key | Encryption using AWS keys provides protection for your DocumentDB underlying storage. To increase control of the encryption and manage factors like rotation, use customer managed keys. | Using only AWS-managed keys for encrypting DocumentDB means less control over encryption specifics, such as key rotation and access policies. This limitation can lead to compliance issues and reduced security, as customer managed keys offer enhanced capabilities to tailor encryption practices to specific security requirements. |
| 61 | dynamodb_enable-recovery | DynamoDB tables should be protected against accidental or malicious write/delete actions by ensuring that there is adequate protection. | Accidental or malicious writes and deletes can't be rolled back. |
| 62 | dynamodb_table-customer-key | DynamoDB tables are encrypted by default using AWS managed encryption keys. To increase control of the encryption and control the management of factors like key rotation, use a Customer Managed Key. | If DynamoDB tables use only AWS-managed default encryption keys rather than Customer Managed Keys (CMKs), users have less control over encryption practices like key rotation and auditing. This can lead to potential security risks and compliance issues, as relying solely on AWS-managed keys may not meet certain organizational or regulatory encryption standards. |
| 63 | ebs_encryption-customer-key | Encryption using AWS keys provides protection for your EBS volume. To increase control of the encryption and manage factors like rotation, use customer managed keys. | Using AWS managed keys does not allow for fine-grained control. |
| 64 | ec2_add-description-to-security-group | Security group rules should include a description for auditing purposes. This simplifies auditing, debugging, and managing security groups. | Descriptions provide context for the reasons behind firewall rules. Not having them complicates auditing and understanding the purpose and appropriateness of security configurations. This can lead to difficulties in managing and identifying outdated or unnecessary rules, increasing the risk of security gaps and misconfigurations. |
| 65 | ec2_add-description-to-security-group-rule | Security group rules should include a description for auditing purposes. This simplifies auditing, debugging, and managing security groups. | Descriptions provide context for the reasons behind firewall rules. Not having them complicates auditing and understanding the purpose and appropriateness of security configurations. This can lead to difficulties in managing and identifying outdated or unnecessary rules, increasing the risk of security gaps and misconfigurations. |
| 66 | ec2_enforce-http-token-imds | IMDS v2 (Instance Metadata Service) introduced session authentication tokens which improve security when talking to IMDS. By default the `aws_instance` resource sets IMDS session auth tokens to be optional. To fully protect IMDS you need to enable session tokens by using the `metadata_options` block with its `http_tokens` variable set to `required`. | If the IMDS session authentication tokens are set to optional rather than required, unauthorized users may exploit this to intercept or spoof metadata requests. This can lead to the leakage of sensitive data or credentials associated with AWS instances, increasing the risk of security breaches and potential system compromises. |
| 67 | ec2_enforce-launch-config-http-token-imds | IMDS v2 (Instance Metadata Service) introduced session authentication tokens which improve security when talking to IMDS. By default the `aws_instance` resource sets IMDS session auth tokens to be optional. To fully protect IMDS you need to enable session tokens by using the `metadata_options` block with its `http_tokens` variable set to `required`. | If the IMDS session authentication tokens are set to optional rather than required, unauthorized users may exploit this to intercept or spoof metadata requests. This can lead to the leakage of sensitive data or credentials associated with AWS instances, increasing the risk of security breaches and potential system compromises. |
| 68 | ec2_volume-encryption-customer-key | Encryption using AWS keys provides protection for your EBS volume. To increase control of the encryption and manage factors like rotation, use customer managed keys. | Using AWS managed keys does not allow for fine-grained control. |
| 69 | ecr_repository-customer-key | Images in the ECR repository are encrypted by default using AWS managed encryption keys. To increase control of the encryption and control the management of factors like key rotation, use a Customer Managed Key. | Using AWS managed keys does not allow for fine-grained control. |
| 70 | ecs_enable-container-insight | CloudWatch Container Insights provides more metrics and logs for container-based applications and microservices. | Not all metrics and logs may be gathered for containers when Container Insights isn't enabled. |
| 71 | aws_elasticsearch_enable-domain-logging | Amazon ES exposes four Elasticsearch logs through Amazon CloudWatch Logs: error logs, search slow logs, index slow logs, and audit logs. All the logs are disabled by default. | Logging provides vital information about access and usage. |
| 72 | aws_elasticache_add-description-for-security-group | Security groups and security group rules should include a description for auditing purposes. | If security groups and their rules lack descriptions, it complicates auditing and understanding the purpose and appropriateness of security configurations. This can lead to difficulties in managing and identifying outdated or unnecessary rules, increasing the risk of security gaps and misconfigurations. |
| 73 | aws_elasticache_enable-backup-retention | Redis clusters should have a snapshot retention time to ensure that they are backed up and can be restored if required. | If Redis clusters do not have a snapshot retention policy set, there may be no backups available to restore data in case of data corruption or loss. This can result in permanent data loss and significant disruptions to operations dependent on that data. |
| 74 | aws_iam_no-password-reuse | IAM account password policies should prevent the reuse of passwords. | Password reuse increases the risk of compromised passwords being abused. |
75 |
aws_iam_require-lowercase-in-passwords |
AM account password policies should ensure that passwords content including at least one lowercase character. |
Short, simple passwords are easier to compromise |
76 |
aws_iam_require-numbers-in-passwords |
IAM account password policies should ensure that passwords contain at least one number. |
Short, simple passwords are easier to compromise |
77 |
aws_iam_require-symbols-in-passwords |
IAM account password policies should ensure that passwords contain at least one symbol. |
Short, simple passwords are easier to compromise |
78 |
aws_iam_require-uppercase-in-passwords |
IAM account password policies should ensure that passwords contain at least one uppercase character. |
Short, simple passwords are easier to compromise |
79 |
aws_iam_set-max-password-age |
IAM account password policies should have a maximum age specified. |
Long-lived passwords increase the likelihood of a password eventually being compromised |
80 |
aws_iam_set-minimum-password-length |
IAM account password policies should ensure that passwords have a minimum length. The account password policy should enforce a minimum password length of at least 14 characters. |
If IAM account password policies do not enforce a minimum password length, weaker and shorter passwords may be used, increasing the risk of brute force attacks and unauthorized access. This can lead to compromised user accounts and potential data breaches. |
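Entries 74 through 80 can all be satisfied by a single Terraform resource. A sketch, with the numeric values chosen as illustrative examples rather than mandated thresholds:

```hcl
# One password policy resource addresses entries 74-80.
resource "aws_iam_account_password_policy" "strict" {
  password_reuse_prevention    = 24   # entry 74: prevent password reuse
  require_lowercase_characters = true # entry 75
  require_numbers              = true # entry 76
  require_symbols              = true # entry 77
  require_uppercase_characters = true # entry 78
  max_password_age             = 90   # entry 79: maximum age in days
  minimum_password_length      = 14   # entry 80: at least 14 characters

  allow_users_to_change_password = true
}
```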
81 |
aws_lambda_restrict-source-arn |
When the principal is an AWS service, specify the ARN of the specific resource within that service to grant permission to. |
Without a source ARN, any resource from the principal is granted permission, even if that resource belongs to another account. |
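A Terraform sketch for entry 81, restricting invocation to a single bucket; the referenced function and bucket resources are illustrative:

```hcl
# Restrict which S3 bucket may invoke the function (names illustrative).
resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example.function_name
  principal     = "s3.amazonaws.com"

  # Without source_arn, any S3 bucket, even in another account,
  # could invoke this function.
  source_arn = aws_s3_bucket.example.arn
}
```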
82 |
aws_mq_enable-general-logging |
Logging should be enabled to allow tracing of issues and activity to be investigated more fully. Logs provide additional information and context which is often invaluable during investigation. |
If logging is not enabled, tracing issues and investigating activities within systems become significantly more difficult. This lack of detailed logs and historical data can hinder effective troubleshooting and delay responses to security incidents, potentially exacerbating damage and operational disruptions. |
83 |
aws_rds_enable-performance-insights-encryption |
When enabling Performance Insights on an RDS cluster or RDS DB instance, an encryption key should be provided. |
Data can be read from the RDS Performance Insights if it is compromised |
84 |
aws_rds_encrypt-cluster-storage-data |
Encryption should be enabled for an RDS Aurora cluster. When enabling encryption by setting the kms_key_id, storage_encrypted must also be set to true. |
Data can be read from the RDS cluster if it is compromised |
85 |
aws_rds_encrypt-instance-storage-data |
Encryption should be enabled for RDS database instances. When enabling encryption by setting the kms_key_id, storage_encrypted must also be set to true. |
Data can be read from RDS instances if compromised |
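Entries 83 through 85 can be illustrated with one instance definition. A sketch, assuming a customer-managed KMS key named `aws_kms_key.rds` exists elsewhere in the configuration:

```hcl
# Illustrative instance; covers entries 83-85.
resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20

  # Encrypt instance storage with a customer-managed key (entry 85;
  # the same pair applies to aws_rds_cluster for entry 84).
  storage_encrypted = true
  kms_key_id        = aws_kms_key.rds.arn

  # Encrypt Performance Insights data as well (entry 83).
  performance_insights_enabled    = true
  performance_insights_kms_key_id = aws_kms_key.rds.arn
}
```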
86 |
aws_redshift_encryption-customer-key |
Redshift clusters that contain sensitive data or are subject to regulation should be encrypted at rest to prevent data leakage should the infrastructure be compromised. |
If Redshift clusters holding sensitive or regulated data are not encrypted at rest, that data can be read should the underlying infrastructure be compromised, leading to data leakage and potential compliance violations. |
87 |
aws_redshift_use-vpc |
Redshift clusters that are created without subnet details will be created in EC2 classic mode, meaning that they will be outside of a known VPC and running on shared tenancy. |
Clusters running outside a VPC in EC2 classic mode are exposed to broader network access, increasing the likelihood of unauthorized access and data leakage if the infrastructure is compromised. |
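A Terraform sketch covering entries 86 and 87 together; the KMS key and subnet group references are illustrative assumptions:

```hcl
# Illustrative cluster; covers entries 86 and 87.
resource "aws_redshift_cluster" "example" {
  cluster_identifier = "example"
  node_type          = "dc2.large"

  # Encrypt data at rest with a customer-managed key (entry 86).
  encrypted  = true
  kms_key_id = aws_kms_key.redshift.arn

  # Place the cluster inside a VPC rather than EC2 classic (entry 87).
  cluster_subnet_group_name = aws_redshift_subnet_group.example.name
}
```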
88 |
aws_s3_block-public-acls |
S3 buckets should block public ACLs on buckets and any objects they contain. With this enabled, PUT calls will fail if the object has any public ACL specified. |
PUT calls with public ACLs specified can make objects public |
89 |
aws_s3_block-public-policy |
S3 buckets should have block public policy enabled to prevent users from attaching a policy that enables public access. |
Without a block public access policy on S3 buckets, users can unintentionally or maliciously set policies that expose data to public access, leading to potential data leaks and unauthorized data exposure. |
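Entries 88 and 89 are both handled by a single `aws_s3_bucket_public_access_block` resource. A sketch, with the bucket reference as an illustrative assumption:

```hcl
# One resource handles both entries 88 and 89 (bucket name illustrative).
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true # entry 88: reject PUTs with public ACLs
  block_public_policy     = true # entry 89: reject public bucket policies
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```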
90 |
aws_s3_enable-bucket-encryption |
S3 Buckets should be encrypted to protect the data that is stored within them if access is compromised. |
If S3 buckets are not encrypted, data stored within them is vulnerable to unauthorized access and potential exposure in the event of a security breach. This can lead to data theft, loss of privacy, and compliance violations. |
91 |
aws_s3_enable-bucket-logging |
Buckets should have logging enabled so that access can be audited. |
If logging is not enabled on S3 buckets, access to the bucket cannot be audited, which complicates the detection of unauthorized access or data breaches. This lack of visibility increases the risk of security incidents going unnoticed and unaddressed. |
92 |
aws_s3_enable-versioning |
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning you can recover more easily from both unintended user actions and application failures. |
If versioning is disabled in Amazon S3, accidental deletions or overwrites of objects cannot be reversed, leading to potential permanent data loss. Additionally, it limits the ability to restore previous versions of data after application failures or human errors, increasing the risk of operational disruptions. |
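Entries 90 through 92 can be sketched against one bucket. This assumes the AWS provider v4+ resource split and an existing KMS key and log bucket, all illustrative:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-data-bucket" # illustrative name
}

# Entry 90: server-side encryption with a customer-managed KMS key.
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}

# Entry 91: access logging to a separate bucket.
resource "aws_s3_bucket_logging" "example" {
  bucket        = aws_s3_bucket.example.id
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "access-logs/"
}

# Entry 92: versioning so deleted or overwritten objects can be recovered.
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}
```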
93 |
aws_sns_enable-topic-encryption |
Topics should be encrypted to protect their contents. |
The SNS topic messages could be read if compromised |
94 |
aws_sqs_enable-queue-encryption |
Queues should be encrypted to protect queue contents. |
The SQS queue messages could be read if compromised |
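Entries 93 and 94 have the same remediation shape. A sketch, assuming a shared customer-managed key `aws_kms_key.messaging`:

```hcl
# Entry 93: encrypt SNS topic contents with a KMS key.
resource "aws_sns_topic" "example" {
  name              = "example-topic"
  kms_master_key_id = aws_kms_key.messaging.arn
}

# Entry 94: encrypt SQS queue contents with the same key.
resource "aws_sqs_queue" "example" {
  name              = "example-queue"
  kms_master_key_id = aws_kms_key.messaging.arn
}
```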
95 |
aws_workspaces_workspace_enable-disk-encryption |
Workspace volumes for both user and root should be encrypted to protect the data stored on them. |
If workspace volumes for both user and root are not encrypted, sensitive data stored on these volumes could be accessed and read by unauthorized parties, leading to potential data breaches and privacy violations. |
96 |
kubernetes_no-public-egress |
You should not expose infrastructure to the public internet except where explicitly required |
Exfiltration of data to the public internet |
97 |
kubernetes_no-public-ingress |
You should not expose infrastructure to the public internet except where explicitly required |
Exposure of infrastructure to the public internet |
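For entries 96 and 97, a network policy defined through the Terraform Kubernetes provider can limit traffic to private ranges instead of `0.0.0.0/0`. A sketch; the namespace and CIDR are illustrative:

```hcl
# Illustrative policy: allow traffic only within a private CIDR,
# rather than the public internet, for both ingress and egress.
resource "kubernetes_network_policy" "restrict_traffic" {
  metadata {
    name      = "restrict-traffic"
    namespace = "default"
  }
  spec {
    pod_selector {}
    policy_types = ["Ingress", "Egress"]
    ingress {
      from {
        ip_block {
          cidr = "10.0.0.0/8"
        }
      }
    }
    egress {
      to {
        ip_block {
          cidr = "10.0.0.0/8"
        }
      }
    }
  }
}
```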
98 |
github_no-plain-text-action-secrets |
For the purposes of security, the contents of the plaintext_value field have been marked as sensitive to Terraform, but this does not hide it from state files. State should always be treated as sensitive. |
Unencrypted sensitive plaintext value can be easily accessible in code. |
99 |
github_require_signed_commits |
GitHub branch protection should be set to require signed commits. You can do this by setting the require_signed_commits attribute to 'true'. |
Commits may not be verified and signed as coming from a trusted developer |
100 |
github_enable_vulnerability_alerts |
GitHub repository should be set to use vulnerability alerts. You can do this by setting the vulnerability_alerts attribute to 'true'. |
Known vulnerabilities may not be discovered |
101 |
github_private |
GitHub repositories should be set to private. You can do this by either setting the private attribute to 'true' or the visibility attribute to 'internal' or 'private'. |
Anyone can read the contents of the GitHub repository and leak intellectual property |
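Entries 99 through 101 can be addressed with the GitHub Terraform provider. A sketch, with the repository and branch names as illustrative assumptions:

```hcl
# Entries 100-101 (repository name illustrative).
resource "github_repository" "example" {
  name                 = "example-repo"
  visibility           = "private" # entry 101: restrict repository access
  vulnerability_alerts = true      # entry 100: enable vulnerability alerts
}

# Entry 99: require signed commits on the main branch.
resource "github_branch_protection" "main" {
  repository_id          = github_repository.example.node_id
  pattern                = "main"
  require_signed_commits = true
}
```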
102 |
aws_iam_enforce-group-mfa |
IAM groups should be protected with multi-factor authentication to add safeguards against password compromise. Use terraform-module/enforce-mfa/aws to ensure that MFA is enforced. |
Failing to protect IAM groups with Multi-Factor Authentication (MFA) can leave your Terraform-managed AWS environment vulnerable to unauthorized access, especially if password credentials are compromised. Implementing MFA adds an additional layer of security by requiring a second form of verification, significantly reducing the risk of an attacker gaining access through stolen or weak credentials. Using the `terraform-module/enforce-mfa/aws` ensures that MFA is enforced for all IAM group users, providing a stronger security posture against potential breaches. |
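A sketch of using the module named above; the input variable names shown here (policy_name, groups) are assumptions, so consult the module's registry documentation for the exact interface:

```hcl
module "enforce_mfa" {
  source = "terraform-module/enforce-mfa/aws"

  # Hypothetical inputs; verify against the module's documented variables.
  policy_name = "managed-mfa-enforcement"
  groups      = [aws_iam_group.admins.name]
}
```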