No supported authentication methods available (server sent: publickey)

When you use PuTTY to connect to your AWS EC2 instance and get the error ‘No supported authentication methods available’, check the following.

  1. The user name may be incorrect. If you are using the Amazon Linux AMI, it is usually ec2-user.
  2. You may be trying to connect without loading the private key you downloaded. In PuTTY, go to Connection -> SSH -> Auth and load the private key you downloaded (the .ppk file). Then enter the hostname in the Session tab and click the Open button. You should now be able to log in to the EC2 instance.

Can’t connect to MySQL server on *.amazonaws.com (110)

If you are trying to connect to an AWS RDS database, say from an EC2 instance or a client, and you are getting the error ‘Can’t connect to MySQL server (110)’, then the most probable cause is the security group configuration of the RDS instance. Check whether an inbound rule is defined that allows your instance to connect on the RDS port. For MySQL, this is most likely 3306, or whatever is defined in your RDS configuration.

If you have defined the rule but still can’t connect, try temporarily opening the rule to 0.0.0.0/0 and check whether you are able to connect (remember to remove this open rule afterwards). If you can connect, then the most likely cause is that you configured your custom IP wrongly. I was once trying to connect to RDS and wasted a lot of time assuming my EC2 instance’s IP was a certain address, and that assumption was wrong.
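A quick way to narrow this down from the client side is a plain TCP probe: a security-group block usually shows up as a timeout (errno 110, the same code in the MySQL error), while a reachable host with nothing listening on the port is refused immediately. A minimal sketch in Python; the RDS endpoint below is a placeholder, not a real hostname:

```python
import socket

def probe(host, port=3306, timeout=5):
    """Classify a TCP connection attempt to a database endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        # No response at all: typical of a security-group block
        # (this surfaces as MySQL error 110, ETIMEDOUT).
        return "timeout"
    except ConnectionRefusedError:
        # Host reachable, but nothing listening on this port:
        # likely a wrong port or a stopped server, not a firewall issue.
        return "refused"
    except OSError:
        return "error"

# Example with a placeholder endpoint:
# print(probe("mydb.xxxxxx.us-east-1.rds.amazonaws.com"))
```

If this returns "timeout", look at the security group first; if it returns "refused", the security group is probably fine and the port or service is the problem.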

AWS – Database Services, Differences and their use cases

I have been reading up on AWS services these days, and I notice that the more I learn, the more I find how much I don’t know; it is also becoming tough to remember everything. There is no quick cheat sheet available to recall the services and when to use each one. So, if not for anyone else out there going through similar trouble, then at least for myself, it is best I jot down each of these services, their differences and when to use which.

AWS RDS
  Type: Fully managed RDBMS service
  Engines: MySQL, SQL Server, Oracle, Aurora, PostgreSQL, MariaDB
  Use case: Relational transaction processing and RDBMS applications
  Components: Underlying DBMS engine and its replicas
  Communication: Database drivers
  Performance: Depends on the instance sizes selected

AWS DynamoDB
  Type: Fully managed NoSQL service
  Use case: Gaming, web, mobile and IoT applications
  Components: Schemaless tables
  Communication: Query API
  Performance: Limited by the throughput and capacity chosen

Amazon ElastiCache
  Type: In-memory cache
  Engines: ElastiCache for Memcached (in-memory cache/data store), ElastiCache for Redis (in-memory data store)
  Use case: Fast performance, quick access and sub-millisecond latency, e.g. gaming
  Components: Nodes (RAM), shards (groups of nodes), cluster (group of shards)
  Performance: Sub-millisecond latency

Amazon Neptune
  Type: Fully managed graph database service
  Use case: Social networking, and wherever relationships are to be worked out
  Components: Cluster, reader
  Communication: Apache TinkerPop Gremlin, SPARQL
  Performance: Through replica instances

Amazon Redshift
  Type: Data warehouse
  Engines: Based on PostgreSQL 8.0.2
  Use case: Analysis of data from multiple DBs, ETL
  Components: Cluster nodes (leader and compute nodes)
  Communication: ODBC, JDBC
  Performance: Parallel processing, caching, data storage

Amazon QLDB
  Type: Serverless ledger database; hashed, immutable, append-only
  Data: Ion documents, JSON documents
  Use case: Financial transactions, insurance claims
  Components: Journal storage and index storage
  Communication: Integrates with AWS Kinesis
  Performance: Automatic backups, point-in-time recovery

Amazon DocumentDB
  Type: Managed document (JSON) database service
  Data: Any JSON-like document; compatible with MongoDB
  Use case: Document workloads where low latency is needed
  Components: Cluster (primary and read replicas)
  Communication: Cluster, reader and instance endpoints

Amazon Keyspaces
  Type: Serverless fully managed service compatible with Apache Cassandra
  Use case: Where an Apache Cassandra-like solution is desired
  Communication: Cassandra Query Language (CQL)
  Performance: Unlimited throughput

Amazon Timestream
  Type: Serverless time-series database for IoT
  Storage: In-memory store for recent data, magnetic store for historical data
  Use case: IoT applications
  Communication: Built-in query engine
  Performance: Auto-scaling architecture, 1,000x faster than relational databases
While this is the comparison as of this date, Amazon’s services undergo tremendous changes day by day, so if you come to know of more use cases, differences or notes, please feel free to add a comment here.

Checklist Before You Upgrade a Security Tool

Any security control or activity is usually frowned upon as a bottleneck, and security is often added only as an afterthought. When this is the case, how does one handle the administration or upgrade of security tools? What I see happening across the industry is that upgrades are planned based on the release life cycle of the underlying software, or once a quarter or half-year. But when this is done, only the instructions in the upgrade.txt are followed, textbook-style, and no one foresees anything beyond that.

So, what should one do differently?

Step 1: Check what the upgrade notes mention and whether there are any red flags associated with the update. One code-scanning security tool announced that it would start flagging any code that uses CBC mode in symmetric encryption. The inference you can draw from this is that some of your projects may have been in the green until then; once you upgrade, almost every project using that mode will fail, and suddenly it will look like the entire security control is not working, or you become a bottleneck.

Hence, before you upgrade, check whether any of the red flags mentioned in the security release impact any of your projects/applications. Notify the IT team. Plan for risk acceptance in the short term and remediation in the long term. Then plan the upgrade.

Step 2: A security tool is also software and may contain bugs. It is always better to do a version-comparison exercise before rolling out the upgrade. Take a sample set of projects and compare the results from the earlier version against the current version. Go for the upgrade only if the results are promising and do not create many false negatives or false positives.
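The comparison in Step 2 can be mechanized. Assuming each scanner version can export its findings for the sample projects as a set of identifiers (say, rule name plus file), a small helper can report what the new version introduces or drops; the function and identifier shapes here are illustrative, not any particular tool’s API:

```python
def diff_findings(old_version, new_version):
    """Compare findings from two scanner versions over the same sample projects.

    Each argument is an iterable of hashable finding identifiers,
    e.g. ("CBC-mode-cipher", "src/crypto.py").
    """
    old_set, new_set = set(old_version), set(new_version)
    return {
        "introduced": sorted(new_set - old_set),  # new flags: review for false positives
        "dropped": sorted(old_set - new_set),     # gone flags: review for false negatives
        "unchanged": sorted(old_set & new_set),
    }
```

A large "introduced" bucket is exactly the red-flag scenario from Step 1; a large "dropped" bucket deserves just as much scrutiny before the rollout.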

Step 3: Try the upgrade in a test environment to check if all goes well.

Step 4: Do an infrastructure sizing exercise, estimate future DB growth and plan for it.

Step 5: Always notify end users before planning an upgrade.

Step 6: Do the upgrade during non-usage hours.

Step 7: Test out all features after the upgrade.

Step 8: Send a notification to all end users post-upgrade, along with instructions on what has changed.

Step 9: Allocate staff hours to offer L1 support, at least during the initial two weeks.

Step 10: Monitor the success of the upgrade, note down any learnings and introduce process improvement changes to the upgrade SOP document.

Anything you think I may have left out?

IAST Versus DAST – In DevSecOps Pipeline

During one of my consulting engagements, a customer SME asked me why he can’t use just IAST as the security control in DevSecOps and run DAST/application pentesting out of band. It is a very interesting proposition, I should say.

IAST – Interactive Application Security Testing – means having agents running inside your application’s runtime that are triggered as users navigate the application. The development team does not need to wait for the security team to be available to learn of vulnerabilities, which makes it an early winner. One may argue that in DevSecOps one can integrate even a DAST control, but almost all scanners in the market today expect the scan job to be pre-configured and a scan ID to be passed via the CI build. That means every time the application’s use case changes or a major workflow is added, the security team’s intervention becomes a must.

Whereas with IAST, the security team is not needed. The extra advantage is that while IAST points out the vulnerable URLs, it also gives the exact stack trace, which makes it easier for developers to fix.

I would also like to point out some of the disadvantages of choosing IAST over DAST.

  1. IAST supports only a few technologies as of now, such as Java, .NET and Ruby.
  2. Some attack classes, such as session-management flaws, and more sophisticated coverage are not provided by IAST.

What this means is that while one can use IAST to catch the early birds in a DevOps pipeline, DAST can follow later, say once every quarter or two, based on the security team’s bandwidth.
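One way to wire this split into a pipeline is a gating policy: only in-band IAST findings can fail a build, while out-of-band DAST findings are recorded for the periodic review. A minimal, hypothetical sketch; the field names, severity scale and threshold are assumptions, not any specific product’s schema:

```python
# Hypothetical severity ordering; real tools have their own scales.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_fail_build(findings, gate_severity="high"):
    """Fail the CI build only for in-band (IAST) findings at or above
    the gate severity; DAST findings are reported but never block."""
    gate = SEVERITY_RANK[gate_severity]
    return any(
        f["source"] == "iast" and SEVERITY_RANK[f["severity"]] >= gate
        for f in findings
    )
```

The design choice here is that DAST results, arriving out of band, feed the backlog rather than the build gate, so a quarterly scan never blocks an unrelated release.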