<malinoff> Hi, what should I do if nobody from AWS support has responded in almost 24 hours to a high-severity issue on the Business support plan?
<sgran> escalate through your SDM
<malinoff> sgran: thanks, where can I find a way to contact him?
<shanemeyers> malinoff: open a chat. it isn't foolproof, but we've found it to be a better way to get eyes on the issue quickly
<sgran> malinoff: you should already have a working relationship. If you don't, go through chat
<malinoff> shanemeyers, sgran: okay, it looks like I have to file another issue just to open a chat. Is this normal, or am I missing something?
<malinoff> sorry for asking dumb questions
<shanemeyers> no, just open the ticket in the console and start writing as if you were going to reply to the ticket, but select chat instead of email
<malinoff> oh, thanks
<malinoff> very intuitive :)
<urbanendeavour> hi
<urbanendeavour> How can I list all the resources that I have provisioned on AWS?
<malinoff> shanemeyers, sgran: thank you both very much. Looks like chat is the only way to actually get a response :)
<malinoff> urbanendeavour: why do you need it? There are several possible answers to this question, and the correct one depends on your reasons
<sgran> complain - that should not be the case
<urbanendeavour> I want to know what I will be billed
<urbanendeavour> I also would like to know how many resources I have, for inventory.
<malinoff> urbanendeavour: https://console.aws.amazon.com/billing/home#/costexplorer
<urbanendeavour> Thanks, so no resource list?
<sgran> there are several resource lists
<sgran> one per resource type
<malinoff> urbanendeavour: I *think* the only way to actually get everything is to run the CloudFormation template designer.
<malinoff> But it will produce a giant, hardly readable JSON file which you will have to sort out manually
<urbanendeavour> thanks
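For an inventory-style listing, one option beyond Cost Explorer is the Resource Groups Tagging API, which returns the ARNs of taggable resources in a region. A minimal boto3 sketch (the region is a placeholder, and resource types that don't support tagging won't appear):

```python
import boto3

# List the ARNs of resources in one region via the Resource Groups
# Tagging API. Only resource types that support tagging show up.
client = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

paginator = client.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```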
<oneroad> anyone here know about EBS?
<oneroad> one of my EBS volumes is performing really slowly
<oneroad> hdparm -t /dev/xvdf: /dev/xvdf: Timing buffered disk reads: 6 MB in 3.68 seconds = 1.63 MB/sec
<oneroad> compared with others:
<oneroad> hdparm -t /dev/xvdg: /dev/xvdg: Timing buffered disk reads: 356 MB in 3.01 seconds = 118.22 MB/sec
<oneroad> is it possible I've ended up with a shitty EBS volume?
<oneroad> and if so, the solution is to snapshot it and create a new EBS volume, right?
<malinoff> oneroad: yes, you can do that. I would also open an issue; such performance degradation may reveal underlying disk storage issues
<oneroad> malinoff: I think it's actually my fault
<oneroad> I'm using gp2 volumes
<oneroad> and I think I 'burst' for too long and am being throttled
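Sustained gp2 throttling shows up in the volume's BurstBalance CloudWatch metric: once it reaches 0%, the volume falls back to its baseline IOPS. A boto3 sketch for checking it (the volume ID is a placeholder):

```python
import boto3
from datetime import datetime, timedelta

# Pull the last hour of burst-credit balance for a gp2 volume.
# An average near 0% means the volume is throttled to baseline IOPS.
cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```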
<netcho> hi all, modified my RDS instance (boosted storage from 256GB to 1TB) and it has been in "modifying" status for more than an hour... I don't think this is good?
<netcho> can I revert it or something?
<_mak> is there a simple way to enable access to port 22 on an instance from all IPs in the AWS infrastructure?
<_mak> I checked their ip-ranges.json and it is huge; I tried to add all the IPs from us-east-1 and got an error saying I could only have 50 rules
<_mak> can't find anything useful on Google
<ecornips> @_mak no idea, sorry, but have you considered using iptables on the boxes and a script like https://enterprisey.enterprises/t/matching-autonomous-system-numbers-in-iptables/25 to do it by ASN on each machine?
<_mak> I haven't.. I'll check the link.. thanks
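For reference, filtering the published ip-ranges.json down to one region's EC2 prefixes takes only a few lines of Python; a sketch (the file's documented fields are ip_prefix, region, and service):

```python
import json
import urllib.request

# Fetch AWS's published IP ranges and keep only the EC2 prefixes for
# one region. The result is usually still far more than the 50-rule
# security group limit mentioned above.
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
with urllib.request.urlopen(URL) as resp:
    ranges = json.load(resp)

prefixes = sorted(
    p["ip_prefix"]
    for p in ranges["prefixes"]
    if p["region"] == "us-east-1" and p["service"] == "EC2"
)
print(len(prefixes), "CIDR blocks")
```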
<ecornips> or reduce your scope by setting up a single machine (or small group of machines) with a fixed IP as your SSH 'jump host', which you would let any IP access. Then for the rest of your machines, restrict access to only allow connections from the jump host IPs.
<ecornips> I also strongly recommend changing your SSH port to something non-standard, e.g. 5544 (pulling a random number out of my head), so port scanners can't easily find them
<_mak> ecornips: how would changing the port number prevent the port from being discovered by scanners?
<ecornips> most port scanners just check common ports and all ports < 1024. You can make a port scanner scan every single port (1-65535), but that takes significantly longer and raises a bigger risk of IDS systems picking it up as port scanning.
<ecornips> e.g. the most popular tool, nmap, has options for only scanning the top X services: https://nmap.org/book/man-port-specification.html
<_mak> ecornips: oh nice, that's new to me, I thought the standard practice was to scan all ports...
<ecornips> If I were an attacker, I would focus on ssh running on the default port since there are so many of them, and anyone running on a higher port indicates to me that they're already at least semi-aware of security issues
<_mak> thanks :)
<_mak> right
<_mak> ecornips: but in the case of amazon
<_mak> what can an attacker do without the key?
<_mak> exploit a bug in ssh?
<ecornips> same as with non-amazon - yeah
<_mak> that would be it?
<ecornips> there have been issues with ssh in the past involving keys too
<ecornips> I think Debian had a flawed keygen at some point, so if Amazon-issued keys turned out to have a similar undiscovered flaw…
<_mak> hmm I see
<ecornips> but in practice there's no absolute rule; design to meet your security requirements, not some absolute Fort Knox if that's not what is required
<_mak> ecornips: yeah, good points, thanks for that
<ecornips> no probs
<ljosberinn> hi all! I have deployed a Python app to Beanstalk. The thing is that the Python app should generate and store some PDF files. Now, I was usually doing all this directly on the system, but it seems that I cannot do it like that on Beanstalk...
<ljosberinn> so, I wonder, what would you suggest I do?
<ljosberinn> I'd actually like to store those documents somewhere else, rather than on the same machine where the Python code is (e.g. S3 or similar?)
<ljosberinn> but I'm not sure if that is the recommended way to do it... does aws have some other solution for this use case?
<ljosberinn> should I use EFS rather than S3?
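S3 is the usual answer here, since Beanstalk instances are disposable and local files can vanish on redeploy or scale-in; EFS fits better when a shared POSIX filesystem is genuinely required. A sketch of uploading a generated PDF with boto3 (the bucket name is a placeholder):

```python
import boto3

# Push a locally generated PDF to S3 instead of keeping it on the
# instance's disk, which is not durable on Beanstalk.
s3 = boto3.client("s3")

def store_pdf(local_path: str, key: str) -> str:
    bucket = "my-app-documents"  # placeholder bucket name
    s3.upload_file(
        local_path, bucket, key,
        ExtraArgs={"ContentType": "application/pdf"},
    )
    return "s3://{}/{}".format(bucket, key)

print(store_pdf("/tmp/invoice.pdf", "invoices/invoice-0001.pdf"))
```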
<gregf_> hello
<gregf_> I've managed to log in to the aws console. where would I get the endpoint for the dynamodb I'm using, please?
<Rein|VPS> AWS WorkSpaces, what is the memory limit on them?
<Rein|VPS> why are AWS WorkSpaces limited to 7.6GiB of mem, and is this ever likely to change?
<AikiLinux> Hi, is there a way to get the aws cli to give a more descriptive response as to why it is rejecting access to a bucket?
<AikiLinux> I know I defined the credentials right and gave access permissions, yet it still does not allow access
<AikiLinux> while other accounts are able to read from the bucket
<AikiLinux> also, when I try to see the event in CloudTrail, it does not show it, says event not found
<gregf_> hello, I'm trying to perform a simple action on dynamodb (ListTables in this case) and it's failing
<gregf_> User: arn:aws:iam::906376499260:user/vms_master is not authorized to perform: dynamodb:ListTables on resource:
<gregf_> this is on live (this setup was done by the previous dev :|). is there a way to enable this, please?
<gregf_> I've got a local dynamodb running and I don't have any issues; nothing uses iam roles on dev
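That error means the vms_master user simply lacks the DynamoDB permission. One way to grant it, sketched with boto3 (the policy name is illustrative, and the call must be made with credentials that have IAM access):

```python
import json
import boto3

# Attach an inline policy allowing dynamodb:ListTables to the user.
iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:ListTables"],
        "Resource": "*",  # ListTables doesn't support resource-level scoping
    }],
}

iam.put_user_policy(
    UserName="vms_master",
    PolicyName="allow-dynamodb-listtables",  # illustrative policy name
    PolicyDocument=json.dumps(policy),
)
```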
<jamesllondonlapt> What's wrong with this aws-cli query? 'Reservations[*].Instances[*].[InstanceId,State.Name,Tags[?Key==`Name`].Value]' The Tags...Value causes the output to have a line break instead of the related values being on one line.
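The filter Tags[?Key==`Name`].Value evaluates to a *list* of matching values, and the CLI renders a nested list on its own line; piping the filter through | [0] collapses it to a single scalar. A sketch using the jmespath Python package (the same query language the CLI uses) against a minimal fake response:

```python
import jmespath

# `| [0]` flattens the one-element list the tag filter produces, so
# each instance renders as a single row in the CLI's text output.
query = (
    "Reservations[*].Instances[*]."
    "[InstanceId, State.Name, Tags[?Key=='Name'].Value | [0]]"
)

# Minimal fake describe-instances response for illustration.
response = {
    "Reservations": [{
        "Instances": [{
            "InstanceId": "i-0abc",
            "State": {"Name": "running"},
            "Tags": [{"Key": "Name", "Value": "web-1"}],
        }],
    }],
}

print(jmespath.search(query, response))
# [[['i-0abc', 'running', 'web-1']]]
```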
<billy_b> How can I set up a lambda function to trigger whenever my website is accessed?
<billy_b> Specifically, my website that is a bucket on S3
<billy_b> Any ideas on where to start with this?
<fod> you can expose it via the API gateway
<fod> and just hit it with some js
<billy_b> Can you link me to a tutorial on this?
<fod> http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html
<billy_b> What if I want to trigger the function to get data from my dynamodb database?
<billy_b> Like someone accesses the website and that triggers the lambda function to get data from dynamodb
<billy_b> Would that tutorial still apply?
<fod> i don't see why not.
<fod> the tutorial just covers creating a simple REST api - you can have it do anything on the back end.
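A sketch of what the Lambda behind such an API Gateway endpoint might look like, reading an item from DynamoDB (the table name and key are made up, and a proxy-style integration is assumed):

```python
import json
import boto3

# Handler for an API Gateway proxy integration: the page's JS calls
# the endpoint, and the function reads data out of DynamoDB.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("site-data")  # made-up table name

def lambda_handler(event, context):
    item = table.get_item(Key={"page": "home"}).get("Item", {})
    return {
        "statusCode": 200,
        # CORS header so a browser on the S3-hosted site may call this
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps(item, default=str),
    }
```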
<ricksebak> i have some software that is using the API and attempting to create an instance in a subnet, and the API is erroring with "SubnetNotFound". but that's the only output of this software, so i don't really know why it's saying that.
<ricksebak> is that something i can log via cloudtrail?
<ricksebak> or log via anything. i'd like to get as much info as possible about those API calls, in case it's using an incorrect role or something.
<uictamale> I'm getting an awful lot of "unable to complete request at this time" errors when trying to make a new RDS instance
<uictamale> anyone else?
<rideh> does cli53 not support instance roles?
<rideh> @ricksebak yes, enable CloudTrail for API calls; you should see what it's executing there. what software?
<ricksebak> gocd, for building AMIs, but i think i just stumbled onto why it was failing. thanks!
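Once CloudTrail is enabled, recent management-plane calls can also be queried programmatically; a sketch that pulls recent RunInstances events and their error codes (the event name is just an example):

```python
import json
import boto3

# Look up recent RunInstances calls. Each event records the caller
# identity and, on failure, an error code and message (e.g. EC2's
# InvalidSubnetID.NotFound).
cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=20,
)
for e in events["Events"]:
    detail = json.loads(e["CloudTrailEvent"])
    print(e["EventTime"], detail.get("errorCode"), detail.get("errorMessage"))
```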
<rideh> interesting, never used it; started digging into packer
<gchristensen> <3 gocd, <3 building AMIs
<_KaszpiR_> anyone know how AWS EC2 instance status checks work? how do they decide whether an instance is healthy? like checking whether the vm process responds to signals?
<gholms> They don't do anything that involves your instance's internals.
<rideh> @_KaszpiR_ from your instance screen, click the status checks tab
<gholms> The descriptions are on...
<gholms> Yeah. There.
<ricksebak> rideh: if you are getting into packer, you might want to investigate gocd as well. they work together.
<_KaszpiR_> yeah I know that, just wondering if anyone knows more detail about it
<ricksebak> gocd will spin up a worker instance and then packer will configure the instance however you want. so gocd leverages packer to do most of the work.
<rideh> @ricksebak cool, i'll check it out
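For reference, the same checks behind the console's status tab are exposed through the API; a small boto3 sketch (system checks cover the underlying host, instance checks cover the guest's reachability):

```python
import boto3

# Fetch the system and instance status checks for all instances,
# including stopped ones.
ec2 = boto3.client("ec2")

resp = ec2.describe_instance_status(IncludeAllInstances=True)
for status in resp["InstanceStatuses"]:
    print(
        status["InstanceId"],
        "system:", status["SystemStatus"]["Status"],
        "instance:", status["InstanceStatus"]["Status"],
    )
```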
<cloudbud> are Elastic Network Interfaces in your VPC different from a normal interface?
<cloudbud> cochi: hi
<cloudbud> can i associate one subnet with multiple route tables?
<morbid> cloudbud: No, you cannot.
<cloudbud> morbid: what is route propagation in a route table?
<netcho> hi all
<morbid> Routing rules from a VPN or Direct Connect connection.
<netcho> using DMS for migrating my databases to aws
<morbid> So if you want to reach the network on the other end of a VPN, you'd need to propagate the route.
<netcho> did like 30 of them, but i have issues with one DB
<netcho> actually with one table in it
<netcho> it goes really slowly... almost 100k rows per hour
<netcho> i have tables with more than 100M rows that were migrated really quickly
<netcho> the only issue is with this one...
<netcho> nothing strange in the logs... it just says load task and that's it
<netcho> it has been running for more than 5 hours and has only migrated 450k rows
<cloudbud> morbid: what is ClassicLink in a vpc?
<morbid> ClassicLink can link a VPC to instances running in EC2-Classic, which is the old, single-network EC2 platform.
<morbid> I'd read through the VPC FAQ -- lots of good information in there.
<cloudbud> morbid: i can't see the ClassicLink option in my vpc
<cloudbud> wizard
<cloudbud> where do i find it?
<Rein|VPS> aws gib mo4r mem pl0x!
<morbid> cloudbud: You may not have access to EC2-Classic. New accounts do not; hence, ClassicLink is likely not available.
<morbid> Check out the ClassicLink docs in the VPC user guide for details.
<BattleChicken> ok
<cloudbud> okay morbid
<Rein|VPS> anyone know if AWS WorkSpaces will ever get the max mem upped to an adult amount?
<postroast> how can i populate a dynamodb table with an HTTP POST and python? can someone point me to a tutorial on this?
<BattleChicken> I have a problem i can't quite figure out. I've got a scenario set up here where we have a VPN set up
<BattleChicken> local traffic goes local; all other traffic gets funneled through an eni for the VPN, which routes it
<BattleChicken> I'm having some issues getting things working related to DNS. I set up flow logs that show blocking
<BattleChicken> of traffic that the network ACLs explicitly seem to allow.. i'm not sure what's doing the blocking, since it shouldn't be blocked.. example line from the log: 722397377189 eni-111111111 111.111.0.12 111.111.12.184 58466 53 17 2 180 1465930791 1465930851 REJECT OK
<BattleChicken> my network ACL allows 111.111.0.0/18
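For reference, a small sketch decoding that record's fields, assuming the default flow log layout (the pasted line appears to omit the leading version field, so it is left out here); protocol 17 is UDP, and destination port 53 is DNS:

```python
# Decode a VPC flow log record into named fields.
FIELDS = [
    "account_id", "interface_id", "srcaddr", "dstaddr", "srcport",
    "dstport", "protocol", "packets", "bytes", "start", "end",
    "action", "log_status",
]

record = ("722397377189 eni-111111111 111.111.0.12 111.111.12.184 "
          "58466 53 17 2 180 1465930791 1465930851 REJECT OK")

parsed = dict(zip(FIELDS, record.split()))
proto = "udp" if parsed["protocol"] == "17" else parsed["protocol"]
print(parsed["action"], proto, parsed["srcaddr"], "->",
      parsed["dstaddr"], "port", parsed["dstport"])
# REJECT udp 111.111.0.12 -> 111.111.12.184 port 53
```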
<postroast> anyone have any tips?
<postroast> Also, do HTTP POST requests to dynamodb typically use javascript as the language?
<postroast> Like, is all of the code on this page in .js? : http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.LowLevelAPI.html
<postroast> ^javascript
<BattleChicken> ... gotta be something with my network security..
<BattleChicken> we did all of the allow/disallow on the network ACLs, but the permission model in AWS is least access, so if something restricts it more somewhere else i'd get rejections
<BattleChicken> anyone know if there's a detailed log/method to determine which rule is responsible for blocking traffic?
<BattleChicken> i assume not
<highbass> hey guys... does anyone here use jenkins with codedeploy? ... i wanted to know if there is a plugin that supports pulling from github instead of pulling code from s3, triggered through jenkins
<highbass> seems like the jenkins codedeploy plugin only supports s3 pulls
<BattleChicken> If you have two subnets in the same VPC, and one says ALLOW ALL
<BattleChicken> and the other says DENY ALL.. does everything get denied?
<cloudbud> how do internet gateways perform network address translation (NAT) for instances that have been assigned public IP addresses?
<cloudbud> morbid: please can you tell me
<morbid> Sorry dude. I don't understand your question. They route traffic and have a route out to the Internet. Beyond that, it's a mystery to me in terms of how :)
<morbid> highbass: I think people generally put a tarball on S3 from their CI system, but I've never personally used CodeDeploy so I'm not sure.
<highbass> morbid: what is the advantage of that? i just see it as an extra step... tar up the code... push it to s3, and then codedeploy executes and that tar is pulled down....
<highbass> whereas feeding just a commit id through git requires one pulldown straight from git
<highbass> not sure what i am missing in this scenario
<highbass> fewer steps
<morbid> Not sure if it's an advantage or a limitation. :/ Trying to dig up what I recall reading on this..
<highbass> thanks morbid ...
<highbass> looks like the plugin doesn't support git integration, just s3 push ... i will have to create a custom script to feed codedeploy the commit id
<morbid> Yeah, I think that's right. I wonder if you could use CodePipeline to do what you want, though.
<morbid> Not my strong suit, those services.
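For what it's worth, the CodeDeploy API itself accepts a GitHub revision directly, so a post-build step could hand it just the commit ID; a boto3 sketch (application, group, and repository names are placeholders, and the CodeDeploy application must already be connected to GitHub):

```python
import boto3

# Trigger a deployment straight from a GitHub commit, skipping the
# tarball-to-S3 step entirely.
codedeploy = boto3.client("codedeploy")

deployment = codedeploy.create_deployment(
    applicationName="my-app",          # placeholder
    deploymentGroupName="production",  # placeholder
    revision={
        "revisionType": "GitHub",
        "gitHubLocation": {
            "repository": "my-org/my-app",  # owner/repo, placeholder
            "commitId": "0123456789abcdef0123456789abcdef01234567",
        },
    },
)
print(deployment["deploymentId"])
```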
<cloudbud> morbid: actually i was reading about internet gateways on vpc.
<cloudbud> An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IP addresses.
<cloudbud> did not understand how internet gateways work as NAT
<morbid> Yeah, I get you. To us it's invisible. The igw might be doing the magic to wire up the public IP, but that's a detail I don't believe we need to worry about.
<cloudbud> morbid: i'm still not able to understand how it works as NAT ??
<morbid> It doesn't, in terms of a NAT gateway or NAT instance. It's serving as a NAT for the VPC to bridge the instances' public IPs. I don't think you need to understand it more deeply -- i don't, anyway.
<cloudbud> morbid: what do you mean by wire up the public IP?
<morbid> I take that paragraph you quote to mean it's doing the address translation between the instances' private and public IPs
<morbid> while NAT gateways/instances translate traffic for instances without a public IP.
<morbid> but I'm just interpreting the paragraph you pasted. :) As I said, those are implementation details we don't need to be concerned with IMO.
<cloudbud> http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
<cloudbud> i was reading that link, morbid
<postroast> how can i update a table in my dynamodb using HTTP POST? with python if possible
<Shackle_> You'll probably want to use the Python SDK. https://aws.amazon.com/sdk-for-python/
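A minimal boto3 sketch of writing a posted record into a table (the table and field names are made up; the HTTP layer could be any Python web framework):

```python
import boto3

# Write one record into a DynamoDB table. In a web app, `payload`
# would come from the HTTP POST body (e.g. request.json in Flask).
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("guestbook")  # made-up table name

def save_entry(payload: dict) -> None:
    table.put_item(Item={
        "entry_id": payload["entry_id"],
        "name": payload["name"],
        "message": payload["message"],
    })

save_entry({"entry_id": "1", "name": "alice", "message": "hello"})
```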
<cloudbud> morbid: is the elastic ip option in the vpc and ec2 consoles the same ???
<BlackMaria> a boto question. if I want to list what amis are in use in a couple of accounts, what is the best way to do that... a resource-based policy or an iam role?
<BadApe> hello, wondering if anyone can point me in the right direction. i've pushed an update to my website to my s3 bucket, i then expired the cache on cloudfront, but i never see the new content that is in my s3 bucket
<chucky_z> BadApe: not sure if you got an answer or not
<chucky_z> but the cache typically takes for-frickin-ever to expire
<chucky_z> did you refresh and make sure that it was done?
<BadApe> chucky_z, i checked several times
<BadApe> i did /* to invalidate all objects
<BadApe> but for some reason the old site is still there
<BadApe> but the s3 bucket doesn't have the old code
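An invalidation only takes effect once its status flips from InProgress to Completed, which can take a while; a boto3 sketch for issuing one and checking on it (the distribution ID is a placeholder):

```python
import time
import boto3

# Invalidate every object in a CloudFront distribution and report
# the invalidation's current status.
cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
print(resp["Invalidation"]["Status"])  # "InProgress" until it completes
```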
<BattleChicken> i cannot figure out why 53 and 5989 are being blocked.
<BattleChicken> it's explicitly enabled in the network ACL!
<alex88> hello everyone, I have a customer that requested us to do a data pipeline, basically event ingestion, processing, storing, and analytics
<alex88> what does aws offer to solve that?
<gholms> BattleChicken: What about security groups?
<gholms> alex88: The aptly-named AWS Data Pipeline :P
<BattleChicken> i think I'm good. do you know offhand what setting/option in the sec group might cause it?
<alex88> gholms: kinesis -> lambda/emr -> dynamodb/redshift?
<alex88> oh, it's a service of its own: https://aws.amazon.com/datapipeline/
<BattleChicken> i just double checked. the security group seems wide open
<BattleChicken> we're doing the access restriction (at least currently) via the network ACL
<bilb_ono> my "aws_secret_access_key" is gonna be really really long, right?
<bilb_ono> it's the contents of the .pem file they gave me and told me they would never give me again?
<gholms> The pem file is the ssh key pair.
<bilb_ono> gholms: so that's not it, right?
<gholms> The access key is 40 characters long.
<bilb_ono> I can find the access key ID, but not the secret access key
<bilb_ono> it just lists the access key id, tells me a few things about it, and gives me the option to delete it or make it inactive
<bilb_ono> nothing mentioned about the secret one
<gholms> You can't get a secret key from the web UI once it's already created.
<gholms> It only sends it to you at creation time.
<bilb_ono> ok, well they gave me a pem file at creation time
<bilb_ono> is that not it?
<gholms> Does the .pem file look like an ssh key?
<bilb_ono> yeah
<gholms> Then that is not your secret key.
<chucky_z> the secret key is an actual string of text
<gholms> The secret key is 40 characters of ASCII.
<chucky_z> it usually has some \ and + in it, unlike the access key
<bilb_ono> ugh
<bilb_ono> so I'll have to make a new one, it seems
<chucky_z> usually pretty trivial to make a new user and apply the same roles and groups
<bilb_ono> yeah
<chucky_z> and just either delete the problem user, or deactivate it forever
<chucky_z> you'll get a little page that'll say 'hey here's your secret key, do you want to download it?'
<chucky_z> and a confirmation that's like 'be dang sure you wanna close this page'
<gholms> You can just create a new key for the user and then delete the old key, too.
<gholms> No need to delete the whole user
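gholms's suggestion in boto3 terms, a sketch (the user name is a placeholder; the secret is returned only by this one call, so it must be stored immediately):

```python
import boto3

# Create a fresh access key for an existing IAM user. The secret can
# never be retrieved again after this call returns.
iam = boto3.client("iam")

new_key = iam.create_access_key(UserName="bilb_ono")["AccessKey"]  # placeholder user
print("AccessKeyId:    ", new_key["AccessKeyId"])
print("SecretAccessKey:", new_key["SecretAccessKey"])

# After switching over, delete the old key:
# iam.delete_access_key(UserName="bilb_ono", AccessKeyId="AKIA...")
```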
<alex88> gholms: seems that to autoscale kinesis you need an external tool.. wow, that sucks
<gholms> Yeah, they'll probably release something that does that some day, just like they did with instance scaling.
<alex88> aws is so good at such things, and they fail at these simple things
<alex88> like integrating a tool they made into their services
<gholms> They do it all incrementally.
<gholms> e.g. ELB driving auto-scaling
<alex88> there isn't even a metric that shows what percentage of stream capacity you're using
<alex88> you have to do something yourself to compare the available capacity against the usage metrics
<alex88> maybe you can use WriteProvisionedThroughputExceeded, but by the time it's > 0 you should already have scaled
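The resharding call itself is simple; the part AWS doesn't provide is the automation that decides when to make it. A boto3 sketch that doubles a stream's open shard count (the stream name is a placeholder):

```python
import boto3

# Double a stream's capacity. UNIFORM_SCALING splits or merges shards
# evenly; nothing calls this automatically, hence the external tools.
kinesis = boto3.client("kinesis")

desc = kinesis.describe_stream(StreamName="events")["StreamDescription"]  # placeholder
open_shards = [
    s for s in desc["Shards"]
    if "EndingSequenceNumber" not in s["SequenceNumberRange"]  # still open
]

kinesis.update_shard_count(
    StreamName="events",
    TargetShardCount=len(open_shards) * 2,
    ScalingType="UNIFORM_SCALING",
)
```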
<nicolas17> hi
<nicolas17> I need to run some code periodically; if more than 5 minutes pass between runs there will be data loss (or not quite, but I'm simplifying). would a cloudwatch-timer-triggered lambda be reliable enough?
<nicolas17> cloudwatch timers aren't second-accurate at all, but if I make a timer set to trigger every 4 minutes, I'm wondering if I can rely on it never delaying more than 4 or 5 minutes (even if the actual delay is all over the place within that range)
<gholms> Well, if you make it run every two minutes you can miss a round or two without a problem. Would that be worth it?
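The schedule itself is a CloudWatch Events rule; a sketch of wiring one to a Lambda with boto3 (the rule name and function ARN are placeholders, and the 2-minute rate follows gholms's suggestion):

```python
import boto3

# Fire a rule every 2 minutes and point it at a Lambda function. The
# function must separately grant events.amazonaws.com permission to
# invoke it (lambda add-permission).
events = boto3.client("events")

events.put_rule(
    Name="run-every-2-minutes",  # placeholder rule name
    ScheduleExpression="rate(2 minutes)",
    State="ENABLED",
)
events.put_targets(
    Rule="run-every-2-minutes",
    Targets=[{
        "Id": "periodic-worker",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:periodic-worker",  # placeholder
    }],
)
```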