<ayogi> hello guys, is it possible to do vertical auto scaling in AWS?
<ayogi> can i configure instances to scale up and down in terms of CPU power and RAM, as per the computation requirement?
<n473> I mean, you could probably trigger a Lambda function off a CloudWatch alarm, that would copy your existing ASG launch configuration, modify it to change the instance type, save it as a new LC, and then modify the ASG to use that new LC, then cycle the instances within it
<ayogi> n473: how challenging is this, does it require a lot of effort?
<n473> ayogi: if you're familiar with one of the Lambda-supported SDKs, it shouldn't be too impossible, but there will probably be quite a few challenges involved to get it to work effectively
<n473> I mean I'd do it as a learning exercise, but I don't think I'd be willing to run it in production
<n473> much like communism, it is good in theory
<n473> but will probably prove to be problematic in practice
<ayogi> i am running a script which requires high computation power for some time, and i do not want to keep running the large instances
<ayogi> so if i could scale up and down, it would save me money
<n473> so you require the instances to be running at all times, but during the time period in which you want to run this script, you want to raise the instance tier, and then lower it back down?
<ayogi> yeah exactly, not instances, only one instance
<ayogi> yeah i want to raise the allocated resources, and for AWS it means change tier
<n473> ah
<n473> how long would you be running at this scaled-up capacity?
<ayogi> it would be better if there is something like an agent which monitors the resource usage, and then scales up and down as per the usage.
<ayogi> i am not sure how much time it would take
<n473> couple of minutes, probably
<n473> does this instance have stateful data on it?
<ayogi> does it need to be time bounded?
<n473> as in, could you launch it from an AMI?
<ayogi> n473: let me check that
<n473> I mean, does the instance store any stateful information on it? If you were to terminate it and launch a new instance from an AMI, would you be losing data? And if so, is there a way for you to externalize that data?
<n473> I say this because it would be a lot easier to write a Lambda function that could be triggered by a CW alarm that simply launches a new higher-tier instance from an AMI and then terminates the existing one (maybe swapping EIPs or DNS if you need it to)
<n473> than it would be to have to detach and reattach EBS volumes to maintain data consistency across instance scale-ups
<BeerLover> how to make sure that you can only access eb through route 53 and not the url the environment provides?
<ayogi> n473: yeah there is no stateful data on the instance
<ayogi> the script connects to a DB, fetches the data and does some computation on it
<ayogi> and stores the result somewhere else
<n473> so does the instance do some other shit the rest of the time?
<n473> in that case, just create an AMI of the instance, set up a CW alarm to monitor CPU on it, have the alert trigger lambda, have lambda launch new higher-tier instance from AMI, update CW alarm, etc.
<n473> I mean it's going to be a bit of work
<n473> but it's plausible
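A minimal sketch of the flow n473 describes (CloudWatch alarm -> Lambda -> launch a bigger instance from an AMI, terminate the old one). The tag name, AMI id and the tier ladder below are assumptions for illustration, not anything AWS defines; EIP/DNS swapping is left out:

```python
# Sketch of n473's "relaunch from AMI at a higher tier" Lambda.
# TIERS, the AutoVerticalScale tag and the AMI id are all assumptions.

# Hypothetical ladder of instance types to step through.
TIERS = ["t2.medium", "t2.large", "t2.xlarge"]

def next_tier(current):
    """Return the next-larger instance type, or None if already at the top."""
    i = TIERS.index(current)
    return TIERS[i + 1] if i + 1 < len(TIERS) else None

def handler(event, context):
    import boto3  # imported here so next_tier stays testable without boto3
    ec2 = boto3.client("ec2")
    # Find the single instance tagged for vertical scaling (assumed tag).
    res = ec2.describe_instances(
        Filters=[{"Name": "tag:AutoVerticalScale", "Values": ["true"]},
                 {"Name": "instance-state-name", "Values": ["running"]}])
    inst = res["Reservations"][0]["Instances"][0]
    new_type = next_tier(inst["InstanceType"])
    if new_type is None:
        return  # already at the largest tier
    # Launch the replacement from a pre-built AMI, then terminate the old one.
    ec2.run_instances(ImageId="ami-0123456789abcdef0",  # placeholder id
                      InstanceType=new_type, MinCount=1, MaxCount=1)
    ec2.terminate_instances(InstanceIds=[inst["InstanceId"]])
```

A matching "scale down" alarm would call the same handler walking the ladder in the other direction.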
<ayogi> n473: i have another scenario as well, where the computation result is stored locally on the instance
<curiouspig> HI guys, does anyone know what the threshold value under the RDS monitor graph means?
<narcan> curiouspig: hi, where exactly?
<armayss> ECS task stopped due to: Essential container in task exited.
<armayss> hello, i have this error in elastic beanstalk
<pluszak> I'm trying to create an Alarm for cloudfront, how do I add an email address to the dropdown that's available there?
<pluszak> The only one available belongs to a different team :/
<BadApe> hello, is there a managed ldap service on AWS?
<BadApe> i don't have windows AD, however i am deploying a number of services i can manage access to with ldap
<pluszak> why ldap?
<davehewy> hi, is it possible to resolve pub ELB dns to private IP when using peering connections?
<davehewy> quite difficult to do HA when dependencies are in a peered vpc
<BadApe> pluszak, what would you suggest?
<pluszak> I don't know, just asking why ldap
<pluszak> davehewy: have you tried?
<BadApe> pluszak, because jenkins, nexus and the other tools we want to deploy use ldap
<pluszak> well, use the directory service then
<davehewy> @pluszak, how do you mean?
<pluszak> https://aws.amazon.com/directoryservice/
<BadApe> but it requires me to have an existing AD, doesn't it?
<BadApe> i just wanted simple openldap
<pluszak> "Simple AD is a standalone managed directory that is powered by Samba 4 Active Directory Compatible Server"
<armayss> pluszak, do you know if it's possible to debug when ELB fails? The logs i downloaded don't give enough explanation
<pluszak> sounds like it doesn't need AD
<BadApe> ah
<BadApe> thanks
<pluszak> davehewy: ping/traceroute the dns and see how it moves about
<BadApe> well i don't know where samba comes from, i can only see Amazon Cognito Your User Pools and MS AD
<pluszak> There is even an option to spin up a MS AD
<pluszak> but the simple one is at the bottom on the left
<BadApe> i have that option, but not samba4
<BadApe> all i can see is http://pasteboard.co/HJPC8aDNm.png
<yuppie> hello, what's the best way of deleting a lot of security groups at one time?
<yuppie> or how do i just pass on this error? botocore.exceptions.ClientError: An error occurred (InvalidPermission.NotFound) when calling the RevokeSecurityGroupIngress operation: The specified rule does not exist in this security group.
<davehewy> @pluszak, i have done, and i have the DNS resolution feature turned on, which sounds like exactly what I want.
<davehewy> though it's not behaving as i would expect
<pluszak> davehewy: elaborate
<curiouspig> narcan: in RDS
<narcan> curiouspig: indeed, but where exactly, have you got a screen shot?
<guest> davehewy: does DNS resolution need to be enabled on both VPCs involved?
<davehewy> it does
<davehewy> and it is
<davehewy> traceroute on pub elb dns just gives back the public route :/
<armayss> can somebody please tell me how i can debug when an ecs task stops due to a container in elastic beanstalk?
<vegardx> What did you expect, davehewy, that it would somehow be routed internally? You could probably do that with a static route, but not sure why you would want that. Use internal ELBs or private zones in Route 53 to map to private addresses.
<davehewy> @vegardx, https://aws.amazon.com/about-aws/whats-new/2016/07/announcing-dns-resolution-support-for-vpc-peering/
<davehewy> that's what this would suggest, eh?
<rory> off
<vegardx> davehewy: If you need peering, yes. But you can do it without peering as well. You have public and private zones.
<vegardx> Making it possible to resolve foo.bar.zoo differently, depending on being inside the VPC or outside.
<amcm> armayss - go to ecs, pick the cluster, pick the service, view events tab
<amcm> Haven't used beanstalk, but that's what I'd do without it
<davehewy> @vegardx, finding your answers difficult to follow
<vegardx> You have an ELB that you want to access using the internal network inside your VPC, and not over the public network?
<davehewy> if i want to control egress direction, yes
<davehewy> hence it not being as straightforward as simply "using the pub dns"
<vegardx> Looking at things I'm not sure what I'm suggesting is possible with only one ELB. You might have to have two, one internal and one external; both can respond on the same fqdn, but using a different zone.
<kgirthof_> lol accidentally swapped cnames between two eb environments -- just took down the entire search functionality for a major online shopping retailer conglomerate
<kgirthof_> perhaps I should build in a check
* kgirthof_ thinks about consequences
<BadHorsie> I have a VPCA (172.20.32.0/24) peered to VPCB (10.58.20.0/24), VPCB has access to a VPN 10.54.0.0/16 through a VPN tunnel. I could create an instance in VPCB to nat the incoming traffic, but I'm not sure how to route the traffic to said instance because of the different subnet
<BadHorsie> What would be the right way to do that?
<kubblai> BadHorsie: i think you need a nat gateway running a point-to-point vpn server to the 10.54.0.0/16 subnet, then create a route for that subnet to the nat gateway. remember to disable source/destination checks on the instance. i think im understanding your setup
<kubblai> as vpc peering is non-transitive, you probably need to do this in VPCA and VPCB in order to access that subnet from each
<kubblai> someone may be able to correct me here when it comes to NAT Gateways
<kubblai> and sorry, it's a NAT instance, not Gateway, i always get the 2 names confused. NAT Gateway is AWS-controlled NAT'ing
<BadHorsie> kubblai: from the Route Table tho, I don't see how I could redirect the packets from VPCA to a specific server/IP
<kubblai> you need to create the instance first, then edit your route table in VPCA and point 10.54.0.0/16 at the nat instance you made
<kubblai> you'd do it by instance id, BadHorsie
<BadHorsie> kubblai: thanks again!
<Pwntus> [Lambda, Cognito]: Hi. I'm facing a design issue, where I want to show data (fetched from a lambda API) to the public. I can create a Cognito Identity but I don't want the user to provide credentials. The current solution is an authenticated backend server that relays information to the frontend user. However, if a user wants to organize the data using Elasticsearch queries, things get complicated. Any tips/comments on how to do this differently? Thanks.
<plushy> What can I upgrade m1.small to? Like, keeping the root disk as it is
<manpearpig> hey guys, i need some advice regarding my dynamodb table. is this the right channel to ask?
<roberthl> manpearp_: Yes
<manpearp_> sweet
<manpearp_> i have a table of items, where it stores a StoreID, Brand Name, ManufactureID and Quantity
<manpearp_> i have the primary key as storeid, and sort key as manufactureid
<manpearp_> the problem: different brand names may have the same manufactureId as another brand
<manpearp_> how should i approach this?
<roberthl> Well obviously the primary key must be unique, so if having StoreId as the partition key is non-negotiable you'll need to invent a unique field to use for your sort key
<roberthl> Alternatively you could use a composite key, e.g. BrandName-ManufactureID in a single field
<manpearp_> good idea
<manpearp_> yeah im a little stuck as far as the partition key is concerned
<manpearp_> should i just generate a UUID?
<manpearp_> StoreID will indicate the store that carries the brand
<roberthl> Using a UUID is a pretty common pattern, but bear in mind you can do begins_with() lookups against sort keys, so if you can create a composite key starting with the manufacturer id you would then still be able to look up by that value
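A rough sketch of that composite-sort-key idea, with the manufacturer id first so begins_with() still works. The table and attribute names ("StoreItems", "StoreId", "BrandManufactureId") and the "#" separator are assumptions for illustration:

```python
# Sketch of roberthl's composite-sort-key suggestion. Table and attribute
# names here are hypothetical, not from the conversation.

def make_sort_key(manufacture_id, brand_name):
    """Build a composite sort key; putting the manufacturer id first means
    begins_with() can still query by manufacturer alone."""
    return f"{manufacture_id}#{brand_name}"

def query_by_manufacturer(store_id, manufacture_id):
    import boto3  # imported here so make_sort_key is testable without boto3
    from boto3.dynamodb.conditions import Key
    table = boto3.resource("dynamodb").Table("StoreItems")
    resp = table.query(
        KeyConditionExpression=Key("StoreId").eq(store_id)
        & Key("BrandManufactureId").begins_with(manufacture_id + "#"))
    return resp["Items"]
```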
<manpearp_> it's not life or death that i need to have storeid as a primary key, it's just something i thought was the correct way to approach it
<manpearp_> the only field i need to update is quantity
<manpearp_> i just want to know what store has what item & brand
<manpearp_> i might go with the composite key approach, seems simple and straight to the point
<manpearp_> if i were to use a uuid as a sort key, i'd have to store that uuid somewhere every time i want to make an update to that item, correct?
<jordanl> is there a typical pattern for retrieving timeseries data from dynamodb in time-sorted order? for example, in a use case where i'm storing log msgs from irc channels. i'd like to be able to paginate through msgs for a given irc channel, starting with the most recent ones.
<jordanl> i'm trying dynamo for the first time and don't know all its features/conventions yet
<roberthl> manpearp_: You do need access to it
<manpearp_> dang, that's pretty tricky now
<manpearp_> it's probably better that i just create a composite key
<roberthl> jordanl: The only idiomatic way would be to use a timestamp as the sort key, to the highest precision feasible
<manpearp_> my only concern with using a composite key is that i have it connected to elastic search
<manpearp_> it might cause issues when searching for brands/manufactureId
<jordanl> roberthl: e.g. partition key = irc channel ID, sort key = timestamp
<jordanl> ?
<jordanl> no index, just query by irc channel ID and get paginated results back?
<roberthl> That's what I'm suggesting, although you'd want to use at least microsecond precision to limit the chance of messages with the same timestamp overwriting each other
<roberthl> In the time series table I have, I use a sort key of "{Timestamp}-{MessageHash}" to avoid that problem - but I'm not sure how kosher that is
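A sketch of that "{Timestamp}-{MessageHash}" pattern for jordanl's use case: microsecond ISO timestamps sort lexicographically, the hash suffix keeps same-microsecond messages distinct, and ScanIndexForward=False returns newest first. The table and attribute names are assumptions:

```python
# Sketch of the time-series pattern discussed above: partition key = channel,
# sort key = "{Timestamp}-{MessageHash}". "IrcMessages" and the attribute
# names are hypothetical.
import hashlib
from datetime import datetime, timezone

def make_message_key(ts: datetime, message: str) -> str:
    """Microsecond ISO timestamp plus a short hash, so two messages landing
    in the same microsecond still get distinct sort keys."""
    digest = hashlib.sha256(message.encode()).hexdigest()[:8]
    return f"{ts.isoformat(timespec='microseconds')}-{digest}"

def latest_messages(channel_id, limit=50, start_key=None):
    import boto3  # imported here so make_message_key is testable without boto3
    from boto3.dynamodb.conditions import Key
    table = boto3.resource("dynamodb").Table("IrcMessages")
    kwargs = dict(
        KeyConditionExpression=Key("ChannelId").eq(channel_id),
        ScanIndexForward=False,  # descending sort-key order: newest first
        Limit=limit)
    if start_key:  # resume pagination from where the previous page stopped
        kwargs["ExclusiveStartKey"] = start_key
    resp = table.query(**kwargs)
    return resp["Items"], resp.get("LastEvaluatedKey")
```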
<manpearp_> do you have any suggestions for a workaround when using a composite key & elastic search?
<roberthl> Not really, without understanding how you use ES
<jordanl> yes, microsecond precision is doable. i understand the need
<manpearp_> i'm using es to search up brands & manufactureId
<manpearp_> i think what i'll do is have a composite key, and still have brand & manufactureId separate
<jordanl> newbie question: to get a single item, you have to provide the full partition key + sort key? so in this example that would be ChannelId + Timestamp
<roberthl> Yes, that's right
<impermanence> Does anyone know how to get total current SSD (EBS) using across all ec2 instances for a particular account?
<chainz> you can't do that in the console?
<impermanence> chainz: can *you*?
<roberthl> aws ec2 describe-volumes --query Volumes[].Size --output text | tr '\t' '+' | bc
<chainz> when i log in, click ec2, then volumes on the left
<chainz> that doesn't seem right, roberthl
<impermanence> k, then how do you get total volume usage? not a volume. all volumes. arithmetic sum.
<chainz> oh, you want the total capacity usage
<chainz> i thought you just wanted a count of the volumes
<jordanl> so is it a bit cumbersome developing dynamodb apps when you need to provide so many components to fetch a single item? in a web app i guess you could construct a composite hash of the combined fields?
<chainz> roberthl's command might do that
<impermanence> I put "using" but I meant "usage"
<impermanence> sorry
<roberthl> impermanence: Do you have the aws cli? You can try the command I pasted
<impermanence> roberthl: yep.
<chainz> roberthl: is that final number in gb?
<roberthl> chainz: Yeah
<roberthl> It won't work if you have more than 500 volumes though
<impermanence> roberthl: 117937
<impermanence> looks pretty good.
<roberthl> 117 terabytes, wow.
<felixjet> when you create a new record set on Route 53, the Value for an A entry contains this text: "IPv4 address. Enter multiple addresses on separate lines."
<felixjet> what does this mean?
<felixjet> is it round-robin?
<roberthl> felixjet: An A record can point at multiple IP addresses (i.e. round-robin) and if you want that you put each IP on a separate line. But you can just point it at a single address.
<felixjet> but how does it resolve?
<felixjet> a random one from that list?
<felixjet> or do i have to create multiple A records?
<roberthl> Yes, the DNS resolver will select one from the list and return that
<felixjet> what about WRR? you can only specify a single weight value
<felixjet> not a weight for every IP
<roberthl> If you need to use that then you should create multiple A records
<felixjet> so every A record will have its weight
<felixjet> and multiple IPs can be used here too?
<chainz> yah, i thought 30tb was a lot heh
<roberthl> felixjet: Yes
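A weighted setup like the one being described would be two record sets sharing the same name, each with its own Weight and SetIdentifier, and each set can still hold several IPs. A hedged sketch of a ChangeResourceRecordSets change batch; the name, identifiers and IPs are placeholders:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "pool-a",
        "Weight": 70,
        "TTL": 60,
        "ResourceRecords": [{"Value": "192.0.2.10"}, {"Value": "192.0.2.11"}]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "pool-b",
        "Weight": 30,
        "TTL": 60,
        "ResourceRecords": [{"Value": "198.51.100.20"}]
      }
    }
  ]
}
```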
<impermanence> roberthl: in the time since I made that comment it's already up to 118TB.
<impermanence> roberthl: clusters!
<felixjet> roberthl, and to use round-robin i don't have to use CNAME?
<roberthl> felixjet: Don't fully understand the question, but you can't have multiple values for a CNAME so you cannot round-robin to a set of CNAMEs
<impermanence> roberthl: so...in terms of "volumes"...you're command is returning EBS (SSD), Magnetic, Ephemeral...
<impermanence> "your" sorry
<felixjet> the docs don't say anything about the value being multiple addresses for round-robin :/
<felixjet> in fact, it says that you have to use WRR
<felixjet> and assign an equal weight
<felixjet> according to http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
<felixjet> "Use the weighted routing policy when you have multiple resources that perform the same function (for example, web servers that serve the same website)"
<felixjet> "Weighted resource record sets let you associate multiple resources with a single DNS name. This can be useful for a variety of purposes, including load balancing and testing new versions of software."
<roberthl> Round-robin has existed in DNS since long before Route53 existed
<roberthl> impermanence: Correct, but you could query based on the volume type
<impermanence> roberthl: trying to watch our total disk expansion against our soft account limits...
<impermanence> roberthl: which we hit two weeks ago and brought down production
<roberthl> impermanence: aws ec2 describe-volumes --query "Volumes[?VolumeType == 'gp2'].Size" --output text | tr '\t' '+' | bc
<impermanence> roberthl: thank you, kind sir.
<roberthl> impermanence: If you are turning this into something more serious, I would use a Lambda to query the API using the SDK and write a CloudWatch metric. The CLI version will silently break once you exceed 500 volumes.
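A sketch of that SDK version: the boto3 paginator walks every describe_volumes page, so the total doesn't silently truncate at 500 volumes like the bare CLI call can. The metric namespace and name are assumptions:

```python
# Sketch of the paginated SDK approach roberthl suggests. The CloudWatch
# namespace and metric name are hypothetical.

def total_size_gib(pages, volume_type="gp2"):
    """Sum volume sizes (GiB) across describe_volumes result pages."""
    return sum(v["Size"]
               for page in pages
               for v in page["Volumes"]
               if v["VolumeType"] == volume_type)

def publish_total():
    import boto3  # imported here so total_size_gib is testable without boto3
    pages = boto3.client("ec2").get_paginator("describe_volumes").paginate()
    total = total_size_gib(pages)
    boto3.client("cloudwatch").put_metric_data(
        Namespace="Custom/EBS",  # assumed namespace
        MetricData=[{"MetricName": "TotalGp2SizeGiB",
                     "Value": total, "Unit": "Gigabytes"}])
    return total
```

Run on a schedule from Lambda, this gives a metric you can alarm on before hitting the account's EBS soft limit.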
<robert45> hey guys, I'm getting the following error: "robert is not authorized to perform: iam:ListRoles on resource:" does anyone know which policy name I have to enable for this?
<roberthl> robert45: IAMReadOnlyAccess
<robert45> roberthl thank you so much!
<robert45> have a great weekend guys
<felixjet> where can i get a Route53 Access Key Id?
<felixjet> security credentials page?
<felixjet> or using IAM?
<robert45> hi guys, quick question: I created scheduled EBS snapshots using a CloudWatch event, is it possible to send an email notification from within AWS after the EBS snapshot is completed?
<roberthl> robert45: Are you using Lambda?
<roberthl> felixjet: Yes, create a user in IAM and you have the option to generate an access key id for that user
<robert45> roberthl hi again! Nope, just a CloudWatch rule
<roberthl> robert45: OK, you can use the `createSnapshot` event described here to trigger an SNS message https://aws.amazon.com/blogs/aws/new-cloudwatch-events-for-ebs-snapshots/
<robert45> roberthl tx, I'm creating a new CloudWatch rule for createSnapshot although I don't have an event selector like that page shows
<felixjet> thanks robert
<roberthl> robert45: It looks like the console has changed, http://i.imgur.com/p0OTLKB.png
<robert45> roberthl that's weird! why am I not seeing that
<robert45> roberthl http://imgur.com/A9PBU9g
<roberthl> Do you not find "EC2" in the "Service Name" drop down?
<robert45> roberthl I found it! sorry. It's asking me to select a Target, do I need to choose SNS topic?
<robert45> SNS topic shows "no item"
<roberthl> Yes, you'll need to create one and create a subscription to your email address
<robert45> roberthl I think I got it, thanks so much again for your help
<robert45> bye guys
<Torgeir> What is an ECU in AWS actually? I see instances vary in the number of ECUs
<mehwork> How long should a new IAM role take to propagate to where i can use it? (I'm still getting the error "User <my-arn> is not authorized to assume IAM Role ..." 10 minutes later)
<mehwork> but it's my first time creating one, so i don't know if somehow i did something wrong or not, but i followed all the steps properly it seems
<mehwork> my arn is us east coast region but the s3 bucket i'm trying to COPY from is west coast
<kgirthof_> mehwork: are you trying to add it to an instance or a user?
<mehwork> an instance i think. I created a new IAM Role (from my normal amazon account) as a 'Redshift Service' and attached it to 'AmazonS3ReadOnlyAccess'
<CrunchyChewie> anyone using Troposphere with ECS and ALBs?
<mehwork> kgirthof_: like this https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-an-iam-role.html
<toastedpenguin> making sure I read this right....the most recent announcement of enhancements for EFS didn't provide Windows support yet, correct
<cochi> true
<mehwork> feels like something else is going wrong, it shouldn't take this long to work, should it
<ploshy> mehwork: I think their official SLA is up to like 45 minutes? Usually it's not more than 15, tho
<knew2this> I am attempting to create a dynamo db, first a table "client_credentials" with primary key "client_id", but then I need a hash of credentials. It needs to be like {client id: {credentials: [{"info", "info", "info"}]}}. I am having a hard time understanding the creation of dynamo databases. I understand how to manage them and their output, but not how to create them.
<ploshy> mehwork: but it looks like that error you're getting isn't a 'this role doesn't exist', it's a 'you don't have permissions to this role'. Do you have sts:AssumeRole on your IAM user?
<ploshy> I also think you might need a trust relationship on the role with your user
<ploshy> knew2this: are you using the cli or the console?
<knew2this> the console to create it
<ploshy> So what're you confused by?
<knew2this> When I create a table named "client credentials" and a primary key "client_id", i don't know how to add the hash of credentials.
<knew2this> We already have a database, but I don't have access to their aws, so i am trying to recreate my own version so I can work on it and modify it the way it needs to be modified
<knew2this> And I can't work on it or modify it til I match it up with the current version
<knew2this> I keep getting errors
<mehwork> ploshy: I'm not using an iam user, i just am using my regular amazon account to create the IAM role
<mehwork> maybe the issue is i'm somehow not really assigning it to my cluster, i'm not really sure how to
<ploshy> knew2this: so if you've created the table, you should be able to create an item. In the "Create item" modal, hit the plus button next to client_id and it'll ask you what kind of data to add. Choose map, and it'll be what you want.
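The same thing done from code rather than the console: a map attribute nested under the item. The table name matches the conversation, but the attribute shape is an assumption:

```python
# Sketch of storing a map of credentials under a client_id item, mirroring
# what ploshy describes in the console. The "entries" wrapper is hypothetical.

def make_item(client_id, credentials):
    """Build the item dict: a map attribute holding a list of credential maps."""
    return {"client_id": client_id,
            "credentials": {"entries": credentials}}

def put_client(client_id, credentials):
    import boto3  # imported here so make_item is testable without boto3
    table = boto3.resource("dynamodb").Table("client_credentials")
    # The resource API converts nested dicts/lists to DynamoDB maps/lists.
    table.put_item(Item=make_item(client_id, credentials))
```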
<mehwork> yeah, it looks like there's more i have to do that that other link didn't mention https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html
<mehwork> these docs are nice but i hate how they break stuff up and leave out vital info and don't always link you to what you need
<knew2this> ploshy: insert map?
<ploshy> knew2this: I think append, not insert
<ploshy> Not 100% sure what the difference is, admittedly
<knew2this> ploshy: thank you, that is exactly what I needed to do
<ploshy> knew2this: glad to help, hope it's smooth sailing
<kibibyte3> hi
<kibibyte3> how do i set an instance attribute?
<kibibyte3> is it the same as a metatag?
<roberthl> kibibyte3: What's the context of this? Do you have a link to docs?
<kibibyte3> roberthl, http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html
<kibibyte3> "You can add custom metadata to your container instances, known as attributes. Each attribute has a name and an optional string value. You can use the built-in attributes provided by Amazon ECS or define custom attributes."
<roberthl> Ah, I assumed this was EC2 - can't help with ECS
<kibibyte3> k
<manpearpig> hi, how do you guys deal with aws gateway timing out when executing a lambda command?
<manpearpig> the lambda event does take some time to execute, should i create a separate table for it to poll with results?